commit cc4802b975cdd8e92e23068be60e17a77e1713bb
Author: <>
Date:   Wed Dec 3 09:57:35 2025 +0000

    Deployed c8f3a84 with MkDocs version: 1.6.1
API Documentation

+

This API provides programmatic access to devices, events, sessions, metrics, network tools, and sync in NetAlertX. It is implemented as a REST and GraphQL server. All requests require authentication via an API token (the API_TOKEN setting) unless explicitly noted. For example, to authorize a GraphQL request, send an Authorization: Bearer API_TOKEN header, as in the example below:

+
curl 'http://host:GRAPHQL_PORT/graphql' \
+  -X POST \
+  -H 'Authorization: Bearer API_TOKEN' \
+  -H 'Content-Type: application/json' \
+  --data '{
+    "query": "query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }",
+    "variables": {
+      "options": {
+        "page": 1,
+        "limit": 10,
+        "sort": [{ "field": "devName", "order": "asc" }],
+        "search": "",
+        "status": "connected"
+      }
+    }
+  }'
+
+

The API server listens on 0.0.0.0:<GRAPHQL_PORT> with CORS enabled for all main endpoints.

+
+
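As a quick way to confirm the server is reachable and the token works, the GraphQL endpoint also answers plain GET requests with a short status message (see the GraphQL page); a minimal smoke test from a shell could look like this:

# Sketch: expects the short status message from GET /graphql
curl -s "http://<server_ip>:<GRAPHQL_PORT>/graphql" \
  -H "Authorization: Bearer <API_TOKEN>"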

Authentication

+

All endpoints require an API token provided in the HTTP headers:

+
Authorization: Bearer <API_TOKEN>
+
+

If the token is missing or invalid, the server will return:

+
{ "error": "Forbidden" }
+
+
+

Base URL

+
http://<server>:<GRAPHQL_PORT>/
+
+
+

Endpoints

+
+

Tip

+

When retrieving devices or settings, try the GraphQL API endpoint first, as it is read-optimized.

+
+
  • Device API Endpoints – Manage individual devices
  • Devices Collection – Bulk operations on multiple devices
  • Events – Device event logging and management
  • Sessions – Connection sessions and history
  • Settings – Application settings
  • Messaging – In-app notifications
  • Metrics – Prometheus metrics and per-device status
  • Network Tools – Utilities like Wake-on-LAN, traceroute, nslookup, nmap, and internet info
  • Online History – Online/offline device records
  • GraphQL – Advanced queries and filtering for Devices, Settings and Language Strings
  • Sync – Synchronization between multiple NetAlertX instances
  • Logs – Purging of logs and adding to the event execution queue for user-triggered events
  • DB query (⚠ Internal) – Low-level database access; use other endpoints if possible

See Testing for example requests and usage.

+
+

Notes

+
  • All endpoints enforce Bearer token authentication.
  • Errors return JSON with success: false and an error message.
  • GraphQL is available for advanced queries, while REST endpoints cover structured use cases.
  • Endpoints run on 0.0.0.0:<GRAPHQL_PORT> with CORS enabled.
  • Use consistent API tokens and node/plugin names when interacting with /sync to ensure data integrity.
Database Query API

+

The Database Query API provides direct, low-level access to the NetAlertX database. It allows read, write, update, and delete operations against tables, using base64-encoded SQL or structured parameters.

+
+

Warning

+

This API is primarily used internally to generate and render the application UI. These endpoints are low-level and powerful, and should be used with caution. Wherever possible, prefer the standard API endpoints. Invalid or unsafe queries can corrupt data. +If you need data in a specific format that is not already provided, please open an issue or pull request with a clear, broadly useful use case. This helps ensure new endpoints benefit the wider community rather than relying on raw database queries.

+
+
+

Authentication

+

All /dbquery/* endpoints require an API token in the HTTP headers:

+
Authorization: Bearer <API_TOKEN>
+
+

If the token is missing or invalid:

+
{ "error": "Forbidden" }
+
+
+

Endpoints

+

1. POST /dbquery/read

+

Execute a read-only SQL query (e.g., SELECT).

+

Request Body

+
{
+  "rawSql": "U0VMRUNUICogRlJPTSBERVZJQ0VT"   // base64 encoded SQL
+}
+
+

Decoded SQL:

+
SELECT * FROM Devices;
+
+

Response

+
{
+  "success": true,
+  "results": [
+    { "devMac": "AA:BB:CC:DD:EE:FF", "devName": "Phone" }
+  ]
+}
+
+

curl Example

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/dbquery/read" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "rawSql": "U0VMRUNUICogRlJPTSBERVZJQ0VT"
+  }'
+
+
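The rawSql value is simply the base64 encoding of the SQL statement, so the payload can be built on the fly. A minimal sketch, assuming GNU coreutils base64 (the -w0 flag disables line wrapping; on BSD/macOS it can be omitted):

# Build and send a /dbquery/read payload from a plain SQL string
SQL='SELECT devMac, devName FROM Devices'
RAW_SQL=$(printf '%s' "$SQL" | base64 -w0)

curl -s -X POST "http://<server_ip>:<GRAPHQL_PORT>/dbquery/read" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d "{\"rawSql\": \"$RAW_SQL\"}"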
+

2. POST /dbquery/update (safer than /dbquery/write)

+

Update rows in a table by columnName + id. /dbquery/update is parameterized to reduce the risk of SQL injection, while /dbquery/write executes raw SQL directly.

+

Request Body

+
{
+  "columnName": "devMac",
+  "id": ["AA:BB:CC:DD:EE:FF"],
+  "dbtable": "Devices",
+  "columns": ["devName", "devOwner"],
+  "values": ["Laptop", "Alice"]
+}
+
+

Response

+
{ "success": true, "updated_count": 1 }
+
+

curl Example

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/dbquery/update" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "columnName": "devMac",
+    "id": ["AA:BB:CC:DD:EE:FF"],
+    "dbtable": "Devices",
+    "columns": ["devName", "devOwner"],
+    "values": ["Laptop", "Alice"]
+  }'
+
+
+

3. POST /dbquery/write

+

Execute a write query (INSERT, UPDATE, DELETE).

+

Request Body

+
{
+  "rawSql": "SU5TRVJUIElOVE8gRGV2aWNlcyAoZGV2TWFjLCBkZXYgTmFtZSwgZGV2Rmlyc3RDb25uZWN0aW9uLCBkZXZMYXN0Q29ubmVjdGlvbiwgZGV2TGFzdElQKSBWQUxVRVMgKCc2QTpCQjo0Qzo1RDo2RTonLCAnVGVzdERldmljZScsICcyMDI1LTA4LTMwIDEyOjAwOjAwJywgJzIwMjUtMDgtMzAgMTI6MDA6MDAnLCAnMTAuMC4wLjEwJyk="
+}
+
+

Decoded SQL:

+
INSERT INTO Devices (devMac, devName, devFirstConnection, devLastConnection, devLastIP)
+VALUES ('6A:BB:4C:5D:6E', 'TestDevice', '2025-08-30 12:00:00', '2025-08-30 12:00:00', '10.0.0.10');
+
+

Response

+
{ "success": true, "affected_rows": 1 }
+
+

curl Example

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/dbquery/write" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "rawSql": "SU5TRVJUIElOVE8gRGV2aWNlcyAoZGV2TWFjLCBkZXYgTmFtZSwgZGV2Rmlyc3RDb25uZWN0aW9uLCBkZXZMYXN0Q29ubmVjdGlvbiwgZGV2TGFzdElQKSBWQUxVRVMgKCc2QTpCQjo0Qzo1RDo2RTonLCAnVGVzdERldmljZScsICcyMDI1LTA4LTMwIDEyOjAwOjAwJywgJzIwMjUtMDgtMzAgMTI6MDA6MDAnLCAnMTAuMC4wLjEwJyk="
+  }'
+
+
+

4. POST /dbquery/delete

+

Delete rows in a table by columnName + id.

+

Request Body

+
{
+  "columnName": "devMac",
+  "id": ["AA:BB:CC:DD:EE:FF"],
+  "dbtable": "Devices"
+}
+
+

Response

+
{ "success": true, "deleted_count": 1 }
+
+

curl Example

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/dbquery/delete" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "columnName": "devMac",
+    "id": ["AA:BB:CC:DD:EE:FF"],
+    "dbtable": "Devices"
+  }'
+
Device API Endpoints

+

Manage a single device by its MAC address. Operations include retrieval, updates, deletion, resetting properties, and copying data between devices. All endpoints require authorization via Bearer token.

+
+

1. Retrieve Device Details

+
  • GET /device/<mac>
    Fetch all details for a single device, including:
    • Computed status (devStatus) → On-line, Off-line, or Down
    • Session and event counts (devSessions, devEvents, devDownAlerts)
    • Presence hours (devPresenceHours)
    • Children devices (devChildrenDynamic) and NIC children (devChildrenNicsDynamic)
+

Special case: mac=new returns a template for a new device with default values.

+

Response (success):

+
{
+  "devMac": "AA:BB:CC:DD:EE:FF",
+  "devName": "Net - Huawei",
+  "devOwner": "Admin",
+  "devType": "Router",
+  "devVendor": "Huawei",
+  "devStatus": "On-line",
+  "devSessions": 12,
+  "devEvents": 5,
+  "devDownAlerts": 1,
+  "devPresenceHours": 32,
+  "devChildrenDynamic": [...],
+  "devChildrenNicsDynamic": [...],
+  ...
+}
+
+

Error Responses:

+
    +
  • Device not found → HTTP 404
  • +
  • Unauthorized → HTTP 403
  • +
+
+

2. Update Device Fields

+
    +
  • POST /device/<mac> + Create or update a device record.
  • +
+

Request Body:

+
{
+  "devName": "New Device",
+  "devOwner": "Admin",
+  "createNew": true
+}
+
+

Behavior:

+
    +
  • If createNew=true → creates a new device
  • +
  • Otherwise → updates existing device fields
  • +
+

Response:

+
{
+  "success": true
+}
+
+

Error Responses:

+
    +
  • Unauthorized → HTTP 403
  • +
+
+

3. Delete a Device

+
    +
  • DELETE /device/<mac>/delete + Deletes the device with the given MAC.
  • +
+

Response:

+
{
+  "success": true
+}
+
+

Error Responses:

+
    +
  • Unauthorized → HTTP 403
  • +
+
+

4. Delete All Events for a Device

+
    +
  • DELETE /device/<mac>/events/delete + Removes all events associated with a device.
  • +
+

Response:

+
{
+  "success": true
+}
+
+
+

5. Reset Device Properties

+
    +
  • POST /device/<mac>/reset-props + Resets the device's custom properties to default values.
  • +
+

Request Body: Optional JSON for additional parameters.

+

Response:

+
{
+  "success": true
+}
+
+
+

6. Copy Device Data

+
    +
  • POST /device/copy + Copy all data from one device to another. If a device exists with macTo, it is replaced.
  • +
+

Request Body:

+
{
+  "macFrom": "AA:BB:CC:DD:EE:FF",
+  "macTo": "11:22:33:44:55:66"
+}
+
+

Response:

+
{
+  "success": true,
+  "message": "Device copied from AA:BB:CC:DD:EE:FF to 11:22:33:44:55:66"
+}
+
+

Error Responses:

+
    +
  • Missing macFrom or macTo → HTTP 400
  • +
  • Unauthorized → HTTP 403
  • +
+
+

7. Update a Single Column

+
    +
  • POST /device/<mac>/update-column + Update one specific column for a device.
  • +
+

Request Body:

+
{
+  "columnName": "devName",
+  "columnValue": "Updated Device Name"
+}
+
+

Response (success):

+
{
+  "success": true
+}
+
+

Error Responses:

+
    +
  • Device not found → HTTP 404
  • +
  • Missing columnName or columnValue → HTTP 400
  • +
  • Unauthorized → HTTP 403
  • +
+
+

Example curl Requests

+

Get Device Details:

+
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF" \
+  -H "Authorization: Bearer <API_TOKEN>"
+
+
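To retrieve the new-device template mentioned above (the mac=new special case), the same endpoint can be called with new in place of the MAC:

curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/device/new" \
  -H "Authorization: Bearer <API_TOKEN>"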

Update Device Fields:

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Content-Type: application/json" \
+  --data '{"devName": "New Device Name"}'
+
+

Delete Device:

+
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF/delete" \
+  -H "Authorization: Bearer <API_TOKEN>"
+
+

Copy Device Data:

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/device/copy" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Content-Type: application/json" \
+  --data '{"macFrom":"AA:BB:CC:DD:EE:FF","macTo":"11:22:33:44:55:66"}'
+
+

Update Single Column:

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF/update-column" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Content-Type: application/json" \
+  --data '{"columnName":"devName","columnValue":"Updated Device"}'
+
Devices Collection API Endpoints

+

The Devices Collection API provides operations to retrieve, manage, import/export, and filter devices in bulk. All endpoints require authorization via Bearer token.

+
+

Endpoints

+

1. Get All Devices

+
    +
  • GET /devices + Retrieves all devices from the database.
  • +
+

Response (success):

+
{
+  "success": true,
+  "devices": [
+    {
+      "devName": "Net - Huawei",
+      "devMAC": "AA:BB:CC:DD:EE:FF",
+      "devIP": "192.168.1.1",
+      "devType": "Router",
+      "devFavorite": 0,
+      "devStatus": "online"
+    },
+    ...
+  ]
+}
+
+

Error Responses:

+
    +
  • Unauthorized → HTTP 403
  • +
+
+

2. Delete Devices by MAC

+
    +
  • DELETE /devices + Deletes devices by MAC address. Supports exact matches or wildcard *.
  • +
+

Request Body:

+
{
+  "macs": ["AA:BB:CC:DD:EE:FF", "11:22:33:*"]
+}
+
+

Behavior:

+
    +
  • If macs is omitted or null → deletes all devices.
  • +
  • Wildcards * match multiple devices.
  • +
+

Response:

+
{
+  "success": true,
+  "deleted_count": 5
+}
+
+

Error Responses:

+
    +
  • Unauthorized → HTTP 403
  • +
+
+

3. Delete Devices with Empty MACs

+
    +
  • DELETE /devices/empty-macs + Removes all devices where MAC address is null or empty.
  • +
+

Response:

+
{
+  "success": true,
+  "deleted": 3
+}
+
+
+

4. Delete Unknown Devices

+
    +
  • DELETE /devices/unknown + Deletes devices with names marked as (unknown) or (name not found).
  • +
+

Response:

+
{
+  "success": true,
+  "deleted": 2
+}
+
+
+

5. Export Devices

+
    +
  • GET /devices/export or /devices/export/<format> + Exports all devices in CSV (default) or JSON format.
  • +
+

Query Parameter / URL Parameter:

+
    +
  • format (optional) → csv (default) or json
  • +
+

CSV Response:

+
    +
  • Returns as a downloadable CSV file: Content-Disposition: attachment; filename=devices.csv
  • +
+

JSON Response:

+
{
+  "data": [
+    { "devName": "Net - Huawei", "devMAC": "AA:BB:CC:DD:EE:FF", ... },
+    ...
+  ],
+  "columns": ["devName", "devMAC", "devIP", "devType", "devFavorite", "devStatus"]
+}
+
+

Error Responses:

+
    +
  • Unsupported format → HTTP 400
  • +
+
+

6. Import Devices from CSV

+
    +
  • POST /devices/import + Imports devices from an uploaded CSV or base64-encoded CSV content.
  • +
+

Request Body (multipart file or JSON with content field):

+
{
+  "content": "<base64-encoded CSV content>"
+}
+
+

Response:

+
{
+  "success": true,
+  "inserted": 25,
+  "skipped_lines": [3, 7]
+}
+
+

Error Responses:

+
    +
  • Missing file or content → HTTP 400 / 404
  • +
  • CSV malformed → HTTP 400
  • +
+
+

7. Get Device Totals

+
    +
  • GET /devices/totals + Returns counts of devices by various categories.
  • +
+

Response:

+
[ 
+  120,    // Total devices
+  85,     // Connected
+  5,      // Favorites
+  10,     // New
+  8,      // Down
+  12      // Archived
+]
+
+

Order: [all, connected, favorites, new, down, archived]

+
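Since the response is a bare array in the order shown above, a small jq filter can turn it into labelled fields (a sketch, assuming jq is installed):

curl -s "http://<server_ip>:<GRAPHQL_PORT>/devices/totals" \
  -H "Authorization: Bearer <API_TOKEN>" |
jq '{all: .[0], connected: .[1], favorites: .[2], new: .[3], down: .[4], archived: .[5]}'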
+

8. Get Devices by Status

+
    +
  • GET /devices/by-status?status=<status> + Returns devices filtered by status.
  • +
+

Query Parameter:

+
    +
  • status → Supported values: online, offline, down, archived, favorites, new, my
  • +
  • If omitted, returns all devices.
  • +
+

Response (success):

+
[
+  { "id": "AA:BB:CC:DD:EE:FF", "title": "Net - Huawei", "favorite": 0 },
+  { "id": "11:22:33:44:55:66", "title": "★ USG Firewall", "favorite": 1 }
+]
+
+

If devFavorite=1, the title is prepended with a star (★).

+
+

Example curl Requests

+

Get All Devices:

+
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/devices" \
+  -H "Authorization: Bearer <API_TOKEN>"
+
+

Delete Devices by MAC:

+
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/devices" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Content-Type: application/json" \
+  --data '{"macs":["AA:BB:CC:DD:EE:FF","11:22:33:*"]}'
+
+

Export Devices CSV:

+
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/devices/export?format=csv" \
+  -H "Authorization: Bearer <API_TOKEN>"
+
+

Import Devices from CSV:

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/devices/import" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -F "file=@devices.csv"
+
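If a multipart upload is not convenient, the same import can be done with the JSON content field described above; a sketch assuming GNU base64 (drop -w0 on BSD/macOS):

# Embed the CSV as base64 in the request body
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/devices/import" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  --data "{\"content\": \"$(base64 -w0 devices.csv)\"}"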
+

Get Devices by Status:

+
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/devices/by-status?status=online" \
+  -H "Authorization: Bearer <API_TOKEN>"
+
Events API Endpoints

+

The Events API provides access to device event logs, allowing creation, retrieval, deletion, and summary of events over time.

+
+

Endpoints

+

1. Create Event

+
    +
  • POST /events/create/<mac> + Create an event for a device identified by its MAC address.
  • +
+

Request Body (JSON):

+
{
+  "ip": "192.168.1.10",
+  "event_type": "Device Down",
+  "additional_info": "Optional info about the event",
+  "pending_alert": 1,
+  "event_time": "2025-08-24T12:00:00Z"
+}
+
+
Parameters:
  • ip (string, optional): IP address of the device
  • event_type (string, optional): Type of event (default "Device Down")
  • additional_info (string, optional): Extra information
  • pending_alert (int, optional): 1 if alert email is pending (default 1)
  • event_time (ISO datetime, optional): Event timestamp; defaults to current time
+

Response (JSON):

+
{
+  "success": true,
+  "message": "Event created for 00:11:22:33:44:55"
+}
+
+
+

2. Get Events

+
    +
  • GET /events + Retrieve all events, optionally filtered by MAC address:
  • +
+
/events?mac=<mac>
+
+

Response:

+
{
+  "success": true,
+  "events": [
+    {
+      "eve_MAC": "00:11:22:33:44:55",
+      "eve_IP": "192.168.1.10",
+      "eve_DateTime": "2025-08-24T12:00:00Z",
+      "eve_EventType": "Device Down",
+      "eve_AdditionalInfo": "",
+      "eve_PendingAlertEmail": 1
+    }
+  ]
+}
+
+
+

3. Delete Events

+
    +
  • DELETE /events/<mac> → Delete events for a specific MAC
  • +
  • DELETE /events → Delete all events
  • +
  • DELETE /events/<days> → Delete events older than N days
  • +
+

Response:

+
{
+  "success": true,
+  "message": "Deleted events older than <days> days"
+}
+
+
+

4. Event Totals Over a Period

+
    +
  • GET /sessions/totals?period=<period> + Return event and session totals over a given period.
  • +
+

Query Parameters:

+ + + + + + + + + + + + + +
| Parameter | Description |
|-----------|-------------|
| period | Time period for totals, e.g., "7 days", "1 month", "1 year", "100 years" |
+

Sample Response (JSON Array):

+
[120, 85, 5, 10, 3, 7]
+
+

Meaning of Values:

+
  1. Total events in the period
  2. Total sessions
  3. Missing sessions
  4. Voided events (eve_EventType LIKE 'VOIDED%')
  5. New device events (eve_EventType LIKE 'New Device')
  6. Device down events (eve_EventType LIKE 'Device Down')
+
+

Notes

+
    +
  • All endpoints require authorization (Bearer token). Unauthorized requests return:
  • +
+
{ "error": "Forbidden" }
+
+
  • Events are stored in the Events table with the following fields:
    eve_MAC, eve_IP, eve_DateTime, eve_EventType, eve_AdditionalInfo, eve_PendingAlertEmail.
  • Event creation automatically logs activity for debugging.
+
+

Example curl Requests

+

Create Event:

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/events/create/00:11:22:33:44:55" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Content-Type: application/json" \
+  --data '{
+    "ip": "192.168.1.10",
+    "event_type": "Device Down",
+    "additional_info": "Power outage",
+    "pending_alert": 1
+  }'
+
+

Get Events for a Device:

+
curl "http://<server_ip>:<GRAPHQL_PORT>/events?mac=00:11:22:33:44:55" \
+  -H "Authorization: Bearer <API_TOKEN>"
+
+

Delete Events Older Than 30 Days:

+
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/events/30" \
+  -H "Authorization: Bearer <API_TOKEN>"
+
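To delete all events for a single device instead, use the MAC variant of the same endpoint:

curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/events/00:11:22:33:44:55" \
  -H "Authorization: Bearer <API_TOKEN>"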
+

Get Event Totals for 7 Days:

+
curl "http://<server_ip>:<GRAPHQL_PORT>/sessions/totals?period=7 days" \
+  -H "Authorization: Bearer <API_TOKEN>"
+
GraphQL API Endpoint

+

GraphQL queries are read-optimized for speed. Data may be slightly out of date until the file system cache refreshes. The GraphQL endpoints allow you to access the following objects:

+
    +
  • Devices
  • +
  • Settings
  • +
  • Language Strings (LangStrings)
  • +
+

Endpoints

+
  • GET /graphql – Returns a simple status message (useful for browser or debugging).
  • POST /graphql – Execute GraphQL queries against the devicesSchema.
+
+

Devices Query

+

Sample Query

+
query GetDevices($options: PageQueryOptionsInput) {
+  devices(options: $options) {
+    devices {
+      rowid
+      devMac
+      devName
+      devOwner
+      devType
+      devVendor
+      devLastConnection
+      devStatus
+    }
+    count
+  }
+}
+
+

Query Parameters

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Parameter | Description |
|-----------|-------------|
| page | Page number of results to fetch. |
| limit | Number of results per page. |
| sort | Sorting options (field = field name, order = asc or desc). |
| search | Term to filter devices. |
| status | Filter devices by status: my_devices, connected, favorites, new, down, archived, offline. |
| filters | Additional filters (array of { filterColumn, filterValue }); see the filtered example below the sample response. |
+
+

curl Example

+
curl 'http://host:GRAPHQL_PORT/graphql' \
+  -X POST \
+  -H 'Authorization: Bearer API_TOKEN' \
+  -H 'Content-Type: application/json' \
+  --data '{
+    "query": "query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }",
+    "variables": {
+      "options": {
+        "page": 1,
+        "limit": 10,
+        "sort": [{ "field": "devName", "order": "asc" }],
+        "search": "",
+        "status": "connected"
+      }
+    }
+  }'
+
+
+

Sample Response

+
{
+  "data": {
+    "devices": {
+      "devices": [
+        {
+          "rowid": 1,
+          "devMac": "00:11:22:33:44:55",
+          "devName": "Device 1",
+          "devOwner": "Owner 1",
+          "devType": "Type 1",
+          "devVendor": "Vendor 1",
+          "devLastConnection": "2025-01-01T00:00:00Z",
+          "devStatus": "connected"
+        }
+      ],
+      "count": 1
+    }
+  }
+}
+
+
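The filters option accepts an array of { filterColumn, filterValue } pairs; a hedged example that narrows the result to routers (presumably any device column can be used as filterColumn):

curl 'http://host:GRAPHQL_PORT/graphql' \
  -X POST \
  -H 'Authorization: Bearer API_TOKEN' \
  -H 'Content-Type: application/json' \
  --data '{
    "query": "query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { devMac devName devType devStatus } count } }",
    "variables": {
      "options": {
        "page": 1,
        "limit": 10,
        "sort": [{ "field": "devName", "order": "asc" }],
        "search": "",
        "status": "my_devices",
        "filters": [{ "filterColumn": "devType", "filterValue": "Router" }]
      }
    }
  }'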
+

Settings Query

+

The settings query provides access to NetAlertX configuration stored in the settings table.

+

Sample Query

+
query GetSettings {
+  settings {
+    settings {
+      setKey
+      setName
+      setDescription
+      setType
+      setOptions
+      setGroup
+      setValue
+      setEvents
+      setOverriddenByEnv
+    }
+    count
+  }
+}
+
+

Schema Fields

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Field | Type | Description |
|-------|------|-------------|
| setKey | String | Unique key identifier for the setting. |
| setName | String | Human-readable name. |
| setDescription | String | Description or documentation of the setting. |
| setType | String | Data type (string, int, bool, json, etc.). |
| setOptions | String | Available options (for dropdown/select-type settings). |
| setGroup | String | Group/category the setting belongs to. |
| setValue | String | Current value of the setting. |
| setEvents | String | Events or triggers related to this setting. |
| setOverriddenByEnv | Boolean | Whether the setting is overridden by an environment variable at runtime. |
+
+

curl Example

+
curl 'http://host:GRAPHQL_PORT/graphql' \
+  -X POST \
+  -H 'Authorization: Bearer API_TOKEN' \
+  -H 'Content-Type: application/json' \
+  --data '{
+    "query": "query GetSettings { settings { settings { setKey setName setDescription setType setOptions setGroup setValue setEvents setOverriddenByEnv } count } }"
+  }'
+
+
+

Sample Response

+
{
+  "data": {
+    "settings": {
+      "settings": [
+        {
+          "setKey": "UI_MY_DEVICES",
+          "setName": "My Devices Filter",
+          "setDescription": "Defines which statuses to include in the 'My Devices' view.",
+          "setType": "list",
+          "setOptions": "[\"online\",\"new\",\"down\",\"offline\",\"archived\"]",
+          "setGroup": "UI",
+          "setValue": "[\"online\",\"new\"]",
+          "setEvents": null,
+          "setOverriddenByEnv": false
+        },
+        {
+          "setKey": "NETWORK_DEVICE_TYPES",
+          "setName": "Network Device Types",
+          "setDescription": "Types of devices considered as network infrastructure.",
+          "setType": "list",
+          "setOptions": "[\"Router\",\"Switch\",\"AP\"]",
+          "setGroup": "Network",
+          "setValue": "[\"Router\",\"Switch\"]",
+          "setEvents": null,
+          "setOverriddenByEnv": true
+        }
+      ],
+      "count": 2
+    }
+  }
+}
+
+
+

LangStrings Query

+

The LangStrings query provides access to localized strings. Supports filtering by langCode and langStringKey. If the requested string is missing or empty, you can optionally fallback to en_us.

+

Sample Query

+
query GetLangStrings {
+  langStrings(langCode: "de_de", langStringKey: "settings_other_scanners") {
+    langStrings {
+      langCode
+      langStringKey
+      langStringText
+    }
+    count
+  }
+}
+
+

Query Parameters

+ + + + + + + + + + + + + + + + + + + + + + + + + +
| Parameter | Type | Description |
|-----------|------|-------------|
| langCode | String | Optional language code (e.g., en_us, de_de). If omitted, all languages are returned. |
| langStringKey | String | Optional string key to retrieve a specific entry. |
| fallback_to_en | Boolean | Optional (default true). If true, empty or missing strings fallback to en_us. |
+

curl Example

+
curl 'http://host:GRAPHQL_PORT/graphql' \
+  -X POST \
+  -H 'Authorization: Bearer API_TOKEN' \
+  -H 'Content-Type: application/json' \
+  --data '{
+    "query": "query GetLangStrings { langStrings(langCode: \"de_de\", langStringKey: \"settings_other_scanners\") { langStrings { langCode langStringKey langStringText } count } }"
+  }'
+
+

Sample Response

+
{
+  "data": {
+    "langStrings": {
+      "count": 1,
+      "langStrings": [
+        {
+          "langCode": "de_de",
+          "langStringKey": "settings_other_scanners",
+          "langStringText": "Other, non-device scanner plugins that are currently enabled."  // falls back to en_us if empty
+        }
+      ]
+    }
+  }
+}
+
+
+

Notes

+
    +
  • Device, settings, and LangStrings queries can be combined in one request since GraphQL supports batching (see the combined query sketch after this list).
  • +
  • The fallback_to_en feature ensures UI always has a value even if a translation is missing.
  • +
  • Data is cached in memory per JSON file; changes to language or plugin files will only refresh after the cache detects a file modification.
  • +
  • The setOverriddenByEnv flag helps identify setting values that are locked at container runtime.
  • +
  • The schema is read-only — updates must be performed through other APIs or configuration management. See the other API endpoints for details.
  • +
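A combined request might look like the sketch below; only the count fields are selected here to keep it short, and the exact field arguments may need adjusting to your schema version:

curl 'http://host:GRAPHQL_PORT/graphql' \
  -X POST \
  -H 'Authorization: Bearer API_TOKEN' \
  -H 'Content-Type: application/json' \
  --data '{
    "query": "query Combined($options: PageQueryOptionsInput) { devices(options: $options) { count } settings { count } }",
    "variables": { "options": { "page": 1, "limit": 1 } }
  }'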
Logs API Endpoints

+

Purge application log files stored under /app/log and manage the execution queue. These endpoints are primarily used for maintenance tasks such as clearing accumulated logs or queuing system actions without restarting the container.

+

Only specific, pre-approved log files can be purged for security and stability reasons.

+
+

Delete (Purge) a Log File

+
    +
  • DELETE /logs?file=<log_file> → Purge the contents of an allowed log file.
  • +
+

Query Parameter:

+
    +
  • file → The name of the log file to purge (e.g., app.log, stdout.log)
  • +
+

Allowed Files:

+
app.log
+app_front.log
+IP_changes.log
+stdout.log
+stderr.log
+app.php_errors.log
+execution_queue.log
+db_is_locked.log
+
+

Authorization: +Requires a valid API token in the Authorization header.

+
+

curl Example (Success)

+
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=app.log' \
+  -H 'Authorization: Bearer <API_TOKEN>' \
+  -H 'Accept: application/json'
+
+

Response:

+
{
+  "success": true,
+  "message": "[clean_log] File app.log purged successfully"
+}
+
+
+

curl Example (Not Allowed)

+
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=not_allowed.log' \
+  -H 'Authorization: Bearer <API_TOKEN>' \
+  -H 'Accept: application/json'
+
+

Response:

+
{
+  "success": false,
+  "message": "[clean_log] File not_allowed.log is not allowed to be purged"
+}
+
+
+

curl Example (Unauthorized)

+
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=app.log' \
+  -H 'Accept: application/json'
+
+

Response:

+
{
+  "error": "Forbidden"
+}
+
+
+

Add an Action to the Execution Queue

+
    +
  • POST /logs/add-to-execution-queue → Add a system action to the execution queue.
  • +
+

Request Body (JSON):

+
{
+  "action": "update_api|devices"
+}
+
+

Authorization: +Requires a valid API token in the Authorization header.

+
+

curl Example (Success)

+

The example below updates the API cache for Devices:

+
curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \
+  -H 'Authorization: Bearer <API_TOKEN>' \
+  -H 'Content-Type: application/json' \
+  --data '{"action": "update_api|devices"}'
+
+

Response:

+
{
+  "success": true,
+  "message": "[UserEventsQueueInstance] Action \"update_api|devices\" added to the execution queue."
+}
+
+
+

curl Example (Missing Parameter)

+
curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \
+  -H 'Authorization: Bearer <API_TOKEN>' \
+  -H 'Content-Type: application/json' \
+  --data '{}'
+
+

Response:

+
{
+  "success": false,
+  "message": "Missing parameters",
+  "error": "Missing required 'action' field in JSON body"
+}
+
+
+

curl Example (Unauthorized)

+
curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \
+  -H 'Content-Type: application/json' \
+  --data '{"action": "update_api|devices"}'
+
+

Response:

+
{
+  "error": "Forbidden"
+}
+
+
+

Notes

+
    +
  • Only predefined files in /app/log can be purged — arbitrary paths are not permitted.
  • When a log file is purged:
    • Its content is replaced with a short marker text: "File manually purged".
    • A backend log entry is created via mylog().
    • A frontend notification is generated via write_notification().
  • Execution queue actions are appended to execution_queue.log and can be processed asynchronously by background tasks or workflows.
  • +
  • Unauthorized or invalid attempts are safely logged and rejected.
  • +
  • For advanced log retrieval, analysis, or structured querying, use the frontend log viewer.
  • +
  • Always ensure that sensitive or production logs are handled carefully — purging cannot be undone.
  • +
In-app Notifications API

+

Manage in-app notifications for users. Notifications can be written, retrieved, marked as read, or deleted.

+
+

Write Notification

+
    +
  • POST /messaging/in-app/write → Create a new in-app notification.
  • +
+

Request Body:

+

{
  "content": "This is a test notification",
  "level": "alert"    // optional, ["interrupt","info","alert"], default: "alert"
}

+

Response:

+

{
  "success": true
}

+

curl Example

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/write" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "content": "This is a test notification",
+    "level": "alert"
+  }'
+
+
+

Get Unread Notifications

+
    +
  • GET /messaging/in-app/unread → Retrieve all unread notifications.
  • +
+

Response:

+

[
  {
    "timestamp": "2025-10-10T12:34:56",
    "guid": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
    "read": 0,
    "level": "alert",
    "content": "This is a test notification"
  }
]

+

curl Example

+
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/unread" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json"
+
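To extract just the GUIDs, e.g. to feed them into the per-notification read or delete endpoints below, a jq one-liner works (assuming jq is installed):

curl -s -X GET "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/unread" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Accept: application/json" |
jq -r '.[].guid'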
+
+

Mark All Notifications as Read

+
    +
  • POST /messaging/in-app/read/all → Mark all notifications as read.
  • +
+

Response:

+

{
  "success": true
}

+

curl Example

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/read/all" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json"
+
+
+

Mark Single Notification as Read

+
    +
  • POST /messaging/in-app/read/<guid> → Mark a single notification as read using its GUID.
  • +
+

Response (success):

+

{
  "success": true
}

+

Response (failure):

+

{
  "success": false,
  "error": "Notification not found"
}

+

curl Example

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/read/f47ac10b-58cc-4372-a567-0e02b2c3d479" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json"
+
+
+

Delete All Notifications

+
    +
  • DELETE /messaging/in-app/delete → Remove all notifications from the system.
  • +
+

Response:

+

{
  "success": true
}

+

curl Example

+
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/delete" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json"
+
+
+

Delete Single Notification

+
    +
  • DELETE /messaging/in-app/delete/<guid> → Remove a single notification by its GUID.
  • +
+

Response (success):

+

{
  "success": true
}

+

Response (failure):

+

{
  "success": false,
  "error": "Notification not found"
}

+

curl Example

+
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/delete/f47ac10b-58cc-4372-a567-0e02b2c3d479" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json"
+
Metrics API Endpoint

+

The /metrics endpoint exposes Prometheus-compatible metrics for NetAlertX, including aggregate device counts and per-device status.

+
+

Endpoint Details

+
    +
  • GET /metrics → Returns metrics in plain text.
  • +
  • Host: NetAlertX server
  • +
  • Port: As configured in GRAPHQL_PORT (default: 20212)
  • +
+
+

Example Output

+
netalertx_connected_devices 31
+netalertx_offline_devices 54
+netalertx_down_devices 0
+netalertx_new_devices 0
+netalertx_archived_devices 31
+netalertx_favorite_devices 2
+netalertx_my_devices 54
+
+netalertx_device_status{device="Net - Huawei", mac="Internet", ip="1111.111.111.111", vendor="None", first_connection="2021-01-01 00:00:00", last_connection="2025-08-04 17:57:00", dev_type="Router", device_status="Online"} 1
+netalertx_device_status{device="Net - USG", mac="74:ac:74:ac:74:ac", ip="192.168.1.1", vendor="Ubiquiti Networks Inc.", first_connection="2022-02-12 22:05:00", last_connection="2025-06-07 08:16:49", dev_type="Firewall", device_status="Archived"} 1
+netalertx_device_status{device="Raspberry Pi 4 LAN", mac="74:ac:74:ac:74:74", ip="192.168.1.9", vendor="Raspberry Pi Trading Ltd", first_connection="2022-02-12 22:05:00", last_connection="2025-08-04 17:57:00", dev_type="Singleboard Computer (SBC)", device_status="Online"} 1
+...
+
+
+

Metrics Overview

+

1. Aggregate Device Counts

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Metric | Description |
|--------|-------------|
| netalertx_connected_devices | Devices currently connected |
| netalertx_offline_devices | Devices currently offline |
| netalertx_down_devices | Down/unreachable devices |
| netalertx_new_devices | Recently detected devices |
| netalertx_archived_devices | Archived devices |
| netalertx_favorite_devices | User-marked favorites |
| netalertx_my_devices | Devices associated with the current user |
+
+

2. Per-Device Status

+

Metric: netalertx_device_status. Each device has labels:

+
    +
  • device: friendly name
  • +
  • mac: MAC address (or placeholder)
  • +
  • ip: last recorded IP
  • +
  • vendor: manufacturer or "None"
  • +
  • first_connection: timestamp of first detection
  • +
  • last_connection: most recent contact
  • +
  • dev_type: device type/category
  • +
  • device_status: current status (Online, Offline, Archived, Down, …)
  • +
+

Metric value is always 1 (presence indicator).

+
+

Querying with curl

+
curl 'http://<server_ip>:<GRAPHQL_PORT>/metrics' \
+  -H 'Authorization: Bearer <API_TOKEN>' \
+  -H 'Accept: text/plain'
+
+

Replace placeholders:

+
    +
  • <server_ip> – NetAlertX host IP/hostname
  • +
  • <GRAPHQL_PORT> – configured port (default 20212)
  • +
  • <API_TOKEN> – your API token
  • +
+
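Because each device is exported as its own labelled netalertx_device_status sample, a single device can be checked straight from the shell (a grep sketch; in Prometheus you would filter on the labels instead):

curl -s 'http://<server_ip>:<GRAPHQL_PORT>/metrics' \
  -H 'Authorization: Bearer <API_TOKEN>' \
  -H 'Accept: text/plain' |
grep 'netalertx_device_status{device="Net - USG"'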
+

Prometheus Scraping Configuration

+
scrape_configs:
+  - job_name: 'netalertx'
+    metrics_path: /metrics
+    scheme: http
+    scrape_interval: 60s
+    static_configs:
+      - targets: ['<server_ip>:<GRAPHQL_PORT>']
+    authorization:
+      type: Bearer
+      credentials: <API_TOKEN>
+
+
+

Grafana Dashboard Template

+

Sample template JSON: Download

Net Tools API Endpoints

+

The Net Tools API provides network diagnostic utilities, including Wake-on-LAN, traceroute, speed testing, DNS resolution, nmap scanning, and internet connection information.

+

All endpoints require authorization via Bearer token.

+
+

Endpoints

+

1. Wake-on-LAN

+
    +
  • POST /nettools/wakeonlan + Sends a Wake-on-LAN packet to wake a device.
  • +
+

Request Body (JSON):

+
{
+  "devMac": "AA:BB:CC:DD:EE:FF"
+}
+
+

Response (success):

+
{
+  "success": true,
+  "message": "WOL packet sent",
+  "output": "Sent magic packet to AA:BB:CC:DD:EE:FF"
+}
+
+

Error Responses:

+
    +
  • Invalid MAC address → HTTP 400
  • +
  • Command failure → HTTP 500
  • +
+
+

2. Traceroute

+
    +
  • POST /nettools/traceroute + Performs a traceroute to a specified IP address.
  • +
+

Request Body:

+
{
+  "devLastIP": "192.168.1.1"
+}
+
+

Response (success):

+
{
+  "success": true,
+  "output": "traceroute output as string"
+}
+
+

Error Responses:

+
    +
  • Invalid IP → HTTP 400
  • +
  • Traceroute command failure → HTTP 500
  • +
+
+

3. Speedtest

+
    +
  • GET /nettools/speedtest + Runs an internet speed test using speedtest-cli.
  • +
+

Response (success):

+
{
+  "success": true,
+  "output": [
+    "Ping: 15 ms",
+    "Download: 120.5 Mbit/s",
+    "Upload: 22.4 Mbit/s"
+  ]
+}
+
+

Error Responses:

+
    +
  • Command failure → HTTP 500
  • +
+
+

4. DNS Lookup (nslookup)

+
    +
  • POST /nettools/nslookup + Resolves an IP address or hostname using nslookup.
  • +
+

Request Body:

+
{
+  "devLastIP": "8.8.8.8"
+}
+
+

Response (success):

+
{
+  "success": true,
+  "output": [
+    "Server: 8.8.8.8",
+    "Address: 8.8.8.8#53",
+    "Name: google-public-dns-a.google.com"
+  ]
+}
+
+

Error Responses:

+
    +
  • Missing or invalid devLastIP → HTTP 400
  • +
  • Command failure → HTTP 500
  • +
+
+

5. Nmap Scan

+
    +
  • POST /nettools/nmap + Runs an nmap scan on a target IP address or range.
  • +
+

Request Body:

+
{
+  "scan": "192.168.1.0/24",
+  "mode": "fast"
+}
+
+

Supported Modes:

+ + + + + + + + + + + + + + + + + + + + + + + + + +
| Mode | nmap Arguments |
|------|----------------|
| fast | -F |
| normal | (default) |
| detail | -A |
| skipdiscovery | -Pn |
+

Response (success):

+
{
+  "success": true,
+  "mode": "fast",
+  "ip": "192.168.1.0/24",
+  "output": [
+    "Starting Nmap 7.91",
+    "Host 192.168.1.1 is up",
+    "... scan results ..."
+  ]
+}
+
+

Error Responses:

+
    +
  • Invalid IP → HTTP 400
  • +
  • Invalid mode → HTTP 400
  • +
  • Command failure → HTTP 500
  • +
+
+

6. Internet Connection Info

+
    +
  • GET /nettools/internetinfo + Fetches public internet connection information using ipinfo.io.
  • +
+

Response (success):

+
{
+  "success": true,
+  "output": "IP: 203.0.113.5 City: Sydney Country: AU Org: Example ISP"
+}
+
+

Error Responses:

+
    +
  • Failed request or empty response → HTTP 500
  • +
+
+

Example curl Requests

+

Wake-on-LAN:

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/nettools/wakeonlan" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Content-Type: application/json" \
+  --data '{"devMac":"AA:BB:CC:DD:EE:FF"}'
+
+

Traceroute:

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/nettools/traceroute" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Content-Type: application/json" \
+  --data '{"devLastIP":"192.168.1.1"}'
+
+

Speedtest:

+
curl "http://<server_ip>:<GRAPHQL_PORT>/nettools/speedtest" \
+  -H "Authorization: Bearer <API_TOKEN>"
+
+

Nslookup:

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/nettools/nslookup" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Content-Type: application/json" \
+  --data '{"devLastIP":"8.8.8.8"}'
+
+

Nmap Scan:

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/nettools/nmap" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Content-Type: application/json" \
+  --data '{"scan":"192.168.1.0/24","mode":"fast"}'
+
+

Internet Info:

+
curl "http://<server_ip>:<GRAPHQL_PORT>/nettools/internetinfo" \
+  -H "Authorization: Bearer <API_TOKEN>"
+
[Deprecated] API endpoints

+
+

Warning

+

Some of these endpoints will be deprecated soon. Please refer to the new API endpoints docs for details on the new API layer.

+
+

NetAlertX comes with a couple of API endpoints. All requests need to be authorized: either execute them in a logged-in browser session, or pass the value of the API_TOKEN setting as an authorization bearer token, for example:

+
curl 'http://host:GRAPHQL_PORT/graphql' \
+  -X POST \
+  -H 'Authorization: Bearer API_TOKEN' \
+  -H 'Content-Type: application/json' \
+  --data '{
+    "query": "query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }",
+    "variables": {
+      "options": {
+        "page": 1,
+        "limit": 10,
+        "sort": [{ "field": "devName", "order": "asc" }],
+        "search": "",
+        "status": "connected"
+      }
+    }
+  }'
+
+

API Endpoint: GraphQL

+
    +
  • Endpoint URL: php/server/query_graphql.php
  • +
  • Host: same as front end (web ui)
  • +
  • Port: 20212 or as defined by the GRAPHQL_PORT setting
  • +
+

Example Query to Fetch Devices

+

First, let's define the GraphQL query to fetch devices with pagination and sorting options.

+
query GetDevices($options: PageQueryOptionsInput) {
+  devices(options: $options) {
+    devices {
+      rowid
+      devMac
+      devName
+      devOwner
+      devType
+      devVendor
+      devLastConnection
+      devStatus
+    }
+    count
+  }
+}
+
+

See also: Debugging GraphQL issues

+

curl Command

+

You can use the following curl command to execute the query.

+
curl 'http://host:GRAPHQL_PORT/graphql'   -X POST   -H 'Authorization: Bearer API_TOKEN'  -H 'Content-Type: application/json'   --data '{
+    "query": "query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }",
+    "variables": {
+      "options": {
+        "page": 1,
+        "limit": 10,
+        "sort": [{ "field": "devName", "order": "asc" }],
+        "search": "",
+        "status": "connected"
+      }
+    }
+  }'
+
+

Explanation:

+
  1. GraphQL Query:
     • The query parameter contains the GraphQL query as a string.
     • The variables parameter contains the input variables for the query.
  2. Query Variables:
     • page: Specifies the page number of results to fetch.
     • limit: Specifies the number of results per page.
     • sort: Specifies the sorting options, with field being the field to sort by and order being the sort order (asc for ascending or desc for descending).
     • search: A search term to filter the devices.
     • status: The status filter to apply (valid values are my_devices (determined by the UI_MY_DEVICES setting), connected, favorites, new, down, archived, offline).
  3. curl Command:
     • The -X POST option specifies that we are making a POST request.
     • The -H "Content-Type: application/json" option sets the content type of the request to JSON.
     • The -d option provides the request payload, which includes the GraphQL query and variables.
+

Sample Response

+

The response will be in JSON format, similar to the following:

+
{
+  "data": {
+    "devices": {
+      "devices": [
+        {
+          "rowid": 1,
+          "devMac": "00:11:22:33:44:55",
+          "devName": "Device 1",
+          "devOwner": "Owner 1",
+          "devType": "Type 1",
+          "devVendor": "Vendor 1",
+          "devLastConnection": "2025-01-01T00:00:00Z",
+          "devStatus": "connected"
+        },
+        {
+          "rowid": 2,
+          "devMac": "66:77:88:99:AA:BB",
+          "devName": "Device 2",
+          "devOwner": "Owner 2",
+          "devType": "Type 2",
+          "devVendor": "Vendor 2",
+          "devLastConnection": "2025-01-02T00:00:00Z",
+          "devStatus": "connected"
+        }
+      ],
+      "count": 2
+    }
+  }
+}
+
+

API Endpoint: JSON files

+

This API endpoint retrieves static files that are periodically updated.

+
    +
  • Endpoint URL: php/server/query_json.php?file=<file name>
  • +
  • Host: same as front end (web ui)
  • +
  • Port: 20211 or as defined by the $PORT docker environment variable (same as the port for the web ui)
  • +
+

When are the endpoints updated

+

The endpoint files are updated whenever the objects they contain change.

+

Location of the endpoints

+

In the container, these files are located under the API directory (default: /tmp/api/, configurable via NETALERTX_API environment variable). You can access them via the /php/server/query_json.php?file=user_notifications.json endpoint.

+

Available endpoints

+

You can access the following files:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| File name | Description |
|-----------|-------------|
| notification_json_final.json | The JSON version of the last notification (e.g. used for webhooks - sample JSON). |
| table_devices.json | All of the available Devices detected by the app. |
| table_plugins_events.json | The list of the unprocessed (pending) notification events (plugins_events DB table). |
| table_plugins_history.json | The list of notification events history. |
| table_plugins_objects.json | The content of the plugins_objects table. Find more info on the Plugin system here. |
| language_strings.json | The content of the language_strings table, which in turn is loaded from the plugins config.json definitions. |
| table_custom_endpoint.json | A custom endpoint generated by the SQL query specified by the API_CUSTOM_SQL setting. |
| table_settings.json | The content of the settings table. |
| app_state.json | Contains the current application state. |
+
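A request for one of these files looks like the sketch below; note that this endpoint is served on the web UI port (20211 by default), not the GraphQL port:

curl 'http://host:20211/php/server/query_json.php?file=table_devices.json' \
  -H 'Authorization: Bearer API_TOKEN'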

JSON Data format

+

The endpoints starting with the table_ prefix contain most, if not all, data contained in the corresponding database table. The common format for those is:

+
{
+  "data": [
+        {
+          "db_column_name": "data",
+          "db_column_name2": "data2"      
+        }, 
+        {
+          "db_column_name": "data3",
+          "db_column_name2": "data4" 
+        }
+    ]
+}
+
+
+

Example JSON of the table_devices.json endpoint with two Devices (database rows):

+
{
+  "data": [
+        {
+          "devMac": "Internet",
+          "devName": "Net - Huawei",
+          "devType": "Router",
+          "devVendor": null,
+          "devGroup": "Always on",
+          "devFirstConnection": "2021-01-01 00:00:00",
+          "devLastConnection": "2021-01-28 22:22:11",
+          "devLastIP": "192.168.1.24",
+          "devStaticIP": 0,
+          "devPresentLastScan": 1,
+          "devLastNotification": "2023-01-28 22:22:28.998715",
+          "devIsNew": 0,
+          "devParentMAC": "",
+          "devParentPort": "",
+          "devIcon": "globe"
+        }, 
+        {
+          "devMac": "a4:8f:ff:aa:ba:1f",
+          "devName": "Net - USG",
+          "devType": "Firewall",
+          "devVendor": "Ubiquiti Inc",
+          "devGroup": "",
+          "devFirstConnection": "2021-02-12 22:05:00",
+          "devLastConnection": "2021-07-17 15:40:00",
+          "devLastIP": "192.168.1.1",
+          "devStaticIP": 1,
+          "devPresentLastScan": 1,
+          "devLastNotification": "2021-07-17 15:40:10.667717",
+          "devIsNew": 0,
+          "devParentMAC": "Internet",
+          "devParentPort": 1,
+          "devIcon": "shield-halved"
+      }
+    ]
+}
+
+
+

API Endpoint: Prometheus Exporter

+
    +
  • Endpoint URL: /metrics
  • +
  • Host: (where NetAlertX exporter is running)
  • +
  • Port: as configured in the GRAPHQL_PORT setting (20212 by default)
  • +
+
+

Example Output of the /metrics Endpoint

+

Below is a representative snippet of the metrics you may find when querying the /metrics endpoint for netalertx. It includes both aggregate counters and device_status labels per device.

+
netalertx_connected_devices 31
+netalertx_offline_devices 54
+netalertx_down_devices 0
+netalertx_new_devices 0
+netalertx_archived_devices 31
+netalertx_favorite_devices 2
+netalertx_my_devices 54
+
+netalertx_device_status{device="Net - Huawei", mac="Internet", ip="1111.111.111.111", vendor="None", first_connection="2021-01-01 00:00:00", last_connection="2025-08-04 17:57:00", dev_type="Router", device_status="Online"} 1
+netalertx_device_status{device="Net - USG", mac="74:ac:74:ac:74:ac", ip="192.168.1.1", vendor="Ubiquiti Networks Inc.", first_connection="2022-02-12 22:05:00", last_connection="2025-06-07 08:16:49", dev_type="Firewall", device_status="Archived"} 1
+netalertx_device_status{device="Raspberry Pi 4 LAN", mac="74:ac:74:ac:74:74", ip="192.168.1.9", vendor="Raspberry Pi Trading Ltd", first_connection="2022-02-12 22:05:00", last_connection="2025-08-04 17:57:00", dev_type="Singleboard Computer (SBC)", device_status="Online"} 1
+...
+
+
+

Metrics Explanation

+

1. Aggregate Device Counts

+

Metric names prefixed with netalertx_ provide aggregated counts by device status:

+
    +
  • netalertx_connected_devices: number of devices currently connected
  • +
  • netalertx_offline_devices: devices currently offline
  • +
  • netalertx_down_devices: down/unreachable devices
  • +
  • netalertx_new_devices: devices recently detected
  • +
  • netalertx_archived_devices: archived devices
  • +
  • netalertx_favorite_devices: user-marked favorite devices
  • +
  • netalertx_my_devices: devices associated with the current user context
  • +
+

These numeric values give a high-level overview of device distribution.

+

2. Per‑Device Status with Labels

+

Each individual device is represented by a netalertx_device_status metric, with descriptive labels:

+
    +
  • device: friendly name of the device
  • +
  • mac: MAC address (or placeholder)
  • +
  • ip: last recorded IP address
  • +
  • vendor: manufacturer or "None" if unknown
  • +
  • first_connection: timestamp when the device was first observed
  • +
  • last_connection: most recent contact timestamp
  • +
  • dev_type: device category or type
  • +
  • device_status: current status (Online / Offline / Archived / Down / ...)
  • +
+

The metric value is always 1 (indicating presence or active state) and the combination of labels identifies the device.

+
+

How to Query with curl

+

To fetch the metrics from the NetAlertX exporter:

+
curl 'http://<server_ip>:<GRAPHQL_PORT>/metrics' \
+  -H 'Authorization: Bearer <API_TOKEN>' \
+  -H 'Accept: text/plain'
+
+

Replace:

+
    +
  • <server_ip>: IP or hostname of the NetAlertX server
  • +
  • <GRAPHQL_PORT>: port specified in your GRAPHQL_PORT setting (default: 20212)
  • +
  • <API_TOKEN>: your Bearer token from the API_TOKEN setting
  • +
+
+
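If you only need the per-device series, the plain-text response can be filtered locally; a minimal sketch using grep:

curl -s 'http://<server_ip>:<GRAPHQL_PORT>/metrics' \
  -H 'Authorization: Bearer <API_TOKEN>' | grep '^netalertx_device_status'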

Summary

+
    +
  • Endpoint: /metrics provides both summary counters and per-device status entries.
  • +
  • Aggregate metrics help monitor overall device states.
  • +
  • Detailed metrics expose each device’s metadata via labels.
  • +
  • Use case: feed into Prometheus for scraping, monitoring, alerting, or charting dashboard views.
  • +
+

Prometheus Scraping Configuration

+
scrape_configs:
+  - job_name: 'netalertx'
+    metrics_path: /metrics
+    scheme: http
+    scrape_interval: 60s
+    static_configs:
+      - targets: ['<server_ip>:<GRAPHQL_PORT>']
+    authorization:
+      type: Bearer
+      credentials: <API_TOKEN>
+
+
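Before reloading Prometheus, you can sanity-check the amended configuration; a sketch assuming promtool is available alongside your Prometheus installation and the file is named prometheus.yml:

promtool check config prometheus.yml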

Grafana template

+

Grafana template sample: Download json

+

API Endpoint: /log files

+

This API endpoint retrieves files from the /tmp/log folder.

+
    +
  • Endpoint URL: php/server/query_logs.php?file=<file name>
  • +
  • Host: same as front end (web ui)
  • +
  • Port: 20211 or as defined by the $PORT docker environment variable (same as the port for the web ui)
  • +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
File | Description
IP_changes.log | Logs of IP address changes
app.log | Main application log
app.php_errors.log | PHP error log
app_front.log | Frontend application log
app_nmap.log | Logs of Nmap scan results
db_is_locked.log | Logs when the database is locked
execution_queue.log | Logs of execution queue activities
plugins/ | Directory for temporary plugin-related files (not accessible)
report_output.html | HTML report output
report_output.json | JSON format report output
report_output.txt | Text format report output
stderr.log | Logs of standard error output
stdout.log | Logs of standard output
+
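For example, to fetch the main application log (a sketch; the endpoint is part of the web UI, so the same web UI authentication rules apply):

curl 'http://<server_ip>:20211/php/server/query_logs.php?file=app.log'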

API Endpoint: /config files

+

To retrieve files from the /data/config folder.

+
    +
  • Endpoint URL: php/server/query_config.php?file=<file name>
  • +
  • Host: same as front end (web ui)
  • +
  • Port: 20211 or as defined by the $PORT docker environment variable (same as the port for the web ui)
  • +
+ + + + + + + + + + + + + + + + + +
File | Description
devices.csv | Devices csv file
app.conf | Application config file
+ + + + + + + + + + + + + +
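For example, to fetch the application config file (a sketch, with the same assumptions as for the log endpoint above):

curl 'http://<server_ip>:20211/php/server/query_config.php?file=app.conf'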
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/API_ONLINEHISTORY/index.html b/API_ONLINEHISTORY/index.html new file mode 100644 index 00000000..19003ef0 --- /dev/null +++ b/API_ONLINEHISTORY/index.html @@ -0,0 +1,4135 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Online History - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Online History API Endpoints

+

Manage the online history records of devices. Currently, the API supports deletion of all history entries. All endpoints require authorization.

+
+

1. Delete Online History

+
    +
  • DELETE /history + Remove all records from the online history table (Online_History). This operation cannot be undone.
  • +
+

Response (success):

+
{
+  "success": true,
+  "message": "Deleted online history"
+}
+
+

Error Responses:

+
    +
  • Unauthorized → HTTP 403
  • +
+
+

Example curl Request

+
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/history" \
+  -H "Authorization: Bearer <API_TOKEN>"
+
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/API_SESSIONS/index.html b/API_SESSIONS/index.html new file mode 100644 index 00000000..aa74e64d --- /dev/null +++ b/API_SESSIONS/index.html @@ -0,0 +1,4572 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Sessions - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Sessions API Endpoints

+

Track and manage device connection sessions. Sessions record when a device connects or disconnects on the network.

+

Create a Session

+
    +
  • POST /sessions/create → Create a new session for a device
  • +
+

Request Body:

+

{
  "mac": "AA:BB:CC:DD:EE:FF",
  "ip": "192.168.1.10",
  "start_time": "2025-08-01T10:00:00",
  "end_time": "2025-08-01T12:00:00",        // optional
  "event_type_conn": "Connected",           // optional, default "Connected"
  "event_type_disc": "Disconnected"         // optional, default "Disconnected"
}

+

Response:

+

{
  "success": true,
  "message": "Session created for MAC AA:BB:CC:DD:EE:FF"
}

+

curl Example

+
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/sessions/create" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "mac": "AA:BB:CC:DD:EE:FF",
+    "ip": "192.168.1.10",
+    "start_time": "2025-08-01T10:00:00",
+    "end_time": "2025-08-01T12:00:00",
+    "event_type_conn": "Connected",
+    "event_type_disc": "Disconnected"
+  }'
+
+
+
+

Delete Sessions

+
    +
  • DELETE /sessions/delete → Delete all sessions for a given MAC
  • +
+

Request Body:

+

{
  "mac": "AA:BB:CC:DD:EE:FF"
}

+

Response:

+

{
  "success": true,
  "message": "Deleted sessions for MAC AA:BB:CC:DD:EE:FF"
}

+

curl Example

+
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/sessions/delete" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "mac": "AA:BB:CC:DD:EE:FF"
+  }'
+
+
+

List Sessions

+
    +
  • GET /sessions/list → Retrieve sessions optionally filtered by device and date range
  • +
+

Query Parameters:

+
    +
  • mac (optional) → Filter by device MAC address
  • +
  • start_date (optional) → Filter sessions starting from this date (YYYY-MM-DD)
  • +
  • end_date (optional) → Filter sessions ending by this date (YYYY-MM-DD)
  • +
+

Example:

+

/sessions/list?mac=AA:BB:CC:DD:EE:FF&start_date=2025-08-01&end_date=2025-08-21

+

Response:

+

{
  "success": true,
  "sessions": [
    {
      "ses_MAC": "AA:BB:CC:DD:EE:FF",
      "ses_Connection": "2025-08-01 10:00",
      "ses_Disconnection": "2025-08-01 12:00",
      "ses_Duration": "2h 0m",
      "ses_IP": "192.168.1.10",
      "ses_Info": ""
    }
  ]
}

+

curl Example

+
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/sessions/list?mac=AA:BB:CC:DD:EE:FF&start_date=2025-08-01&end_date=2025-08-21" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json"
+
+
+

Calendar View of Sessions

+
    +
  • GET /sessions/calendar → View sessions in calendar format
  • +
+

Query Parameters:

+
    +
  • start → Start date (YYYY-MM-DD)
  • +
  • end → End date (YYYY-MM-DD)
  • +
+

Example:

+

/sessions/calendar?start=2025-08-01&end=2025-08-21

+

Response:

+

{
  "success": true,
  "sessions": [
    {
      "resourceId": "AA:BB:CC:DD:EE:FF",
      "title": "",
      "start": "2025-08-01T10:00:00",
      "end": "2025-08-01T12:00:00",
      "color": "#00a659",
      "tooltip": "Connection: 2025-08-01 10:00\nDisconnection: 2025-08-01 12:00\nIP: 192.168.1.10",
      "className": "no-border"
    }
  ]
}

+

curl Example

+
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/sessions/calendar?start=2025-08-01&end=2025-08-21" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json"
+
+
+

Device Sessions

+
    +
  • GET /sessions/<mac> → Retrieve sessions for a specific device
  • +
+

Query Parameters:

+
    +
  • period → Period to retrieve sessions (1 day, 7 days, 1 month, etc.) + Default: 1 day
  • +
+

Example:

+

/sessions/AA:BB:CC:DD:EE:FF?period=7 days

+

Response:

+

{
  "success": true,
  "sessions": [
    {
      "ses_MAC": "AA:BB:CC:DD:EE:FF",
      "ses_Connection": "2025-08-01 10:00",
      "ses_Disconnection": "2025-08-01 12:00",
      "ses_Duration": "2h 0m",
      "ses_IP": "192.168.1.10",
      "ses_Info": ""
    }
  ]
}

+

curl Example

+
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/sessions/AA:BB:CC:DD:EE:FF?period=7%20days" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json"
+
+
+

Session Events Summary

+
    +
  • GET /sessions/session-events → Retrieve a summary of session events
  • +
+

Query Parameters:

+
    +
  • type → Event type (all, sessions, missing, voided, new, down) + Default: all
  • +
  • period → Period to retrieve events (7 days, 1 month, etc.)
  • +
+

Example:

+

/sessions/session-events?type=all&period=7 days

+

Response: + Returns a list of events or sessions with formatted connection, disconnection, duration, and IP information.

+

curl Example

+
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/sessions/session-events?type=all&period=7%20days" \
+  -H "Authorization: Bearer <API_TOKEN>" \
+  -H "Accept: application/json"
+
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/API_SETTINGS/index.html b/API_SETTINGS/index.html new file mode 100644 index 00000000..740a495f --- /dev/null +++ b/API_SETTINGS/index.html @@ -0,0 +1,4241 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Settings - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Settings API Endpoints

+

Retrieve application settings stored in the configuration system. This endpoint is useful for quickly fetching individual settings such as API_TOKEN or TIMEZONE.

+

For bulk or structured access (all settings, schema details, or filtering), use the GraphQL API Endpoint.

+
+

Get a Setting

+
    +
  • GET /settings/<key> → Retrieve the value of a specific setting
  • +
+

Path Parameter:

+
    +
  • key → The setting key to retrieve (e.g., API_TOKEN, TIMEZONE)
  • +
+

Authorization: Requires a valid API token in the Authorization header.

+
+

curl Example (Success)

+
curl 'http://<server_ip>:<GRAPHQL_PORT>/settings/API_TOKEN' \
+  -H 'Authorization: Bearer <API_TOKEN>' \
+  -H 'Accept: application/json'
+
+

Response:

+
{
+  "success": true,
+  "value": "my-secret-token"
+}
+
+
+
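As a small usage sketch, the returned value can be extracted in a shell script with jq (assuming jq is installed on the calling machine):

TIMEZONE=$(curl -s 'http://<server_ip>:<GRAPHQL_PORT>/settings/TIMEZONE' \
  -H 'Authorization: Bearer <API_TOKEN>' | jq -r '.value')
echo "$TIMEZONE"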

curl Example (Invalid Key)

+
curl 'http://<server_ip>:<GRAPHQL_PORT>/settings/DOES_NOT_EXIST' \
+  -H 'Authorization: Bearer <API_TOKEN>' \
+  -H 'Accept: application/json'
+
+

Response:

+
{
+  "success": true,
+  "value": null
+}
+
+
+

curl Example (Unauthorized)

+
curl 'http://<server_ip>:<GRAPHQL_PORT>/settings/API_TOKEN' \
+  -H 'Accept: application/json'
+
+

Response:

+
{
+  "error": "Forbidden"
+}
+
+
+

Notes

+
    +
  • This endpoint is optimized for direct retrieval of a single setting.
  • +
  • For complex retrieval scenarios (listing all settings, retrieving schema metadata like setName, setDescription, setType, or checking if a setting is overridden by environment variables), use the GraphQL Settings Query:
  • +
+
curl 'http://<server_ip>:<GRAPHQL_PORT>/graphql' \
+  -X POST \
+  -H 'Authorization: Bearer <API_TOKEN>' \
+  -H 'Content-Type: application/json' \
+  --data '{
+    "query": "query GetSettings { settings { settings { setKey setName setDescription setType setOptions setGroup setValue setEvents setOverriddenByEnv } count } }"
+  }'
+
+

See the GraphQL API Endpoint for more details.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/API_SYNC/index.html b/API_SYNC/index.html new file mode 100644 index 00000000..a50b7082 --- /dev/null +++ b/API_SYNC/index.html @@ -0,0 +1,4351 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Sync - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Sync API Endpoint

+
+

The /sync endpoint is used by the SYNC plugin to synchronize data between multiple NetAlertX instances (e.g., from a node to a hub). It supports both GET and POST requests.

+

9.1 GET /sync

+

Fetches data from a node to the hub. The data is returned as a base64-encoded JSON file.

+

Example Request:

+
curl 'http://<server>:<GRAPHQL_PORT>/sync' \
+  -H 'Authorization: Bearer <API_TOKEN>'
+
+

Response Example:

+
{
+  "node_name": "NODE-01",
+  "status": 200,
+  "message": "OK",
+  "data_base64": "eyJkZXZpY2VzIjogW3siZGV2TWFjIjogIjAwOjExOjIyOjMzOjQ0OjU1IiwiZGV2TmFtZSI6ICJEZXZpY2UgMSJ9XSwgImNvdW50Ijog1fQ==",
+  "timestamp": "2025-08-24T10:15:00+10:00"
+}
+
+

Notes:

+
    +
  • data_base64 contains the full JSON data encoded in Base64.
  • +
  • node_name corresponds to the SYNC_node_name setting on the node.
  • +
  • Errors (e.g., missing file) return HTTP 500 with an error message.
  • +
+
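To inspect the payload locally, the data_base64 field can be decoded on the command line; a sketch assuming jq and GNU base64 are available:

curl -s 'http://<server>:<GRAPHQL_PORT>/sync' \
  -H 'Authorization: Bearer <API_TOKEN>' | jq -r '.data_base64' | base64 -d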
+

9.2 POST /sync

+

The POST endpoint is used by nodes to send data to the hub. The hub expects the data as form-encoded fields (application/x-www-form-urlencoded or multipart/form-data). The hub then stores the data in the plugin log folder for processing.

+

Required Fields

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field | Type | Description
data | string | The payload from the plugin or devices. Typically plain text, JSON, or encrypted Base64 data. In your Python script, encrypt_data() is applied before sending.
node_name | string | The name of the node sending the data. Matches the node’s SYNC_node_name setting. Used to generate the filename on the hub.
plugin | string | The name of the plugin sending the data. Determines the filename prefix (last_result.<plugin>...).
file_path | string (optional) | Path of the local file being sent. Used only for logging/debugging purposes on the hub; not required for processing.
+
+

How the Hub Processes the POST Data

+
    +
  1. Receives the data and validates the API token.
  2. Stores the raw payload in:
+
INSTALL_PATH/log/plugins/last_result.<plugin>.encoded.<node_name>.<sequence>.log
+
  • <plugin> → plugin name from the POST request.
  • <node_name> → node name from the POST request.
  • <sequence> → incremented number for each submission.
  3. Decodes / decrypts the data if necessary (Base64 or encrypted) before processing.
  4. Processes JSON payloads (e.g., device info) to:
  • Avoid duplicates by tracking devMac.
  • Add metadata like devSyncHubNode.
  • Insert new devices into the database.
  5. Renames files to indicate they have been processed:
+
processed_last_result.<plugin>.<node_name>.<sequence>.log
+
+
+

Example POST Payload

+

If a node is sending device data:

+
curl -X POST 'http://<hub>:<PORT>/sync' \
+  -H 'Authorization: Bearer <API_TOKEN>' \
+  -F 'data={"data":[{"devMac":"00:11:22:33:44:55","devName":"Device 1","devVendor":"Vendor A","devLastIP":"192.168.1.10"}]}' \
+  -F 'node_name=NODE-01' \
+  -F 'plugin=SYNC'
+
+
    +
  • The data field contains JSON with a data array, where each element is a device object or plugin data object.
  • +
  • The plugin and node_name fields allow the hub to organize and store the file correctly.
  • +
  • The data is only processed if the relevant plugins are enabled and run on the target server.
  • +
+
+

Key Notes

+
    +
  • Always use the same plugin and node_name values for consistent storage.
  • +
  • Encrypted data: The Python script uses encrypt_data() before sending, and the hub decodes it before processing.
  • +
  • Sequence numbers: Every submission generates a new sequence, preventing overwriting previous data.
  • +
  • Form-encoded: The hub expects multipart/form-data (cURL -F) or application/x-www-form-urlencoded.
  • +
+

Storage Details:

+
    +
  • Data is stored under INSTALL_PATH/log/plugins with filenames following the pattern:
  • +
+
last_result.<plugin>.encoded.<node_name>.<sequence>.log
+
+
    +
  • Both encoded and decoded files are tracked, and new submissions increment the sequence number.
  • +
  • If storing fails, the API returns HTTP 500 with an error message.
  • +
  • The data is only processed if the relevant plugins are enabled and run on the target server.
  • +
+
+

9.3 Notes and Best Practices

+
    +
  • Authorization Required – Both GET and POST require a valid API token.
  • +
  • Data Integrity – Ensure that node_name and plugin are consistent to avoid overwriting files.
  • +
  • Monitoring – Notifications are generated whenever data is sent or received (write_notification), which can be used for alerting or auditing.
  • +
  • Use Case – Typically used in multi-node deployments to consolidate device and event data on a central hub.
  • +
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/API_TESTS/index.html b/API_TESTS/index.html new file mode 100644 index 00000000..2d77b954 --- /dev/null +++ b/API_TESTS/index.html @@ -0,0 +1,4088 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Tests - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Tests

+ +

Unit Tests

+
+

Warning

+

Please note these tests modify data in the database.

+
+
    +
  1. See the /test directory for available test cases. These are not exhaustive but cover the main API endpoints.
  2. To run a test case, SSH into the container:
sudo docker exec -it netalertx /bin/bash
  3. Inside the container, install pytest (if not already installed):
pip install pytest
  4. Run a specific test case:
pytest /app/test/TESTFILE.py
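To run everything under /app/test in one go rather than a single file, pytest can also be pointed at the directory (a sketch; assumes pytest was installed as in step 3, and remember these tests modify database data):

sudo docker exec -it netalertx pytest /app/test/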
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/AUTHELIA/index.html b/AUTHELIA/index.html new file mode 100644 index 00000000..ec9bbb6d --- /dev/null +++ b/AUTHELIA/index.html @@ -0,0 +1,4349 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Authelia - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Authelia

+ +

Authelia support

+
+

Warning

+

This is community contributed content and work in progress. Contributions are welcome.

+
+
theme: dark
+
+default_2fa_method: "totp"
+
+server:
+  address: 0.0.0.0:9091
+  endpoints:
+    enable_expvars: false
+    enable_pprof: false
+    authz:
+      forward-auth:
+        implementation: 'ForwardAuth'
+        authn_strategies:
+          - name: 'HeaderAuthorization'
+            schemes:
+              - 'Basic'
+          - name: 'CookieSession'
+      ext-authz:
+        implementation: 'ExtAuthz'
+        authn_strategies:
+          - name: 'HeaderAuthorization'
+            schemes:
+              - 'Basic'
+          - name: 'CookieSession'
+      auth-request:
+        implementation: 'AuthRequest'
+        authn_strategies:
+          - name: 'HeaderAuthRequestProxyAuthorization'
+            schemes:
+              - 'Basic'
+          - name: 'CookieSession'
+      legacy:
+        implementation: 'Legacy'
+        authn_strategies:
+          - name: 'HeaderLegacy'
+          - name: 'CookieSession'
+  disable_healthcheck: false
+  tls:
+    key: ""
+    certificate: ""
+    client_certificates: []
+  headers:
+    csp_template: ""
+
+log:
+  ## Level of verbosity for logs: info, debug, trace.
+  level: info
+
+###############################################################
+# The most important section
+###############################################################
+access_control:
+  ## Default policy can either be 'bypass', 'one_factor', 'two_factor' or 'deny'.
+  default_policy: deny
+  networks:
+    - name: internal
+      networks:
+        - '192.168.0.0/18'
+        - '10.10.10.0/8' # Zerotier
+    - name: private
+      networks:
+        - '172.16.0.0/12'
+  rules:
+    - networks:
+        - private
+      domain:
+        - '*'
+      policy: bypass
+    - networks:
+        - internal
+      domain:
+        - '*'
+      policy: bypass
+    - domain:
+        # exclude itself from auth, should not happen as we use Traefik middleware on a case-by-case scenario
+        - 'auth.MYDOMAIN1.TLD'
+        - 'authelia.MYDOMAIN1.TLD'
+        - 'auth.MYDOMAIN2.TLD'
+        - 'authelia.MYDOMAIN2.TLD'
+      policy: bypass
+    - domain:
+        #All subdomains match
+        - 'MYDOMAIN1.TLD'
+        - '*.MYDOMAIN1.TLD'
+      policy: two_factor
+    - domain:
+        # This will not work yet as Authelia does not support multi-domain authentication
+        - 'MYDOMAIN2.TLD'
+        - '*.MYDOMAIN2.TLD'
+      policy: two_factor
+
+
+############################################################
+identity_validation:
+  reset_password:
+    jwt_secret: "[REDACTED]"
+
+identity_providers:
+  oidc:
+    enable_client_debug_messages: true
+    enforce_pkce: public_clients_only
+    hmac_secret: [REDACTED]
+    lifespans:
+      authorize_code: 1m
+      id_token: 1h
+      refresh_token: 90m
+      access_token: 1h
+    cors:
+      endpoints:
+        - authorization
+        - token
+        - revocation
+        - introspection
+        - userinfo
+      allowed_origins:
+        - "*"
+      allowed_origins_from_client_redirect_uris: false
+    jwks:
+      - key: [REDACTED]
+        certificate_chain:
+    clients:
+      - client_id: portainer
+        client_name: Portainer
+        # generate secret with "authelia crypto hash generate pbkdf2 --random --random.length 32 --random.charset alphanumeric"
+        # Random Password: [REDACTED]
+        # Digest: [REDACTED]
+        client_secret: [REDACTED]
+        token_endpoint_auth_method: 'client_secret_post'
+        public: false
+        authorization_policy: two_factor
+        consent_mode: pre-configured #explicit
+        pre_configured_consent_duration: '6M' #Must be re-authorised every 6 Months
+        scopes:
+          - openid
+          #- groups #Currently not supported in Authelia V
+          - email
+          - profile
+        redirect_uris:
+          - https://portainer.MYDOMAIN1.LTD
+        userinfo_signed_response_alg: none
+
+      - client_id: openproject
+        client_name: OpenProject
+        # generate secret with "authelia crypto hash generate pbkdf2 --random --random.length 32 --random.charset alphanumeric"
+        # Random Password: [REDACTED]
+        # Digest: [REDACTED]
+        client_secret: [REDACTED]
+        token_endpoint_auth_method: 'client_secret_basic'
+        public: false
+        authorization_policy: two_factor
+        consent_mode: pre-configured #explicit
+        pre_configured_consent_duration: '6M' #Must be re-authorised every 6 Months
+        scopes:
+          - openid
+          #- groups #Currently not supported in Authelia V
+          - email
+          - profile
+        redirect_uris:
+          - https://op.MYDOMAIN.TLD
+        #grant_types:
+        #  - refresh_token
+        #  - authorization_code
+        #response_types:
+        #  - code
+        #response_modes:
+        #  - form_post
+        #  - query
+        #  - fragment
+        userinfo_signed_response_alg: none
+##################################################################
+
+
+telemetry:
+  metrics:
+    enabled: false
+    address: tcp://0.0.0.0:9959
+
+totp:
+  disable: false
+  issuer: authelia.com
+  algorithm: sha1
+  digits: 6
+  period: 30 ## The period in seconds a one-time password is valid for.
+  skew: 1
+  secret_size: 32
+
+webauthn:
+  disable: false
+  timeout: 60s ## Adjust the interaction timeout for Webauthn dialogues.
+  display_name: Authelia
+  attestation_conveyance_preference: indirect
+  user_verification: preferred
+
+ntp:
+  address: "pool.ntp.org"
+  version: 4
+  max_desync: 5s
+  disable_startup_check: false
+  disable_failure: false
+
+authentication_backend:
+  password_reset:
+    disable: false
+    custom_url: ""
+  refresh_interval: 5m
+  file:
+    path: /config/users_database.yml
+    watch: true
+    password:
+      algorithm: argon2
+      argon2:
+        variant: argon2id
+        iterations: 3
+        memory: 65536
+        parallelism: 4
+        key_length: 32
+        salt_length: 16
+
+password_policy:
+  standard:
+    enabled: false
+    min_length: 8
+    max_length: 0
+    require_uppercase: true
+    require_lowercase: true
+    require_number: true
+    require_special: true
+  ## zxcvbn is a well known and used password strength algorithm. It does not have tunable settings.
+  zxcvbn:
+    enabled: false
+    min_score: 3
+
+regulation:
+  max_retries: 3
+  find_time: 2m
+  ban_time: 5m
+
+session:
+  name: authelia_session
+  secret: [REDACTED]
+  expiration: 60m
+  inactivity: 15m
+  cookies:
+    - domain: 'MYDOMAIN1.LTD'
+      authelia_url: 'https://auth.MYDOMAIN1.LTD'
+      name: 'authelia_session'
+      default_redirection_url: 'https://MYDOMAIN1.LTD'
+    - domain: 'MYDOMAIN2.LTD'
+      authelia_url: 'https://auth.MYDOMAIN2.LTD'
+      name: 'authelia_session_other'
+      default_redirection_url: 'https://MYDOMAIN2.LTD'
+
+storage:
+  encryption_key: [REDACTED]
+  local:
+    path: /config/db.sqlite3
+
+notifier:
+  disable_startup_check: true
+  smtp:
+    address: MYOTHERDOMAIN.LTD:465
+    timeout: 5s
+    username: "USER@DOMAIN"
+    password: "[REDACTED]"
+    sender: "Authelia <postmaster@MYOTHERDOMAIN.LTD>"
+    identifier: NAME@MYOTHERDOMAIN.LTD
+    subject: "[Authelia] {title}"
+    startup_check_address: postmaster@MYOTHERDOMAIN.LTD
+
+
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/BACKUPS/index.html b/BACKUPS/index.html new file mode 100644 index 00000000..080b65e8 --- /dev/null +++ b/BACKUPS/index.html @@ -0,0 +1,4608 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Backups - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Backing Things Up

+
+

Note

+

To back up 99% of your configuration, back up at least the /data/config folder. Database definitions can change between releases, so the safest method is to restore backups using the same app version they were taken from, then upgrade incrementally.

+
+
+

What to Back Up

+

There are four key artifacts you can use to back up your NetAlertX configuration:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
File | Description | Limitations
/db/app.db | The application database | Might be in an uncommitted state or corrupted
/config/app.conf | Configuration file | Can be overridden using the APP_CONF_OVERRIDE variable
/config/devices.csv | CSV file containing device data | Does not include historical data
/config/workflows.json | JSON file containing your workflows | N/A
+
+

Where the Data Lives

+

Understanding where your data is stored helps you plan your backup strategy.

+

Core Configuration

+

Stored in /data/config/app.conf. This includes settings for:

+
    +
  • Notifications
  • +
  • Scanning
  • +
  • Scheduled maintenance
  • +
  • UI preferences
  • +
+

(See Settings System for details.)

+

Device Data

+

Stored in /data/config/devices_<timestamp>.csv or /data/config/devices.csv, created by the CSV Backup CSVBCKP Plugin. Contains:

+
    +
  • Device names, icons, and categories
  • +
  • Network configuration
  • +
  • Custom properties
  • +
+

Historical Data

+

Stored in /data/db/app.db (see Database Overview). Contains:

+
    +
  • Plugin data and historical entries
  • +
  • Event and notification history
  • +
  • Device presence history
  • +
+
+

Backup Strategies

+

The safest approach is to back up both the /db and /config folders regularly. Tools like Kopia make this simple and efficient.

+

If you can only keep a few files, prioritize:

+
    +
  1. The latest devices_<timestamp>.csv or devices.csv
  2. +
  3. app.conf
  4. +
  5. workflows.json
  6. +
+

You can also download the app.conf and devices.csv files from the Maintenance section:

+

Backup and Restore Section in Maintenance

+
+

Scenario 1: Full Backup and Restore

+

Goal: Full recovery of your configuration and data.

+

💾 What to Back Up

+
    +
  • /data/db/app.db (uncorrupted)
  • +
  • /data/config/app.conf
  • +
  • /data/config/workflows.json
  • +
+

📥 How to Restore

+

Map these files into your container as described in the Setup documentation.

+
+

Scenario 2: Corrupted Database

+

Goal: Recover configuration and device data when the database is lost or corrupted.

+

💾 What to Back Up

+
    +
  • /data/config/app.conf
  • +
  • /data/config/workflows.json
  • +
  • /data/config/devices_<timestamp>.csv (rename to devices.csv during restore)
  • +
+

📥 How to Restore

+
    +
  1. Copy app.conf and workflows.json into /data/config/
  2. Rename devices_<timestamp>.csv to devices.csv and place it in /data/config/
  3. Restore via the Maintenance section under Devices → Bulk Editing
+

This recovers nearly all configuration, workflows, and device metadata.

+
+

Docker-Based Backup and Restore

+

For users running NetAlertX via Docker, you can back up or restore directly from your host system — a convenient and scriptable option.

+

Full Backup (File-Level)

+
    +
  1. Stop the container:
docker stop netalertx
  2. Create a compressed archive of your configuration and database volumes:
docker run --rm -v local_path/config:/config -v local_path/db:/db alpine tar -cz /config /db > netalertx-backup.tar.gz
  3. Restart the container:
docker start netalertx

Restore from Backup

  1. Stop the container:
docker stop netalertx
  2. Restore from your backup file:
docker run --rm -i -v local_path/config:/config -v local_path/db:/db alpine tar -C / -xz < netalertx-backup.tar.gz
  3. Restart the container:
docker start netalertx

+
+

This approach uses a temporary, minimal alpine container to access Docker-managed volumes. The tar command creates or extracts an archive directly from your host’s filesystem, making it fast, clean, and reliable for both automation and manual recovery.

+
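You can also verify the archive contents without extracting it (a quick sketch):

tar -tzf netalertx-backup.tar.gz | head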
+
+

Summary

+
    +
  • Back up /data/config for configuration and devices; /data/db for history
  • +
  • Keep regular backups, especially before upgrades
  • +
  • For Docker setups, use the lightweight alpine-based backup method for consistency and portability
  • +
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/BUILDS/index.html b/BUILDS/index.html new file mode 100644 index 00000000..e25a6e68 --- /dev/null +++ b/BUILDS/index.html @@ -0,0 +1,4412 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Builds - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

NetAlertX Builds: Choose Your Path

+

NetAlertX provides different installation methods for different needs. This guide helps you choose the right path for security, experimentation, or development.

+

1. Hardened Appliance (Default Production)

+
+

Note

+

Use this image if: You want to use NetAlertX securely.

+
+

Who is this for?

+

All users who want a stable, secure, "set-it-and-forget-it" appliance.

+

Methodology

+
    +
  • Multi-stage Alpine build
  • +
  • Aggressively "amputated"
  • +
  • Locked down for max security
  • +
+

Source

+

Dockerfile (hardened target)

+

2. "Tinkerer's" Image (Insecure VM-Style)

+
+

Note

+

Use this image if: You want to experiment with NetAlertX.

+
+

Who is this for?

+

Power users, developers, and "tinkerers" wanting a familiar "VM-like" experience.

+

Methodology

+
    +
  • Traditional Debian build
  • +
  • Includes full un-hardened OS
  • +
  • Contains apt, sudo, git
  • +
+

Source

+

Dockerfile.debian

+

3. Contributor's Devcontainer (Project Developers)

+
+

Note

+

Use this image if: You want to develop NetAlertX itself.

+
+

Who is this for?

+

Project contributors who are actively writing and debugging code for NetAlertX.

+

Methodology

+
    +
  • Builds FROM runner stage
  • +
  • Loaded by VS Code
  • +
  • Full debug tools: xdebug, pytest
  • +
+

Source

+

Dockerfile (devcontainer target)

+

Visualizing the Trade-Offs

+

This chart compares the three builds across key attributes. A higher score means "more of" that attribute. Notice the clear trade-offs between security and development features.

+

tradeoffs

+

Build Process & Origins

+

The final images originate from two different files and build paths. The main Dockerfile uses stages to create both the hardened and development container images.

+

Official Build Path

+

Dockerfile -> builder (Stage 1) -> runner (Stage 2) -> hardened (Final Stage) (Production Image)
                                                    -> devcontainer (Final Stage) (Developer Image)

+

Legacy Build Path

+

Dockerfile.debian -> "Tinkerer's" Image (Insecure VM-Style Image)

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/COMMON_ISSUES/index.html b/COMMON_ISSUES/index.html new file mode 100644 index 00000000..1b875ade --- /dev/null +++ b/COMMON_ISSUES/index.html @@ -0,0 +1,4968 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Common issues - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Troubleshooting Common Issues

+
+

Tip

+

Before troubleshooting, ensure you have set the correct Debugging and LOG_LEVEL.

+
+
+

Docker Container Doesn't Start

+

Initial setup issues are often caused by missing permissions or incorrectly mapped volumes. Always double-check your docker run or docker-compose.yml against the official setup guide before proceeding.

+

Permissions

+

Make sure your file permissions are correctly set:

+
    +
  • If you encounter AJAX errors, cannot write to the database, or see an empty screen, check that permissions are correct and review the logs under /tmp/log.
  • +
  • To fix permission issues with the database, update the owner and group of app.db as described in the File Permissions guide.
  • +
+

Container Restarts / Crashes

+
    +
  • Check the logs for details. Often, required settings are missing.
  • +
  • For more detailed troubleshooting, see Debug and Troubleshooting Tips.
  • +
  • To observe errors directly, run the container in the foreground instead of -d:
  • +
+
docker run --rm -it <your_image>
+
+
+

Docker Container Starts, But the Application Misbehaves

+

If the container starts but the app shows unexpected behavior, the cause is often data corruption, incorrect configuration, or unexpected input data.

+

Continuous "Loading..." Screen

+

A misconfigured application may display a persistent Loading... dialog. This is usually caused by the backend failing to start.

+

Steps to troubleshoot:

+
    +
  1. Check Maintenance → Logs for exceptions.
  2. If no exception is visible, check the Portainer logs.
  3. Start the container in the foreground to observe exceptions.
  4. Enable trace or debug logging for detailed output (see Debug Tips).
  5. Verify that GRAPHQL_PORT is correctly configured.
  6. Check browser logs (press F12): Console tab → refresh the page; Network tab → refresh the page.
+

If you are unsure how to resolve errors, provide screenshots or log excerpts in your issue report or Discord discussion.

+
+

Common Configuration Issues

+

Incorrect SCAN_SUBNETS

+

If SCAN_SUBNETS is misconfigured, you may see only a few devices in your device list after a scan. See the Subnets Documentation for proper configuration.

+

Duplicate Devices and Notifications

+
    +
  • Devices are identified by their MAC address.
  • +
  • If a device's MAC changes, it will be treated as a new device, triggering notifications.
  • +
  • Prevent this by adjusting your device configuration for Android, iOS, or Windows. See the Random MACs Guide.
  • +
+

Unable to Resolve Host

+
    +
  • Ensure SCAN_SUBNETS uses the correct mask and --interface.
  • +
  • Refer to the Subnets Documentation for detailed guidance.
  • +
+

Invalid JSON Errors

+ +

Sudo Execution Fails (e.g., on arpscan on Raspberry Pi 4)

+

Error:

+
sudo: unexpected child termination condition: 0
+
+

Resolution:

+
wget ftp.us.debian.org/debian/pool/main/libs/libseccomp/libseccomp2_2.5.3-2_armhf.deb
+sudo dpkg -i libseccomp2_2.5.3-2_armhf.deb
+
+
+

⚠️ The link may break over time. Check Debian Packages for the latest version.

+
+

Only Router and Own Device Show Up

+
    +
  • Verify the subnet and interface in SCAN_SUBNETS.
  • +
  • On devices with multiple Ethernet ports, you may need to change eth0 to the correct interface.
  • +
+

Losing Settings or Devices After Update

+
    +
  • Ensure /data/db and /data/config are mapped to persistent storage.
  • +
  • Without persistent volumes, these folders are recreated on every update.
  • +
  • See Docker Volumes Setup for proper configuration.
  • +
+

Application Performance Issues

+

Slowness can be caused by:

+
    +
  • Incorrect settings (causing app restarts) → check app.log.
  • +
  • Too many background processes → disable unnecessary scanners.
  • +
  • Long scans → limit the number of scanned devices.
  • +
  • Excessive disk operations or failing maintenance plugins.
  • +
+
+

See Performance Tips for detailed optimization steps.

+
+

IP flipping

+

With ARPSCAN scans, some devices might flip IP addresses after each scan, triggering false notifications. This happens because some devices respond to broadcast calls, so different IPs are logged after each scan.

+

See how to prevent IP flipping in the ARPSCAN plugin guide.

+

Alternatively adjust your notification settings to prevent false positives by filtering out events or devices.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/COMMUNITY_GUIDES/index.html b/COMMUNITY_GUIDES/index.html new file mode 100644 index 00000000..dc740162 --- /dev/null +++ b/COMMUNITY_GUIDES/index.html @@ -0,0 +1,4024 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Community Guides - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + + + + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/CUSTOM_PROPERTIES/index.html b/CUSTOM_PROPERTIES/index.html new file mode 100644 index 00000000..ecfafef1 --- /dev/null +++ b/CUSTOM_PROPERTIES/index.html @@ -0,0 +1,4332 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Custom Properties - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Custom Properties for Devices

+

Custom Properties

+

Overview

+

This functionality allows you to define custom properties for devices, which can store and display additional information on the device listing page. By marking properties as "Show", you can enhance the user interface with quick actions, notes, or external links.

+

Key Features:

+
    +
  • Customizable Properties: Define specific properties for each device.
  • +
  • Visibility Control: Choose which properties are displayed on the device listing page.
  • +
  • Interactive Elements: Include actions like links, modals, and device management directly in the interface.
  • +
+
+

Defining Custom Properties

+

Custom properties are structured as a list of objects, where each property includes the following fields:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field | Description
CUSTPROP_icon | The icon (Base64-encoded HTML) displayed for the property.
CUSTPROP_type | The action type (e.g., show_notes, link, delete_dev).
CUSTPROP_name | A short name or title for the property.
CUSTPROP_args | Arguments for the action (e.g., URL or modal text).
CUSTPROP_notes | Additional notes or details displayed when applicable.
CUSTPROP_show | A boolean to control visibility (true to show on the listing page).
+
+

Available Action Types

+
    +
  • Show Notes: Displays a modal with a title and additional notes.
  • +
  • Example: Show firmware details or custom messages.
  • +
  • Link: Redirects to a specified URL in the current browser tab. (Arguments need to contain the full URL.)
  • +
  • Link (New Tab): Opens a specified URL in a new browser tab. (Arguments need to contain the full URL.)
  • +
  • Delete Device: Deletes the device using its MAC address.
  • +
  • Run Plugin: Placeholder for executing custom plugins (not implemented yet).
  • +
+
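As an illustrative sketch (not taken from the app itself), a single custom property entry could look like this before it is Base64-encoded and stored in devCustomProps; the field names follow the table above, while the icon and URL values are placeholders:

[
  {
    "CUSTPROP_icon": "<Base64-encoded icon HTML>",
    "CUSTPROP_type": "link_new_tab",
    "CUSTPROP_name": "Device docs",
    "CUSTPROP_args": "https://docs.example.com/my-device",
    "CUSTPROP_notes": "",
    "CUSTPROP_show": true
  }
]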
+

Usage on the Device Listing Page

+

Custom Properties

+

Visible properties (CUSTPROP_show: true) are displayed as interactive icons in the device listing. Each icon can perform one of the following actions based on the CUSTPROP_type:

+
    +
  1. Modals (e.g., Show Notes): Displays detailed information in a popup modal. Example: Firmware version details.
  2. Links: Redirect to an external or internal URL. Example: Open a device's documentation or external site.
  3. Device Actions: Manage devices with actions like delete. Example: Quickly remove a device from the network.
  4. Plugins: Future placeholder for running custom plugin scripts. Note: Not implemented yet.
+
+

Example Use Cases

+
    +
  1. Device Documentation Link: Add a custom property with CUSTPROP_type set to link or link_new_tab to allow quick navigation to the external documentation of the device.
  2. Firmware Details: Use CUSTPROP_type: show_notes to display firmware versions or upgrade instructions in a modal.
  3. Device Removal: Enable device removal functionality using CUSTPROP_type: delete_dev.
+
+

Notes

+
    +
  • Plugin Functionality: The run_plugin action type is currently not implemented and will show an alert if used.
  • +
  • Custom Icons (Experimental 🧪): Use Base64-encoded HTML to provide custom icons for each property. You can add your icons in Settings via the CUSTPROP_icon setting
  • +
  • Visibility Control: Only properties with CUSTPROP_show: true will appear on the listing page.
  • +
+

This feature provides a flexible way to enhance device management and display with interactive elements tailored to your needs.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/DATABASE/index.html b/DATABASE/index.html new file mode 100644 index 00000000..2de12cd1 --- /dev/null +++ b/DATABASE/index.html @@ -0,0 +1,4357 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Database - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

A high-level description of the database structure

+

An overview of the most important database tables as well as a detailed overview of the Devices table. The MAC address is used as a foreign key in most cases.

+

Devices database table

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field Name | Description | Sample Value
devMac | MAC address of the device. | 00:1A:2B:3C:4D:5E
devName | Name of the device. | iPhone 12
devOwner | Owner of the device. | John Doe
devType | Type of the device (e.g., phone, laptop, etc.). If set to a network type (e.g., switch), it will become selectable as a Network Parent Node. | Laptop
devVendor | Vendor/manufacturer of the device. | Apple
devFavorite | Whether the device is marked as a favorite. | 1
devGroup | Group the device belongs to. | Home Devices
devComments | User comments or notes about the device. | Used for work purposes
devFirstConnection | Timestamp of the device's first connection. | 2025-03-22 12:07:26+11:00
devLastConnection | Timestamp of the device's last connection. | 2025-03-22 12:07:26+11:00
devLastIP | Last known IP address of the device. | 192.168.1.5
devStaticIP | Whether the device has a static IP address. | 0
devScan | Whether the device should be scanned. | 1
devLogEvents | Whether events related to the device should be logged. | 0
devAlertEvents | Whether alerts should be generated for events. | 1
devAlertDown | Whether an alert should be sent when the device goes down. | 0
devSkipRepeated | Whether to skip repeated alerts for this device. | 1
devLastNotification | Timestamp of the last notification sent for this device. | 2025-03-22 12:07:26+11:00
devPresentLastScan | Whether the device was present during the last scan. | 1
devIsNew | Whether the device is marked as new. | 0
devLocation | Physical or logical location of the device. | Living Room
devIsArchived | Whether the device is archived. | 0
devParentMAC | MAC address of the parent device (if applicable) to build the Network Tree. | 00:1A:2B:3C:4D:5F
devParentPort | Port of the parent device to which this device is connected. | Port 3
devIcon | Icon representing the device. The value is a base64-encoded SVG or Font Awesome HTML tag. | PHN2ZyB...
devGUID | Unique identifier for the device. | a2f4b5d6-7a8c-9d10-11e1-f12345678901
devSite | Site or location where the device is registered. | Office
devSSID | SSID of the Wi-Fi network the device is connected to. | HomeNetwork
devSyncHubNode | The NetAlertX node ID used for synchronization between NetAlertX instances. | node_1
devSourcePlugin | Source plugin that discovered the device. | ARPSCAN
devCustomProps | Custom properties related to the device. The value is a base64-encoded JSON object. | PHN2ZyB...
devFQDN | Fully qualified domain name. | raspberrypi.local
devParentRelType | The type of relationship between the current device and its parent node. By default, selecting nic will hide it from lists. | nic
devReqNicsOnline | Whether all NICs are required to be online to mark the current device online. | 0
+

To understand how values of these fields influence application behavior, such as Notifications or Network topology, see also:

+ +

Other Tables overview

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table name | Description | Sample data
CurrentScan | Result of the current scan | Screen1
Devices | The main devices database that also contains the Network tree mappings. If ScanCycle is set to 0 device is not scanned. | Screen2
Events | Used to collect connection/disconnection events. | Screen4
Online_History | Used to display the Device presence chart | Screen6
Parameters | Used to pass values between the frontend and backend. | Screen7
Plugins_Events | For capturing events exposed by a plugin via the last_result.log file. If unique then saved into the Plugins_Objects table. Entries are deleted once processed and stored in the Plugins_History and/or Plugins_Objects tables. | Screen10
Plugins_History | History of all entries from the Plugins_Events table | Screen11
Plugins_Language_Strings | Language strings collected from the plugin config.json files used for string resolution in the frontend. | Screen12
Plugins_Objects | Unique objects detected by individual plugins. | Screen13
Sessions | Used to display sessions in the charts | Screen15
Settings | Database representation of the sum of all settings from app.conf and plugins coming from config.json files. | Screen16
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/DEBUG_API_SERVER/index.html b/DEBUG_API_SERVER/index.html new file mode 100644 index 00000000..4ba4547d --- /dev/null +++ b/DEBUG_API_SERVER/index.html @@ -0,0 +1,4276 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + API Server Issues - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Debugging GraphQL server issues

+

The GraphQL server is an API middle layer, running on its own port specified by GRAPHQL_PORT, used to retrieve and show the data in the UI. It can also be used to retrieve data for custom third-party integrations. Check the API documentation for details.

+

The most common issue is that the GraphQL server doesn't start properly, usually due to a port conflict. If you are running multiple NetAlertX instances, make sure to use unique ports by changing the GRAPHQL_PORT setting. The default is 20212.

+

How to update the GRAPHQL_PORT in case of issues

+

As a first troubleshooting step, try changing the default GRAPHQL_PORT setting. Please remember NetAlertX is running on the host network, so any application using the same port will cause conflicts.

+

Updating the setting via the Settings UI

+

Ideally use the Settings UI to update the setting under General -> Core -> GraphQL port:

+

GrapQL settings

+

You might need to temporarily stop other applications or NetAlertX instances causing conflicts to update the setting. The API_TOKEN is used to authenticate any API calls, including GraphQL requests.

+

Updating the app.conf file

+

If the UI is not accessible, you can directly edit the app.conf file in your /config folder:

+

Editing app.conf

+

Using a docker variable

+

All application settings can also be initialized via the APP_CONF_OVERRIDE docker env variable.

+
...
+ environment:
+      - PORT=20213
+      - APP_CONF_OVERRIDE={"GRAPHQL_PORT":"20214"}
+...
+
+

How to check the GraphQL server is running?

+

There are several ways to check if the GraphQL server is running.

+

Init Check

+

You can navigate to Maintenance -> Init Check to see if isGraphQLServerRunning is ticked:

+

Init Check

+

Checking the Logs

+

You can navigate to Maintenance -> Logs and search for graphql to see if it started correctly and is serving requests:

+

GraphQL Logs

+

Inspecting the Browser console

+

In your browser open the dev console (usually F12) and navigate to the Network tab where you can filter GraphQL requests (e.g., reload the Devices page).

+

Browser Network Tab

+

You can then inspect any of the POST requests by opening them in a new tab.

+

Browser GraphQL Json
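You can also probe the endpoint directly from a shell; a minimal sketch using the documented settings query (any valid query works). A JSON response containing a settings count indicates the server is up and the token is accepted:

curl -s 'http://<server_ip>:<GRAPHQL_PORT>/graphql' \
  -X POST \
  -H 'Authorization: Bearer <API_TOKEN>' \
  -H 'Content-Type: application/json' \
  --data '{"query": "query { settings { count } }"}'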

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/DEBUG_INVALID_JSON/index.html b/DEBUG_INVALID_JSON/index.html new file mode 100644 index 00000000..ebc7b132 --- /dev/null +++ b/DEBUG_INVALID_JSON/index.html @@ -0,0 +1,4136 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Invalid JSON Issues - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

How to debug the Invalid JSON response error

+

Check the the HTTP response of the failing backend call by following these steps:

+
    +
  • Open the developer console in your browser (usually the F12 key, e.g. in Chrome).
  • +
  • Follow the steps in this screenshot:
  • +
+

F12DeveloperConsole

+
    +
  • Copy the URL causing the error and enter it in the address bar of your browser directly and hit enter. The copied URLs could look something like this (notice the query strings at the end):
  • +
  • http://<server>:20211/api/table_devices.json?nocache=1704141103121
  • +
  • +

    http://<server>:20211/php/server/devices.php?action=getDevicesTotals

    +
  • +
  • +

    Post the error response in the existing issue thread on GitHub or create a new issue and include the redacted response of the failing query.

    +
  • +
+

For reference, the above queries should return results in the following format:

+

First URL:

+
    +
  • Should yield a valid JSON file
  • +
+

Second URL:

+

array

+

Third URL:

+

json

+

You can copy and paste any JSON result (result of the First and Third query) into an online JSON checker, such as this one to check if it's valid.

diff --git a/DEBUG_PHP/index.html b/DEBUG_PHP/index.html (new file: PHP Issues - NetAlertX Docs)

Debugging backend PHP issues

+

Logs in UI

+

Logs UI

+

You can view recent backend PHP errors directly in the Maintenance > Logs section of the UI. This provides quick access to logs without needing terminal access.

+

Accessing logs directly

+

Sometimes, the UI might not be accessible. In that case, you can access the logs directly inside the container.

+

Step-by-step:

+
    +
  1. Open a shell into the container:
  2. +
+

docker exec -it netalertx /bin/sh

+
    +
  1. Check the NGINX error log:
  2. +
+

cat /var/log/nginx/error.log

+
    +
  1. Check the PHP application error log:
  2. +
+

cat /tmp/log/app.php_errors.log

+

These logs will help identify syntax issues, fatal errors, or startup problems when the UI fails to load properly.
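If you want to watch both logs while reproducing the problem, you can also follow them from the host in a single command (paths as above):

docker exec -it netalertx tail -f /var/log/nginx/error.log /tmp/log/app.php_errors.log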

diff --git a/DEBUG_PLUGINS/index.html b/DEBUG_PLUGINS/index.html (new file: Plugin Issues - NetAlertX Docs)

Troubleshooting plugins

+
+

Tip

+

Before troubleshooting, please ensure you have the right Debugging and LOG_LEVEL set.

+
+

High-level overview

+

If a plugin supplies data to the main app, it does so either via a SQL query or via a script that updates the last_result.log file in the plugin log folder (app/log/plugins/).

+
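To quickly confirm that a script-based plugin has produced output, you can list that plugin log folder inside the container (the /app prefix is assumed to be the default install location; adjust if yours differs):

docker exec -it netalertx ls -l /app/log/plugins/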

For a more in-depth overview on how plugins work check the Plugins development docs.

+

Prerequisites

+ +

Potential issues

+
    +
  • Bugs
  • +
  • Unexpected input (e.g. special characters in names)
  • +
  • Dependencies changed how data is output
  • +
+

Incorrect input data

+

Input data from the plugin might cause mapping issues in specific edge cases. Look for a corresponding section in the app.log file, for example notice the first line of the execution run of the PIHOLE plugin below:

+
17:31:05 [Scheduler] - Scheduler run for PIHOLE: YES
+17:31:05 [Plugin utils] ---------------------------------------------
+17:31:05 [Plugin utils] display_name: PiHole (Device sync)
+17:31:05 [Plugins] CMD: SELECT n.hwaddr AS Object_PrimaryID, {s-quote}null{s-quote} AS Object_SecondaryID, datetime() AS DateTime, na.ip  AS Watched_Value1, n.lastQuery AS Watched_Value2, na.name AS Watched_Value3, n.macVendor AS Watched_Value4, {s-quote}null{s-quote} AS Extra, n.hwaddr AS ForeignKey FROM EXTERNAL_PIHOLE.Network AS n LEFT JOIN EXTERNAL_PIHOLE.Network_Addresses AS na ON na.network_id = n.id WHERE n.hwaddr NOT LIKE {s-quote}ip-%{s-quote} AND n.hwaddr is not {s-quote}00:00:00:00:00:00{s-quote}  AND na.ip is not null
+17:31:05 [Plugins] setTyp: subnets
+17:31:05 [Plugin utils] Flattening the below array
+17:31:05 ['192.168.1.0/24 --interface=eth1']
+17:31:05 [Plugin utils] isinstance(arr, list) : False | isinstance(arr, str) : True
+17:31:05 [Plugins] Resolved value: 192.168.1.0/24 --interface=eth1
+17:31:05 [Plugins] Convert to Base64: True
+17:31:05 [Plugins] base64 value: b'MTkyLjE2OC4xLjAvMjQgLS1pbnRlcmZhY2U9ZXRoMQ=='
+17:31:05 [Plugins] Timeout: 10
+17:31:05 [Plugins] Executing: SELECT n.hwaddr AS Object_PrimaryID, 'null' AS Object_SecondaryID, datetime() AS DateTime, na.ip  AS Watched_Value1, n.lastQuery AS Watched_Value2, na.name AS Watched_Value3, n.macVendor AS Watched_Value4, 'null' AS Extra, n.hwaddr AS ForeignKey FROM EXTERNAL_PIHOLE.Network AS n LEFT JOIN EXTERNAL_PIHOLE.Network_Addresses AS na ON na.network_id = n.id WHERE n.hwaddr NOT LIKE 'ip-%' AND n.hwaddr is not '00:00:00:00:00:00'  AND na.ip is not null
+🔻
+17:31:05 [Plugins] SUCCESS, received 2 entries
+17:31:05 [Plugins] sqlParam entries: [(0, 'PIHOLE', '01:01:01:01:01:01', 'null', 'null', '2023-12-25 06:31:05', '172.30.0.1', 0, 'aaaa', 'vvvvvvvvv', 'not-processed', 'null', 'null', '01:01:01:01:01:01'), (0, 'PIHOLE', '02:42:ac:1e:00:02', 'null', 'null', '2023-12-25 06:31:05', '172.30.0.2', 0, 'dddd', 'vvvvv2222', 'not-processed', 'null', 'null', '02:42:ac:1e:00:02')]
+17:31:05 [Plugins] Processing        : PIHOLE
+17:31:05 [Plugins] Existing objects from Plugins_Objects: 4
+17:31:05 [Plugins] Logged events from the plugin run    : 2
+17:31:05 [Plugins] pluginEvents      count: 2
+17:31:05 [Plugins] pluginObjects     count: 4
+17:31:05 [Plugins] events_to_insert  count: 0
+17:31:05 [Plugins] history_to_insert count: 4
+17:31:05 [Plugins] objects_to_insert count: 0
+17:31:05 [Plugins] objects_to_update count: 4
+17:31:05 [Plugin utils] In pluginEvents there are 2 events with the status "watched-not-changed"
+17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status "missing-in-last-scan"
+17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status "watched-not-changed"
+17:31:05 [Plugins] Mapping objects to database table: CurrentScan
+17:31:05 [Plugins] SQL query for mapping: INSERT into CurrentScan ( "cur_MAC", "cur_IP", "cur_LastQuery", "cur_Name", "cur_Vendor", "cur_ScanMethod") VALUES ( ?, ?, ?, ?, ?, ?)
+17:31:05 [Plugins] SQL sqlParams for mapping: [('01:01:01:01:01:01', '172.30.0.1', 0, 'aaaa', 'vvvvvvvvv', 'PIHOLE'), ('02:42:ac:1e:00:02', '172.30.0.2', 0, 'dddd', 'vvvvv2222', 'PIHOLE')]
+🔺
+17:31:05 [API] Update API starting
+17:31:06 [API] Updating table_plugins_history.json file in /api
+
+
+

The debug output between the 🔻red arrows🔺 is important for debugging (arrows added only to highlight the section on this page, they are not available in the actual debug log)

+
+

In the above output notice the section logging how many events are produced by the plugin:

+
17:31:05 [Plugins] Existing objects from Plugins_Objects: 4
+17:31:05 [Plugins] Logged events from the plugin run    : 2
+17:31:05 [Plugins] pluginEvents      count: 2
+17:31:05 [Plugins] pluginObjects     count: 4
+17:31:05 [Plugins] events_to_insert  count: 0
+17:31:05 [Plugins] history_to_insert count: 4
+17:31:05 [Plugins] objects_to_insert count: 0
+17:31:05 [Plugins] objects_to_update count: 4
+
+

These values, if formatted correctly, will also show up in the UI:

+

Plugins table

+

Sharing application state

+

Sometimes specific log sections are needed to debug issues. The Devices and CurrentScan table data is sometimes needed to figure out what's wrong.

+
    +
  1. Please set LOG_LEVEL to trace (Disable it once you have the info as this produces big log files).
  2. +
  3. Wait for the issue to occur.
  4. +
  5. Search for ================ DEVICES table content ================ in your logs.
  6. +
  7. Search for ================ CurrentScan table content ================ in your logs (see the example command after this list).
  8. +
  9. Open a new issue and post (redacted) output into the issue description (or send to the netalertx@gmail.com email if sensitive data present).
  10. +
  11. Please set LOG_LEVEL to debug or lower.
  12. +
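To locate these sections from a shell once trace logging has produced them, you can grep the application log. The path below assumes the default in-container log location (/tmp/log/app.log); adjust it if your logs live elsewhere:

docker exec -it netalertx grep -n -A 20 "DEVICES table content" /tmp/log/app.log
docker exec -it netalertx grep -n -A 20 "CurrentScan table content" /tmp/log/app.log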
diff --git a/DEBUG_TIPS/index.html b/DEBUG_TIPS/index.html (new file: General Tips - NetAlertX Docs)

Debugging and troubleshooting

+

Please follow tips 1 - 4 to get a more detailed error.

+

1. More Logging

+

When debugging an issue always set the highest log level:

+

LOG_LEVEL='trace'

+

2. Surfacing errors when container restarts

+

Start the container via the terminal with a command similar to this one:

+
docker run \
+  --network=host \
+  --restart unless-stopped \
+  -v /local_data_dir:/data \
+  -v /etc/localtime:/etc/localtime:ro \
+  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
+  -e PORT=20211 \
+  -e APP_CONF_OVERRIDE='{"GRAPHQL_PORT":"20214"}' \
+  ghcr.io/jokob-sk/netalertx:latest
+
+
+

Note: Your /local_data_dir should contain a config and db folder.

+
+

Note

+

⚠ The most important part is NOT to use the -d parameter so you see the error when the container crashes. Use this error in your issue description.

+
+

3. Check the _dev image and open issues

+

If possible, check if your issue got fixed in the _dev image before opening a new issue. The container is:

+

ghcr.io/jokob-sk/netalertx-dev:latest

+
+

⚠ Please backup your DB and config beforehand!

+
+

Please also search open issues.

+

4. Disable restart behavior

+

To prevent a Docker container from automatically restarting in a Docker Compose file, specify the restart policy as "no":

+
version: '3'
+
+services:
+  your-service:
+    image: your-image:tag
+    restart: "no"
+    # Other service configurations...
+
+

5. TMP mount directories to rule out host permission issues

+

Try starting the container with all of its data in non-persistent volumes. If this works, the issue might be related to the permissions of your persistent data mount locations on your server. See the Permissions guide for details.

+
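One way to do this, based on the run command above, is simply to omit the /data bind mount so that all application data stays in the container's non-persistent writable layer (everything is discarded when the container is removed):

docker run --rm --network=host \
  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
  -e PORT=20211 \
  ghcr.io/jokob-sk/netalertx:latest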

6. Sharing application state

+

Sometimes specific log sections are needed to debug issues. The Devices and CurrentScan table data is sometimes needed to figure out what's wrong.

+
    +
  1. Please set LOG_LEVEL to trace (Disable it once you have the info as this produces big log files).
  2. +
  3. Wait for the issue to occur.
  4. +
  5. Search for ================ DEVICES table content ================ in your logs.
  6. +
  7. Search for ================ CurrentScan table content ================ in your logs.
  8. +
  9. Open a new issue and post (redacted) output into the issue description (or send to the netalertx@gmail.com email if sensitive data present).
  10. +
  11. Please set LOG_LEVEL to debug or lower.
  12. +
+

Common issues

+

See Common issues for additional troubleshooting tips.

diff --git a/DEVICES_BULK_EDITING/index.html b/DEVICES_BULK_EDITING/index.html (new file: Bulk Editing - NetAlertX Docs)

Editing multiple devices at once

+

NetAlertX allows you to mass-edit devices via a CSV export and import feature, or directly in the UI.

+

UI multi edit

+
+

Note

+

Make sure you have your backups saved and restorable before doing any mass edits. Check Backup strategies.

+
+

You can select the devices to edit in the Devices view and then click the Multi-edit button, or use the Maintenance > Multi-Edit section.

+

Maintenance > Multi-edit

+

CSV bulk edit

+

The database and device structure may change with new releases. When using the CSV import functionality, ensure the format matches what the application expects. To avoid issues, you can first export the devices and review the column formats before importing any custom data.

+
+

Note

+

As always, backup everything, just in case.

+
+
    +
  1. In Maintenance > Backup / Restore click the CSV Export button.
  2. +
  3. A devices.csv is generated in the /config folder
  4. +
  5. Edit the devices.csv file however you like.
  6. +
+

Maintenance > CSV Export

+
+

Note

+

The file contains a list of devices, including the network relationships between network nodes and connected devices. You can also trigger this export by accessing this URL: <server>:20211/php/server/devices.php?action=ExportCSV or via the CSV Backup plugin. (💡 You can schedule this)

+
+

Settings > CSV Backup

+

File encoding format

+
+

Note

+

Keep Linux line endings (suggested editors: Nano, Notepad++)

+
+

Notepad++ line endings
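If the file was edited on Windows and picked up CRLF line endings, you can convert it back from a Linux shell before importing, for example:

sed -i 's/\r$//' /path/to/config/devices.csv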

diff --git a/DEVICE_DISPLAY_SETTINGS/index.html b/DEVICE_DISPLAY_SETTINGS/index.html (new file: Device Display Settings - NetAlertX Docs)

Device Display Settings

+

This set of settings allows you to group Devices under different views. The Archived toggle allows you to exclude a Device from most listings and notifications.

+

Display settings

+

Status Colors

+

Status colors

+
    +
  1. 🔌 Online (Green) = A device that was detected online in the last scan and is no longer marked as a "New Device".
  2. +
  3. 🔌 New (Green) = A newly discovered device that is online and is still marked as a "New Device".
  4. +
  5. ✖ New (Grey) = Same as No.2 but device is now offline.
  6. +
  7. ✖ Offline (Grey) = A device that was not detected online in the last scan.
  8. +
  9. ⚠ Down (Red) = A device that has "Alert Down" marked and has been offline for the time set in the Setting NTFPRCS_alert_down_time.
  10. +
+

See also Notification guide.

diff --git a/DEVICE_HEURISTICS/index.html b/DEVICE_HEURISTICS/index.html (new file: Icon and Type guessing - NetAlertX Docs)

Device Heuristics: Icon and Type Guessing

+

This module is responsible for inferring the most likely device type and icon based on minimal identifying data like MAC address, vendor, IP, or device name.

+

It does this using a set of heuristics defined in an external JSON rules file, which it evaluates in priority order.

+
+

Note

+

You can find the full source code of the heuristics module in the device_heuristics.py file.

+
+
+

JSON Rule Format

+

Rules are defined in a file called device_heuristics_rules.json (located under /back), structured like:

+
[
+  {
+    "dev_type": "Phone",
+    "icon_html": "<i class=\"fa-brands fa-apple\"></i>",
+    "matching_pattern": [
+      { "mac_prefix": "001A79", "vendor": "Apple" }
+    ],
+    "name_pattern": ["iphone", "pixel"]
+  }
+]
+
+
+

Note

+

Feel free to raise a PR in case you'd like to add any rules into the device_heuristics_rules.json file. Please place new rules into the correct position and consider the priority of already available rules.

+
+

Supported fields:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldTypeDescription
dev_typestringType to assign if rule matches (e.g. "Gateway", "Phone")
icon_htmlstringIcon (HTML string) to assign if rule matches. Encoded to base64 at load time.
matching_patternarrayList of { mac_prefix, vendor } objects for first strict and then loose matching
name_patternarray (optional)List of lowercase substrings (used with regex)
ip_patternarray (optional)Regex patterns to match IPs
+

Order in this array defines priority — rules are checked top-down and short-circuit on first match.

+
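For illustration only, here is a hypothetical rule (not shipped with the app) that uses the optional name_pattern and ip_pattern fields to classify a likely gateway. Whether an empty matching_pattern list is accepted depends on the loader, so mirror the structure of the existing rules if in doubt:

{
  "dev_type": "Gateway",
  "icon_html": "<i class=\"fa fa-network-wired\"></i>",
  "matching_pattern": [],
  "name_pattern": ["router", "gateway"],
  "ip_pattern": ["^192\\.168\\.[0-9]+\\.1$"]
}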
+

Matching Flow (in Priority Order)

+

The function guess_device_attributes(...) runs a series of matching functions in strict order:

+
    +
  1. MAC + Vendor → match_mac_and_vendor()
  2. +
  3. Vendor only → match_vendor()
  4. +
  5. Name pattern → match_name()
  6. +
  7. IP pattern → match_ip()
  8. +
  9. Final fallback → defaults defined in the NEWDEV_devIcon and NEWDEV_devType settings.
  10. +
+
+

Note

+

The app will try guessing the device type or icon if devType or devIcon are "" or "null".

+
+

Use of default values

+

The guessing process runs for every device as long as the current type or icon still matches the default values. Even if earlier heuristics return a match, the system continues evaluating additional clues — like name or IP — to try and replace placeholders.

+
# Still considered a match attempt if current values are defaults
+if (not type_ or type_ == default_type) or (not icon or icon == default_icon):
+    type_, icon = match_ip(ip, default_type, default_icon)
+
+

In other words: if the type or icon is still "unknown" (or matches the default), the system assumes the match isn’t final — and keeps looking. It stops only when both values are non-default (defaults are defined in the NEWDEV_devIcon and NEWDEV_devType settings).

+
+

Match Behavior (per function)

+

These functions are executed in the following order:

+

match_mac_and_vendor(mac_clean, vendor, ...)

+
    +
  • Looks for MAC prefix and vendor substring match
  • +
  • Most precise
  • +
  • Stops as soon as a match is found
  • +
+

match_vendor(vendor, ...)

+
    +
  • Falls back to substring match on vendor only
  • +
  • Ignores rules where mac_prefix is present (ensures this is really a fallback)
  • +
+

match_name(name, ...)

+
    +
  • Lowercase name is compared against all name_pattern values using regex
  • +
  • Good for user-assigned labels (e.g. "AP Office", "iPhone")
  • +
+

match_ip(ip, ...)

+
    +
  • If IP is present and matches regex patterns under any rule, it returns that type/icon
  • +
  • Usually used for gateways or local IP ranges
  • +
+
+

Icons

+
    +
  • Each rule can define an icon_html, which is converted to an icon_base64 on load
  • +
  • If missing, it falls back to the passed-in default_icon (NEWDEV_devIcon setting)
  • +
  • If a match is found but icon is still blank, default is used
  • +
+

TL;DR: Type and icon must both be matched. If only one is matched, the other falls back to the default.

+
+

Priority Mechanics

+
    +
  • JSON rules are evaluated top-to-bottom
  • +
  • Matching is first-hit wins — no scoring, no weights
  • +
  • Rules that are more specific (e.g. exact MAC prefixes) should be listed earlier
  • +
diff --git a/DEVICE_MANAGEMENT/index.html b/DEVICE_MANAGEMENT/index.html (new file: Management - NetAlertX Docs)

Device Management

+

The Main Info section is where most of the device identifiable information is stored and edited. Some of the information is autodetected via various plugins. Initial values for most of the fields can be specified in the NEWDEV plugin.

+
+

Note

+

You can multi-edit devices by selecting them in the main Devices view, from the Maintenance section, or via the CSV Export functionality under Maintenance. More info can be found in the Devices Bulk-editing docs.

+
+

Main Info

+

Main Info

+
    +
  • MAC: MAC address of the device. Not editable, unless creating a new dummy device.
  • +
  • Last IP: IP address of the device. Not editable, unless creating a new dummy device.
  • +
  • Name: Friendly device name. Autodetected via various 🆎 Name discovery plugins. The app attaches (IP match) if the name is discovered via an IP match and not MAC match which could mean the name could be incorrect as IPs might change.
  • +
  • Icon: Partially autodetected. Select an existing or add a custom icon. You can also auto-apply the same icon on all devices of the same type.
  • +
  • Owner: Device owner (The list is self-populated with existing owners and you can add custom values).
  • +
  • Type: Select a device type from the dropdown list (Smartphone, Tablet, Laptop, TV, router, etc.) or add a new device type. If you want the device to act as a Network device (and be able to be a network node in the Network view), select a type under Network Devices or add a new Network Device type in Settings. More information can be found in the Network Setup docs.
  • +
  • Vendor: The manufacturing vendor. Automatically updated by NetAlertX when empty or unknown, can be edited.
  • +
  • Group: Select a group (Always on, Personal, Friends, etc.) or type your own Group name.
  • +
  • Location: Select the location, usually a room, where the device is located (Kitchen, Attic, Living room, etc.) or add a custom Location.
  • +
  • Comments: Add any comments for the device, such as a serial number, or maintenance information.
  • +
+
+

Note

+

Please note the above usage of the fields are only suggestions. You can use most of these fields for other purposes, such as storing the network interface, company owning a device, or similar.

+
+

Dummy devices

+

You can create dummy devices from the Devices listing screen.

+

Create Dummy Device

+

The MAC field and the Last IP field will then become editable.

+

Save Dummy Device

+
+

Note

+

You can couple this with the ICMP plugin which can be used to monitor the status of these devices, if they are actual devices reachable with the ping command. If not, you can use a loopback IP address so they appear online, such as 0.0.0.0 or 127.0.0.1.

+
+

Copying data from an existing device.

+

To speed up device population you can also copy data from an existing device. This can be done from the Tools tab on the Device details.

diff --git a/DEV_DEVCONTAINER/index.html b/DEV_DEVCONTAINER/index.html (new file: Devcontainer - NetAlertX Docs)

Devcontainer for NetAlertX Guide

+

This devcontainer is designed to mirror the production container environment as closely as possible, while providing a rich set of tools for development.

+

How to Get Started

+
    +
  1. +

    Prerequisites:

    + +
  2. +
  3. +

    Launch the Devcontainer:

    +
      +
    • Clone this repository.
    • +
    • Open the repository folder in VS Code.
    • +
    • A notification will pop up in the bottom-right corner asking to "Reopen in Container". Click it.
    • +
    • VS Code will now build the Docker image and connect your editor to the container. Your terminal, debugger, and all tools will now be running inside this isolated environment.
    • +
    +
  4. +
+

Key Workflows & Features

+

Once you're inside the container, everything is set up for you.

+

1. Services (Frontend & Backend)

+

Services

+

The container's startup script (.devcontainer/scripts/setup.sh) automatically starts the Nginx/PHP frontend and the Python backend. You can restart them at any time using the built-in tasks.

+

2. Integrated Debugging (Just Press F5!)

+

Debugging

+

Debugging for both the Python backend and PHP frontend is pre-configured and ready to go.

+
    +
  • Python Backend (debugpy): The backend automatically starts with a debugger attached on port 5678. Simply open a Python file (e.g., server/__main__.py), set a breakpoint, and press F5 (or select "Python Backend Debug: Attach") to connect the debugger.
  • +
  • PHP Frontend (Xdebug): Xdebug listens on port 9003. In VS Code, start listening for Xdebug connections and use a browser extension (like "Xdebug helper") to start a debugging session for the web UI.
  • +
+

3. Common Tasks (F1 -> Run Task)

+

Common tasks

+

We've created several VS Code Tasks to simplify common operations. Access them by pressing F1 and typing "Tasks: Run Task".

+
    +
  • Generate Dockerfile: This is important. The actual .devcontainer/Dockerfile is auto-generated. If you need to change the container environment, edit .devcontainer/resources/devcontainer-Dockerfile and then run this task.
  • +
  • Re-Run Startup Script: Manually re-runs the .devcontainer/scripts/setup.sh script to re-link files and restart services.
  • +
  • Start Backend (Python) / Start Frontend (nginx and PHP-FPM): Manually restart the services if needed.
  • +
+

4. Running Tests

+

Running tests

+

The environment includes pytest. You can run tests directly from the VS Code Test Explorer UI or by running pytest -q in the integrated terminal. The necessary PYTHONPATH is already configured so that tests can correctly import the server modules.

+

How to Maintain This Devcontainer

+

The setup is designed to be easy to manage. Here are the core principles:

+
    +
  • Don't Edit Dockerfile Directly: The main .devcontainer/Dockerfile is a combination of the project's root Dockerfile and a special dev-only stage. To add new tools or dependencies, edit .devcontainer/resources/devcontainer-Dockerfile and then run the Generate Dockerfile task.
  • +
  • Build-Time vs. Run-Time Setup:
      +
    • For changes that can be baked into the image (like installing a new package with apk add), add them to the resource Dockerfile.
    • +
    • For changes that must happen when the container starts (like creating symlinks, setting permissions, or starting services), use .devcontainer/scripts/setup.sh.
    • +
    +
  • +
  • Project Conventions: The .github/copilot-instructions.md file is an excellent resource to help AI and humans understand the project's architecture, conventions, and how to use existing helper functions instead of hardcoding values.
  • +
+

This setup provides a powerful and consistent foundation for all current and future contributors to NetAlertX.

diff --git a/DEV_ENV_SETUP/index.html b/DEV_ENV_SETUP/index.html (new file: Environment Setup - NetAlertX Docs)

Development Environment Setup

+

I truly appreciate all contributions! To help keep this project maintainable, this guide provides an overview of project priorities, key design considerations, and overall philosophy. It also includes instructions for setting up your environment so you can start contributing right away.

+

Development Guidelines

+

Before starting development, please review the following guidelines.

+

Priority Order (Highest to Lowest)

+
    +
  1. 🔼 Fixing core bugs that lack workarounds
  2. +
  3. 🔵 Adding core functionality that unlocks other features (e.g., plugins)
  4. +
  5. 🔵 Refactoring to enable faster development
  6. +
  7. 🔽 UI improvements (PRs welcome, but low priority)
  8. +
+

Design Philosophy

+

The application architecture is designed for extensibility and maintainability. It relies heavily on configuration manifests via plugins and settings to dynamically build the UI and populate the application with data from various sources.

+

For details, see:
+- Plugins Development (includes video)
+- Settings System

+

Focus on core functionality and integrate with existing tools rather than reinventing the wheel.

+

Examples:
+- Using Apprise for notifications instead of implementing multiple separate gateways
+- Implementing regex-based validation instead of one-off validation for each setting

+
+

Note

+

UI changes have lower priority. PRs are welcome, but please keep them small and focused.

+
+

Development Environment Set Up

+
+

Tip

+

There is also a ready to use devcontainer available.

+
+

The following steps will guide you through setting up your environment for local development and running a custom docker build on your system. For most changes the container doesn't need to be rebuilt, which speeds up development significantly.

+
+

Note

+

Replace /development with the path where your code files will be stored. The default container name is netalertx so there might be a conflict with your running containers.

+
+

1. Download the code:

+
    +
  • mkdir /development
  • +
  • cd /development && git clone https://github.com/jokob-sk/NetAlertX.git
  • +
+

2. Create a DEV .env_dev file

+

touch /development/.env_dev && sudo nano /development/.env_dev

+

The file content should be as follows, with your custom values.

+
#--------------------------------
+#NETALERTX
+#--------------------------------
+PORT=22222    # make sure this port is unique on your whole network
+DEV_LOCATION=/development/NetAlertX
+APP_DATA_LOCATION=/volume/docker_appdata
+# Make sure your GRAPHQL_PORT setting has a port that is unique on your whole host network
+APP_CONF_OVERRIDE={"GRAPHQL_PORT":"22223"} 
+# ALWAYS_FRESH_INSTALL=true # uncommenting this will always delete the content of /config and /db dirs on boot to simulate a fresh install
+
+

3. Create /db and /config dirs

+

Create a folder netalertx in the APP_DATA_LOCATION (in this example in /volume/docker_appdata) with 2 subfolders db and config.

+
    +
  • mkdir /volume/docker_appdata/netalertx
  • +
  • mkdir /volume/docker_appdata/netalertx/db
  • +
  • mkdir /volume/docker_appdata/netalertx/config
  • +
+

4. Run the container

+
    +
  • cd /development/NetAlertX && sudo docker-compose --env-file ../.env_dev up
  • +
+

You can then modify the python script without restarting/rebuilding the container every time. Additionally, you can trigger a plugin run via the UI:

+

image

+

Tips

+

A quick cheat sheet of useful commands.

+

Removing the container and image

+

A command to stop, remove the container and the image (replace netalertx and netalertx-netalertx with the appropriate values)

+
    +
  • sudo docker container stop netalertx ; sudo docker container rm netalertx ; sudo docker image rm netalertx-netalertx
  • +
+

Restart the server backend

+

Most code changes can be tested without rebuilding the container. When working on the python server backend, you only need to restart the server.

+
    +
  1. You can usually restart the backend via Maintenance > Logs > Restart server
  2. +
+

image

+
    +
  1. +

    If above doesn't work, SSH into the container and kill & restart the main script loop

    +
  2. +
  3. +

    sudo docker exec -it netalertx /bin/bash

    +
  4. +
  5. +

    pkill -f "python /app/server" && python /app/server &

    +
  6. +
  7. +

    If none of the above work, restart the docker container.

    +
  8. +
  9. +

    This is usually the last resort as sometimes the Docker engine becomes unresponsive and the whole engine needs to be restarted.

    +
  10. +
+

Contributing & Pull Requests

+

Before submitting a PR, please ensure:

+

✔ Changes are backward-compatible with existing installs.
+✔ No unnecessary changes are made.
+✔ New features are reusable, not narrowly scoped.
+✔ Features are implemented via plugins if possible.

+

Mandatory Test Cases

+
    +
  • Fresh install (no DB/config).
  • +
  • Existing DB/config compatibility.
  • +
  • +

    Notification testing:

    +
      +
    • Email
    • +
    • Apprise (e.g., Telegram)
    • +
    • Webhook (e.g., Discord)
    • +
    • MQTT (e.g., Home Assistant)
    • +
    +
  • +
  • +

    Updating Settings and their persistence.

    +
  • +
  • Updating a Device
  • +
  • Plugin functionality.
  • +
  • Error log inspection.
  • +
+
+

Note

+

Always run all available tests as per the Testing documentation.

+
+ + + + + + + + + + + + + +
diff --git a/DEV_PORTS_HOST_MODE/index.html b/DEV_PORTS_HOST_MODE/index.html (new file: Dev Ports in Host Network Mode - NetAlertX Docs)

Dev Ports in Host Network Mode

+

When using "--network=host" in the devcontainer, VS Code's normal port forwarding model doesn't apply. All container ports are already on the host network namespace, so:

+
    +
  • Listing ports in forwardPorts can cause VS Code to pre-bind or reserve them (conflicts with startup scripts waiting for a free port).
  • +
  • The PORTS panel will not auto-detect services reliably, because forwarding isn't occurring.
  • +
  • Debugger ports (e.g. Xdebug 9003, Python debugpy 5678) can still be listed safely.
  • +
+ +
    +
  1. Only include debugger ports in forwardPorts, e.g. "forwardPorts": [5678, 9003]
  2. +
  3. Do NOT list application service ports (e.g. 20211, 20212) there when in host mode.
  4. +
  5. Use the helper task to enumerate current bindings:
  6. +
  7. Run the task: Tasks: Run Task > [Dev Container] List NetAlertX Ports
  8. +
+

Port Enumeration Script

+

Script: scripts/list-ports.sh. It outputs the binding address, PID (if resolvable) and process name for key ports.

+

You can edit the PORTS variable inside that script to add/remove watched ports.

+
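If you prefer a one-off check without the task, you can also list the listeners directly from a terminal inside the container (ports shown are the defaults mentioned in this guide; if ss is not available, netstat -tlnp is an alternative):

ss -tlnp | grep -E '20211|20212|5678|9003'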

Xdebug Notes

+

Set in 99-xdebug.ini:

+
xdebug.client_host=127.0.0.1
+xdebug.client_port=9003
+xdebug.discover_client_host=1
+
+

Ensure your IDE is listening on 9003.

+

Troubleshooting

+ + + + + + + + + + + + + + + + + + + + + + + + + +
SymptomCauseFix
Waiting for port 20211 to free... repeatsVS Code pre-bound the port via forwardPortsRemove the port from forwardPorts, rebuild, retry
PHP request hangs at startXdebug trying to connect to unresolved host (host.docker.internal)Use 127.0.0.1 or rely on discovery
PORTS panel emptyExpected in host modeUse the port enumeration task
+

Future Improvements

+
    +
  • Optional: add a small web status endpoint summarizing runtime ports.
  • +
  • Optional: detect host mode in setup.sh and skip the wait loop if the PID using port is the intended process.
  • +
diff --git a/DOCKER_COMPOSE/index.html b/DOCKER_COMPOSE/index.html (new file: Docker Compose - NetAlertX Docs)

NetAlertX and Docker Compose

+
+

Warning

+

⚠️ Important: The docker-compose has recently changed. Carefully read the Migration guide for detailed instructions.

+
+

Great care is taken to ensure NetAlertX meets the needs of everyone while being flexible enough for anyone. This document outlines how you can configure your docker-compose. There are many settings, so we recommend using the Baseline Docker Compose as-is, or modifying it for your system.

+
+

Note

+

The container needs to run in network_mode: "host" to access Layer 2 networking such as arp, nmap and others. Because Windows does not support this feature, a Windows host is not a supported operating system.

+
+

Baseline Docker Compose

+

There is one baseline for NetAlertX. That's the default security-enabled official distribution.

+
services:
+  netalertx:
+  #use an environmental variable to set host networking mode if needed
+    container_name: netalertx                       # The container name shown when you run docker container ls
+    image: ghcr.io/jokob-sk/netalertx-dev:latest
+    network_mode: ${NETALERTX_NETWORK_MODE:-host}   # Use host networking for ARP scanning and other services
+
+    read_only: true                                 # Make the container filesystem read-only
+    cap_drop:                                       # Drop all capabilities for enhanced security
+      - ALL
+    cap_add:                                        # Add only the necessary capabilities
+      - NET_ADMIN                                   # Required for ARP scanning
+      - NET_RAW                                     # Required for raw socket operations
+      - NET_BIND_SERVICE                            # Required to bind to privileged ports (nbtscan)
+
+    volumes:
+      - type: volume                                # Persistent Docker-managed named volume for config + database
+        source: netalertx_data
+        target: /data                               # `/data/config` and `/data/db` live inside this mount
+        read_only: false
+
+    # Example custom local folder called /home/user/netalertx_data
+    # - type: bind
+    #   source: /home/user/netalertx_data
+    #   target: /data
+    #   read_only: false
+    # ... or use the alternative format
+    # - /home/user/netalertx_data:/data:rw
+
+      - type: bind                                  # Bind mount for timezone consistency
+        source: /etc/localtime
+        target: /etc/localtime
+        read_only: true
+
+      # Mount your DHCP server file into NetAlertX for a plugin to access
+      # - path/on/host/to/dhcp.file:/resources/dhcp.file
+
+    # tmpfs mount consolidates writable state for a read-only container and improves performance
+    # uid=20211 and gid=20211 is the netalertx user inside the container
+    # mode=1700 grants rwx------ permissions to the netalertx user only
+    tmpfs:
+      # Comment out to retain logs between container restarts - this has a server performance impact.
+      - "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
+
+      # Retain logs - comment out tmpfs /tmp if you want to retain logs between container restarts
+      # Please note if you remove the /tmp mount, you must create and maintain sub-folder mounts.
+      # - /path/on/host/log:/tmp/log
+      # - "/tmp/api:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
+      # - "/tmp/nginx:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
+      # - "/tmp/run:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
+
+    environment:
+      LISTEN_ADDR: ${LISTEN_ADDR:-0.0.0.0}                   # Listen for connections on all interfaces
+      PORT: ${PORT:-20211}                                   # Application port
+      GRAPHQL_PORT: ${GRAPHQL_PORT:-20212}                   # GraphQL API port (passed into APP_CONF_OVERRIDE at runtime)
+  #    NETALERTX_DEBUG: ${NETALERTX_DEBUG:-0}                 # 0=kill all services and restart if any dies. 1 keeps running dead services.
+
+    # Resource limits to prevent resource exhaustion
+    mem_limit: 2048m            # Maximum memory usage
+    mem_reservation: 1024m      # Soft memory limit
+    cpu_shares: 512             # Relative CPU weight for CPU contention scenarios
+    pids_limit: 512             # Limit the number of processes/threads to prevent fork bombs
+    logging:
+      driver: "json-file"       # Use JSON file logging driver
+      options:
+        max-size: "10m"         # Rotate log files after they reach 10MB
+        max-file: "3"           # Keep a maximum of 3 log files
+
+    # Always restart the container unless explicitly stopped
+    restart: unless-stopped
+
+volumes:                        # Persistent volume for configuration and database storage
+  netalertx_data:
+
+

Run or re-run it:

+
docker compose up --force-recreate
+
+

Customize with Environmental Variables

+

You can override the default settings by passing environmental variables to the docker compose up command.

+

Example using a single variable:

+

This command runs NetAlertX on port 8080 instead of the default 20211.

+
PORT=8080 docker compose up
+
+

Example using all available variables:

+

This command demonstrates overriding all primary environmental variables: running with host networking, on port 20211, GraphQL on 20212, and listening on all IPs.

+
NETALERTX_NETWORK_MODE=host \
+LISTEN_ADDR=0.0.0.0 \
+PORT=20211 \
+GRAPHQL_PORT=20212 \
+NETALERTX_DEBUG=0 \
+docker compose up
+
+
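To preview how such overrides resolve before actually starting anything, you can render the merged configuration (standard Docker Compose behaviour):

PORT=8080 docker compose config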

docker-compose.yaml Modifications

+

Modification 1: Use a Local Folder (Bind Mount)

+

By default, the baseline compose file uses a single named volume (netalertx_data) mounted at /data. This single-volume layout is preferred because NetAlertX manages both configuration and the database under /data (for example, /data/config and /data/db) via its web UI. Using one named volume simplifies permissions and portability: Docker manages the storage and NetAlertX manages the files inside /data.

+

A two-volume layout that mounts /data/config and /data/db separately (for example, netalertx_config and netalertx_db) is supported for backward compatibility and some advanced workflows, but it is an abnormal/legacy layout and not recommended for new deployments.

+

However, if you prefer to have direct, file-level access to your configuration for manual editing, a "bind mount" is a simple alternative. This tells Docker to use a specific folder from your computer (the "host") inside the container.

+

How to make the change:

+
    +
  1. +

    Choose a location on your computer. For example, /local_data_dir.

    +
  2. +
  3. +

    Create the subfolders: mkdir -p /local_data_dir/config and mkdir -p /local_data_dir/db.

    +
  4. +
  5. +

    Edit your docker-compose.yml and find the volumes: section (the one inside the netalertx: service).

    +
  6. +
  7. +

    Comment out (add a # in front) or delete the type: volume blocks for netalertx_config and netalertx_db.

    +
  8. +
  9. +

    Add new lines pointing to your local folders.

    +
  10. +
+

Before (Using Named Volumes - Preferred):

+
...
+    volumes:
+      - netalertx_config:/data/config:rw #short-form volume (no /path is a short volume)
+      - netalertx_db:/data/db:rw
+...
+
+

After (Using a Local Folder / Bind Mount):
Make sure to replace /local_data_dir with your actual path. The format is <path_on_your_computer>:<path_inside_container>:<options>.

+
...
+    volumes:
+#      - netalertx_config:/data/config:rw
+#      - netalertx_db:/data/db:rw
+      - /local_data_dir/config:/data/config:rw
+      - /local_data_dir/db:/data/db:rw
+...
+
+

Now, any files created by NetAlertX in /data/config will appear in your /local_data_dir/config folder.

+

This same method works for mounting other things, like custom plugins or enterprise NGINX files, as shown in the commented-out examples in the baseline file.

+

Example Configuration Summaries

+

Here are the essential modifications for common alternative setups.

+

Example 2: External .env File for Paths

+

This method is useful for keeping your paths and other settings separate from your main compose file, making it more portable.

+

docker-compose.yml changes:

+
...
+services:
+  netalertx:
+    environment:
+      - PORT=${PORT}
+      - GRAPHQL_PORT=${GRAPHQL_PORT}
+
+...
+
+

.env file contents:

+
PORT=20211
+NETALERTX_NETWORK_MODE=host
+LISTEN_ADDR=0.0.0.0
+GRAPHQL_PORT=20212
+
+

Run with: sudo docker-compose --env-file /path/to/.env up

+

Example 3: Docker Swarm

+

This is for deploying on a Docker Swarm cluster. The key differences from the baseline are the removal of network_mode: from the service, and the addition of deploy: and networks: blocks at both the service and top-level.

+

Here are the only changes you need to make to the baseline compose file to make it Swarm-compatible.

+
services:
+  netalertx:
+    ...
+    #    network_mode: ${NETALERTX_NETWORK_MODE:-host} # <-- DELETE THIS LINE
+    ...
+
+    # 2. ADD a 'networks:' block INSIDE the service to connect to the external host network.
+    networks:
+      - outside
+    # 3. ADD a 'deploy:' block to manage the service as a swarm replica.
+    deploy:
+      mode: replicated
+      replicas: 1
+      restart_policy:
+        condition: on-failure
+
+
+# 4. ADD a new top-level 'networks:' block at the end of the file to define 'outside' as the external 'host' network.
+networks:
+  outside:
+    external:
+      name: "host"
+
+ + + + + + + + + + + + + +
diff --git a/DOCKER_INSTALLATION/index.html b/DOCKER_INSTALLATION/index.html (new file: Docker Guide - NetAlertX Docs)

Docker Size +Docker Pulls +GitHub Release +Discord +Home Assistant

+

NetAlertX - Network scanner & notification framework

+ + + + + + + + + + + + + + + + + + + +
📑 Docker guide🚀 Releases📚 Docs🔌 Plugins🤖 Ask AI
+

+ +

+

Head to https://netalertx.com/ for more gifs and screenshots 📷.

+
+

Note

+

There is also an experimental 🧪 bare-metal install method available.

+
+

📕 Basic Usage

+
+

Warning

+

You will have to run the container on the host network and specify SCAN_SUBNETS unless you use other plugin scanners. The initial scan can take a few minutes, so please wait 5-10 minutes for the initial discovery to finish.

+
+
docker run -d --rm --network=host \
+  -v /local_data_dir:/data \
+  -v /etc/localtime:/etc/localtime \
+  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
+  -e PORT=20211 \
+  -e APP_CONF_OVERRIDE='{"GRAPHQL_PORT":"20214"}' \
+  ghcr.io/jokob-sk/netalertx:latest
+
+

See alternative docker-compose examples.

+

Default ports

+ + + + + + + + + + + + + + + + + + + + +
DefaultDescriptionHow to override
20211Port of the web interface-e PORT=20222
20212Port of the backend API server-e APP_CONF_OVERRIDE={"GRAPHQL_PORT":"20214"} or via the GRAPHQL_PORT Setting
+

Docker environment variables

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
VariableDescriptionExample Value
PORTPort of the web interface20211
LISTEN_ADDRSet the specific IP Address for the listener address for the nginx webserver (web interface). This could be useful when using multiple subnets to hide the web interface from all untrusted networks.0.0.0.0
LOADED_PLUGINSDefault plugins to load. Plugins cannot be loaded with APP_CONF_OVERRIDE, you need to use this variable instead and then specify the plugins settings with APP_CONF_OVERRIDE.["PIHOLE","ASUSWRT"]
APP_CONF_OVERRIDEJSON override for settings (except LOADED_PLUGINS).{"SCAN_SUBNETS":"['192.168.1.0/24 --interface=eth1']","GRAPHQL_PORT":"20212"}
ALWAYS_FRESH_INSTALL⚠ If true will delete the content of the /db & /config folders. For testing purposes. Can be coupled with watchtower to have an always freshly installed netalertx/netalertx-dev image.true
+
+

You can override the default GraphQL port setting GRAPHQL_PORT (set to 20212) by using the APP_CONF_OVERRIDE env variable. LOADED_PLUGINS and settings in APP_CONF_OVERRIDE can be specified via the UI as well.

+
+

Docker paths

+
+

Note

+

See also Backup strategies.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
RequiredPathDescription
:/dataFolder which needs to contain a /db and /config sub-folders.
/etc/localtime:/etc/localtime:roEnsuring the timezone is the same as on the server.
:/tmp/logLogs folder useful for debugging if you have issues setting up the container
:/tmp/apiThe API endpoint containing static (but regularly updated) json and other files. Path configurable via NETALERTX_API environment variable.
:/app/front/plugins/<plugin>/ignore_pluginMap a file ignore_plugin to ignore a plugin. Plugins can be soft-disabled via settings. More in the Plugin docs.
:/etc/resolv.confUse a custom resolv.conf file for better name resolution.
+

Folder structure

+

Use separate db and config directories, do not nest them:

+
data
+├── config
+└── db
+
+

Permissions

+

If you are facing permission issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the /local_data_dir/db and /local_data_dir/config folders (replace local_data_dir with the location of your /db and /config folders).

+
sudo chown -R 20211:20211 /local_data_dir
+sudo chmod -R a+rwx /local_data_dir
+
+

Initial setup

+
    +
  • If unavailable, the app generates a default app.conf and app.db file on the first run.
  • +
  • The preferred way is to manage the configuration via the Settings section in the UI; if the UI is inaccessible, you can modify app.conf in the /data/config/ folder directly.
  • +
+

Setting up scanners

+

You have to specify which network(s) should be scanned. This is done by entering subnets that are accessible from the host. If you use the default ARPSCAN plugin, you have to specify at least one valid subnet and interface in the SCAN_SUBNETS setting. See the documentation on How to set up multiple SUBNETS, VLANs and what are limitations for troubleshooting and more advanced scenarios.

+
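If you use the default ARPSCAN plugin, the subnet can also be passed directly at container start via APP_CONF_OVERRIDE. A sketch based on the Basic Usage command above (value taken from the variables table; adjust the subnet and interface to your network, and note the shell quoting around the JSON):

docker run -d --rm --network=host \
  -v /local_data_dir:/data \
  -e PORT=20211 \
  -e APP_CONF_OVERRIDE="{\"SCAN_SUBNETS\":\"['192.168.1.0/24 --interface=eth1']\"}" \
  ghcr.io/jokob-sk/netalertx:latest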

If you are running PiHole you can synchronize devices directly. Check the PiHole configuration guide for details.

+
+

Note

+

You can bulk-import devices via the CSV import method.

+
+

Community guides

+

You can read or watch several community configuration guides in Chinese, Korean, German, or French.

+
+

Please note these might be outdated. Rely on official documentation first.

+
+

Common issues

+ +

💙 Support me

+ + + + + + + + + + + + + + + +
GitHubBuy Me A CoffeePatreon
+
    +
  • Bitcoin: 1N8tupjeCK12qRVU2XrV17WvKK7LCawyZM
  • +
  • Ethereum: 0x6e2749Cb42F4411bc98501406BdcD82244e3f9C7
  • +
+
+

📧 Email me at netalertx@gmail.com if you want to get in touch or if I should add other sponsorship platforms.

+
+ + + + + + + + + + + + + +
diff --git a/DOCKER_MAINTENANCE/index.html b/DOCKER_MAINTENANCE/index.html (new file: Docker Maintenance - NetAlertX Docs)

The NetAlertX Container Operator's Guide

+
+

Warning

+

⚠️ Important: The docker-compose has recently changed. Carefully read the Migration guide for detailed instructions.

+
+

This guide assumes you are starting with the official docker-compose.yml file provided with the project. We strongly recommend you start with or migrate to this file as your baseline and modify it to suit your specific needs (e.g., changing file paths). While there are many ways to configure NetAlertX, the default file is designed to meet the mandatory security baseline with layer-2 networking capabilities while operating securely and without startup warnings.

+

This guide provides direct, concise solutions for common NetAlertX administrative tasks. It is structured to help you identify a problem, implement the solution, and understand the details.

+

Guide Contents

+
    +
  • Using a Local Folder for Configuration
  • +
  • Migrating from a Local Folder to a Docker Volume
  • +
  • Applying a Custom Nginx Configuration
  • +
  • Mounting Additional Files for Plugins
  • +
+
+

Note

+

Other relevant resources:
- Fixing Permission Issues
- Handling Backups
- Accessing Application Logs

+
+
+

Task: Using a Local Folder for Configuration

+

Problem

+

You want to edit your app.conf and other configuration files directly from your host machine, instead of using a Docker-managed volume.

+

Solution

+
    +
  1. Stop the container:
  2. +
+

docker-compose down

2. (Optional but Recommended) Back up your data using the method in Part 1.
3. Create a local folder on your host machine (e.g., /data/netalertx_config).
4. Edit docker-compose.yml:

+
    +
  • Comment out the netalertx_config volume entry.
  • +
  • Uncomment and set the path for the "Example custom local folder" bind mount entry.
  • +
+

...
volumes:
  # - type: volume
  #   source: netalertx_config
  #   target: /data/config
  #   read_only: false
  ...
  # Example custom local folder called /data/netalertx_config
  - type: bind
    source: /data/netalertx_config
    target: /data/config
    read_only: false
...

5. (Optional) Restore your backup.
6. Restart the container:

+

docker-compose up -d

+

About This Method

+

This replaces the Docker-managed volume with a "bind mount." This is a direct mapping between a folder on your host computer (/data/netalertx_config) and a folder inside the container (/data/config), allowing you to edit the files directly.

+
+

Task: Migrating from a Local Folder to a Docker Volume

+

Problem

+

You are currently using a local folder (bind mount) for your configuration (e.g., /data/netalertx_config) and want to switch to the recommended Docker-managed volume (netalertx_config).

+

Solution

+
    +
  1. Stop the container:
  2. +
+

docker-compose down

2. Edit docker-compose.yml:

+
    +
  • Comment out the bind mount entry for your local folder.
  • +
  • Uncomment the netalertx_config volume entry.
  • +
+

...
volumes:
  - type: volume
    source: netalertx_config
    target: /data/config
    read_only: false
  ...
  # Example custom local folder called /data/netalertx_config
  # - type: bind
  #   source: /data/netalertx_config
  #   target: /data/config
  #   read_only: false
...

3. (Optional) Initialize the volume:

+

docker-compose up -d && docker-compose down

4. Run the migration command (replace /data/netalertx_config with your actual path):

+

docker run --rm -v netalertx_config:/config -v /data/netalertx_config:/local-config alpine \
  sh -c "tar -C /local-config -c . | tar -C /config -x"

5. Restart the container:

+

docker-compose up -d

+

About This Method

+

This uses a temporary alpine container that mounts both your source folder (/local-config) and destination volume (/config). The tar ... | tar ... command safely copies all files, including hidden ones, preserving structure.
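To confirm the copy succeeded, you can list the contents of the destination volume with another throwaway container (a minimal sketch - the volume name netalertx_config matches the compose file above):

docker run --rm -v netalertx_config:/config alpine ls -la /config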

+
+

Task: Applying a Custom Nginx Configuration

+

Problem

+

You need to override the default Nginx configuration to add features like LDAP, SSO, or custom SSL settings.

+

Solution

+
    +
  1. Stop the container:
  2. +
+

docker-compose down

2. Create your custom config file on your host (e.g., /data/my-netalertx.conf).
3. Edit docker-compose.yml:

+

...
# Use a custom Enterprise-configured nginx config for ldap or other settings
- /data/my-netalertx.conf:/tmp/nginx/active-config/netalertx.conf:ro
...

4. Restart the container:

+

docker-compose up -d

+

About This Method

+

Docker’s bind mount overlays your host file (my-netalertx.conf) on top of the default file inside the container. The container remains read-only, but Nginx reads your file as if it were the default.
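To double-check that Nginx is reading your file, you can print the active configuration from inside the running container (a quick sanity check, assuming the container is named netalertx as in the default compose file):

docker exec netalertx head -n 20 /tmp/nginx/active-config/netalertx.conf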

+
+

Task: Mounting Additional Files for Plugins

+

Problem

+

A plugin (like DHCPLSS) needs to read a file from your host machine (e.g., /var/lib/dhcp/dhcpd.leases).

+

Solution

+
    +
  1. Stop the container:
  2. +
+

docker-compose down

2. Edit docker-compose.yml and add a new line under the volumes: section:

+

...
volumes:
  ...
  # Mount for DHCPLSS plugin
  - /var/lib/dhcp/dhcpd.leases:/mnt/dhcpd.leases:ro
  ...

3. Restart the container:

+

docker-compose up -d

4. In the NetAlertX web UI, configure the plugin to read from:

+

/mnt/dhcpd.leases

+

About This Method

+

This maps your host file to a new, read-only (:ro) location inside the container. The plugin can then safely read this file without exposing anything else on your host filesystem.
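Before pointing the plugin at the file, you can verify it is visible and readable inside the container (a sketch, assuming the container is named netalertx):

docker exec netalertx head -n 5 /mnt/dhcpd.leases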

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/DOCKER_PORTAINER/index.html b/DOCKER_PORTAINER/index.html new file mode 100644 index 00000000..4b9a2d76 --- /dev/null +++ b/DOCKER_PORTAINER/index.html @@ -0,0 +1,4287 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Portainer Stacks - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Deploying NetAlertX in Portainer (via Stacks)

+

This guide shows you how to set up NetAlertX using Portainer’s Stacks feature.

+

Portainer > Stacks

+
+

1. Prepare Your Host

+

Before deploying, make sure you have a folder on your Docker host for NetAlertX data. Replace APP_FOLDER with your preferred location, for example /local_data_dir here:

+
mkdir -p /local_data_dir/netalertx/config
+mkdir -p /local_data_dir/netalertx/db
+mkdir -p /local_data_dir/netalertx/log
+
+
+

2. Open Portainer Stacks

+
    +
  1. Log in to your Portainer UI.
  2. +
  3. Navigate to StacksAdd stack.
  4. +
  5. Give your stack a name (e.g., netalertx).
  6. +
+
+

3. Paste the Stack Configuration

+

Copy and paste the following YAML into the Web editor:

+
services:
+  netalertx:
+    container_name: netalertx
+    # Use this line for stable release
+    image: "ghcr.io/jokob-sk/netalertx:latest"
+    # Or, use this for the latest development build
+    # image: "ghcr.io/jokob-sk/netalertx-dev:latest"
+    network_mode: "host"
+    restart: unless-stopped
+    cap_drop:       # Drop all capabilities for enhanced security
+      - ALL
+    cap_add:        # Re-add necessary capabilities
+      - NET_RAW
+      - NET_ADMIN
+      - NET_BIND_SERVICE
+    volumes:
+      - ${APP_FOLDER}/netalertx/config:/data/config
+      - ${APP_FOLDER}/netalertx/db:/data/db
+      # to sync with system time
+      - /etc/localtime:/etc/localtime:ro
+    tmpfs:
+      # All writable runtime state resides under /tmp; comment out to persist logs between restarts
+      - "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
+    environment:
+      - PORT=${PORT}
+      - APP_CONF_OVERRIDE=${APP_CONF_OVERRIDE}
+
+
+

4. Configure Environment Variables

+

In the Environment variables section of Portainer, add the following:

+
    +
  • APP_FOLDER=/local_data_dir (or wherever you created the directories in step 1)
  • +
  • PORT=22022 (or another port if needed)
  • +
  • APP_CONF_OVERRIDE={"GRAPHQL_PORT":"22023"} (optional advanced settings, otherwise the backend API server PORT defaults to 20212)
  • +
+
+
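For reference, the same values written as plain NAME=value pairs, for example when using Portainer's advanced-mode environment editor (these are the example values from above - adjust them to your setup):

APP_FOLDER=/local_data_dir
PORT=22022
APP_CONF_OVERRIDE={"GRAPHQL_PORT":"22023"}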

5. Ensure permissions

+
+

Tip

+

If you are facing permission issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the /local_data_dir/db and /local_data_dir/config folders (replace local_data_dir with the location where your /db and /config folders are located).

+

sudo chown -R 20211:20211 /local_data_dir

+

sudo chmod -R a+rwx /local_data_dir

+
+
+

6. Deploy the Stack

+
    +
  1. Scroll down and click Deploy the stack.
  2. +
  3. Portainer will pull the image and start NetAlertX.
  4. +
  5. Once running, access the app at:
  6. +
+
http://<your-docker-host-ip>:22022
+
+
+

7. Verify and Troubleshoot

+
    +
  • Check logs via Portainer → ContainersnetalertxLogs.
  • +
  • Logs are stored under ${APP_FOLDER}/netalertx/log if you enabled that volume.
  • +
+

Once the application is running, configure it by reading the initial setup guide, or troubleshoot common issues.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/DOCKER_SWARM/index.html b/DOCKER_SWARM/index.html new file mode 100644 index 00000000..9a379f4e --- /dev/null +++ b/DOCKER_SWARM/index.html @@ -0,0 +1,4200 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Docker Swarm - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Docker Swarm Deployment Guide (IPvlan)

+

This guide describes how to deploy NetAlertX in a Docker Swarm environment using an ipvlan network. This enables the container to receive a LAN IP address directly, which is ideal for network monitoring.

+
+

⚙️ Step 1: Create an IPvlan Config-Only Network on All Nodes

+
+

Run this command on each node in the Swarm.

+
+
docker network create -d ipvlan \
+  --subnet=192.168.1.0/24 \              # 🔧 Replace with your LAN subnet
+  --gateway=192.168.1.1 \                # 🔧 Replace with your LAN gateway
+  -o ipvlan_mode=l2 \
+  -o parent=eno1 \                       # 🔧 Replace with your network interface (e.g., eth0, eno1)
+  --config-only \
+  ipvlan-swarm-config
+
+
+

🖥️ Step 2: Create the Swarm-Scoped IPvlan Network (One-Time Setup)

+
+

Run this on one Swarm manager node only.

+
+
docker network create -d ipvlan \
+  --scope swarm \
+  --config-from ipvlan-swarm-config \
+  swarm-ipvlan
+
+
+

🧾 Step 3: Deploy NetAlertX with Docker Compose

+

Use the following Compose snippet to deploy NetAlertX with a static LAN IP assigned via the swarm-ipvlan network.

+
services:
+  netalertx:
+    image: ghcr.io/jokob-sk/netalertx:latest
+...
+    networks:
+      swarm-ipvlan:
+        ipv4_address: 192.168.1.240     # ⚠️ Choose a free IP from your LAN
+    deploy:
+      mode: replicated
+      replicas: 1
+      restart_policy:
+        condition: on-failure
+      placement:
+        constraints:
+          - node.role == manager        # 🔄 Or use: node.labels.netalertx == true
+
+networks:
+  swarm-ipvlan:
+    external: true
+
+
+
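Once you have completed the snippet above and saved it as docker-compose.yml, a Swarm deployment is typically done with docker stack deploy rather than docker-compose (a sketch - the stack name netalertx is an example, and the service name follows the <stack>_<service> pattern):

docker stack deploy -c docker-compose.yml netalertx
docker service ps netalertx_netalertx     # check placement and task state
docker network inspect swarm-ipvlan       # confirm the ipvlan network exists and is attached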

✅ Notes

+
    +
  • The ipvlan setup allows NetAlertX to have a direct IP on your LAN.
  • +
  • Replace eno1 with your interface, IP addresses, and volume paths to match your environment.
  • +
  • Make sure the assigned IP (192.168.1.240 above) is not in use or managed by DHCP.
  • +
  • You may also use a node label constraint instead of node.role == manager for more control.
  • +
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/FILE_PERMISSIONS/index.html b/FILE_PERMISSIONS/index.html new file mode 100644 index 00000000..345610c6 --- /dev/null +++ b/FILE_PERMISSIONS/index.html @@ -0,0 +1,4251 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Docker File Permissions - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Managing File Permissions for NetAlertX on a Read-Only Container

+

Sometimes, permission issues arise if your existing host directories were created by a previous container running as root or another UID. The container will fail to start with "Permission Denied" errors.

+
+

Tip

+

NetAlertX runs in a secure, read-only Alpine-based container under a dedicated netalertx user (UID 20211, GID 20211). All writable paths are either mounted as persistent volumes or tmpfs filesystems. This ensures consistent file ownership and prevents privilege escalation.

+
+

Try starting the container with all data in non-persistent volumes. If this works, the issue is likely related to the permissions of your persistent data mount locations on your server.

+
docker run --rm --network=host \
+  -v /etc/localtime:/etc/localtime:ro \
+  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
+  -e PORT=20211 \
+  ghcr.io/jokob-sk/netalertx:latest
+
+
+

Warning

+

The above should be only used as a test - once the container restarts, all data is lost.

+
+
+

Writable Paths

+

NetAlertX requires certain paths to be writable at runtime. These paths should be mounted either as host volumes or tmpfs in your docker-compose.yml or docker run command:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
PathPurposeNotes
/data/configApplication configurationPersistent volume recommended
/data/dbDatabase filesPersistent volume recommended
/tmp/logLogsLives under /tmp; optional host bind to retain logs
/tmp/apiAPI cacheSubdirectory of /tmp
/tmp/nginx/active-configActive nginx configuration overrideMount /tmp (or override specific file)
/tmp/runRuntime directories for nginx & PHPSubdirectory of /tmp
/tmpPHP session save directoryBacked by tmpfs for runtime writes
+
+

Mounting /tmp as tmpfs automatically covers all of its subdirectories (log, api, run, nginx/active-config, etc.).

+

All these paths will have UID 20211 / GID 20211 inside the container. Files on the host will appear owned by 20211:20211.
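You can verify this from the host by listing the mounted folders with numeric IDs (assuming your data lives in /local_data_dir as in the examples below):

ls -ln /local_data_dir
# the config and db entries should show 20211 20211 as owner and group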

+
+
+

Solution

+
    +
  1. Run the container once as root (--user "0") to allow it to correct permissions automatically:
  2. +
+
docker run -it --rm --name netalertx --user "0" \
+  -v /local_data_dir:/data \
+  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
+  ghcr.io/jokob-sk/netalertx:latest
+
+
    +
  1. Wait for logs showing permissions being fixed. The container will then hang intentionally.
  2. +
  3. Press Ctrl+C to stop the container.
  4. +
  5. Start the container normally with your docker-compose.yml or docker run command.
  6. +
+
+

The container startup script detects root and runs chown -R 20211:20211 on all volumes, fixing ownership for the secure netalertx user.

+
+
+

Tip

+

If you are facing permission issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the /local_data_dir/db and /local_data_dir/config folders (replace local_data_dir with the location where your /db and /config folders are located).

+

sudo chown -R 20211:20211 /local_data_dir

+

sudo chmod -R a+rwx /local_data_dir

+
+
+

Example: docker-compose.yml with tmpfs

+
services:
+  netalertx:
+    container_name: netalertx
+    image: "ghcr.io/jokob-sk/netalertx"
+    network_mode: "host"
+    cap_drop:                                       # Drop all capabilities for enhanced security
+      - ALL
+    cap_add:                                        # Add only the necessary capabilities
+      - NET_ADMIN                                   # Required for ARP scanning
+      - NET_RAW                                     # Required for raw socket operations
+      - NET_BIND_SERVICE                            # Required to bind to privileged ports (nbtscan)
+    restart: unless-stopped
+    volumes:
+      - /local_data_dir:/data
+      - /etc/localtime:/etc/localtime
+    environment:
+      - PORT=20211
+    tmpfs:
+      - "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
+
+
+

This setup ensures all writable paths are either in tmpfs or host-mounted, and the container never writes outside of controlled volumes.

+
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/FIX_OFFLINE_DETECTION/index.html b/FIX_OFFLINE_DETECTION/index.html new file mode 100644 index 00000000..a7aec5c9 --- /dev/null +++ b/FIX_OFFLINE_DETECTION/index.html @@ -0,0 +1,4328 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Incorrect Offline Detection - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Troubleshooting: Devices Show Offline When They Are Online

+

In some network setups, certain devices may intermittently appear as offline in NetAlertX, even though they are connected and responsive. This issue is often more noticeable with devices that have higher IP addresses within the subnet.

+
+

Note

+

Network presence graph showing increased drop outs before enabling additional ICMP scans and continuous online presence after following this guide. This graph shows a sudden spike in drop outs probably caused by a device software update. +before after presence

+
+

Symptoms

+
    +
  • Devices sporadically show as offline in the presence timeline.
  • +
  • This behavior often affects devices with higher IPs (e.g., 192.168.1.240+).
  • +
  • Presence data appears inconsistent or unreliable despite the device being online.
  • +
+

Cause

+

This issue is typically related to scanning limitations:

+
    +
  • ARP scan timeouts may prevent full subnet coverage.
  • +
  • +

    Sole reliance on ARP can result in missed detections:

    +
  • +
  • +

    Some devices (like iPhones) suppress or reject frequent ARP requests.

    +
  • +
  • +

    ARP responses may be blocked or delayed due to power-saving features or OS behavior.

    +
  • +
  • +

    Scanning frequency conflicts, where devices ignore repeated scans within a short period.

    +
  • +
+ +

To improve presence accuracy and reduce false offline states:

+

✅ Increase ARP Scan Timeout

+

Extend the ARP scanner timeout and DURATION to ensure full subnet coverage:

+
ARPSCAN_RUN_TIMEOUT=360
+ARPSCAN_DURATION=30
+
+
+

Adjust based on your network size and device count.

+
+

✅ Add ICMP (Ping) Scanning

+

Enable the ICMP scan plugin to complement ARP detection. ICMP is often more reliable for detecting active hosts, especially when ARP fails.

+

✅ Use Multiple Detection Methods

+

A combined approach greatly improves detection robustness:

+
    +
  • ARPSCAN (default)
  • +
  • ICMP (ping)
  • +
  • NMAPDEV (nmap)
  • +
+

This hybrid strategy increases reliability, especially for down detection and alerting. See other plugins that might be compatible with your setup, and review the benefits and drawbacks of individual scan methods in their respective docs.

+

Results

+

After increasing the ARP timeout and adding ICMP scanning (on select IP ranges), users typically report:

+
    +
  • More consistent presence graphs
  • +
  • Fewer false offline events
  • +
  • Better coverage across all IP ranges
  • +
+

Summary

+ + + + + + + + + + + + + + + + + + + + + +
SettingRecommendation
ARPSCAN_RUN_TIMEOUTIncrease to ensure scans reach all IPs
ICMP ScanEnable to detect devices ARP might miss
Multi-method ScanningUse a mix of ARP, ICMP, and NMAP-based methods
+
+

Tip: Each environment is unique. Consider fine-tuning scan settings based on your network size, device behavior, and desired detection accuracy.

+

Let us know in the NetAlertX Discussions if you have further feedback or edge cases.

+

See also Remote Networks for more advanced setups.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/FRONTEND_DEVELOPMENT/index.html b/FRONTEND_DEVELOPMENT/index.html new file mode 100644 index 00000000..fbda22b8 --- /dev/null +++ b/FRONTEND_DEVELOPMENT/index.html @@ -0,0 +1,4138 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Frontend Development - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Frontend development

+

This page contains tips for frontend development when extending NetAlertX. Guiding principles are:

+
    +
  1. Maintainability
  2. +
  3. Extendability
  4. +
  5. Reusability
  6. +
  7. Placing more functionality into Plugins and enhancing core Plugins functionality
  8. +
+

That means that, when writing code, you should focus on reusing what's available instead of writing quick fixes, and on creating reusable functions instead of bespoke functionality.

+

🔍 Examples

+

Some examples how to apply the above:

+
+

Example 1

+

I want to implement a scan function. Options would be:

+
    +
  1. To add a manual scan functionality to the deviceDetails.php page.
  2. +
  3. To create a separate page that handles the execution of the scan.
  4. +
  5. To create a configurable Plugin.
  6. +
+

From the above, number 3 would be the most appropriate solution, followed by number 2. Number 1 would be approved only in special circumstances.

+

Example 2

+

I want to change the behavior of the application. Options to implement this could be:

+
    +
  1. Hard-code the changes in the code.
  2. +
  3. Implement the changes and add settings to influence the behavior in the initialize.py file so the user can adjust these.
  4. +
  5. Implement the changes and add settings via a setting-only plugin.
  6. +
  7. Implement the changes in a way so the behavior can be toggled on each plugin so the core capabilities of Plugins get extended.
  8. +
+

From the above, number 4 would be the most appropriate solution, followed by number 3. Number 1 or 2 would be approved only in special circumstances.

+
+

💡 Frontend tips

+

Some useful frontend JavaScript functions:

+
    +
  • getDevDataByMac(macAddress, devicesColumn) - method to retrieve any device data (database column) based on MAC address in the frontend
  • +
  • getString(string stringKey) - method to retrieve translated strings in the frontend
  • +
  • getSetting(string stringKey) - method to retrieve settings in the frontend
  • +
+

Check the common.js file for more frontend functions.
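As a rough illustration of how these helpers fit together (a sketch only - the string and setting keys are examples; devName is a device database column):

// Look up a device's name by its MAC address
var deviceName = getDevDataByMac('00:11:22:33:44:55', 'devName');

// Retrieve a translated UI string (replace SOME_STRING_KEY with a real language string key)
var label = getString('SOME_STRING_KEY');

// Read a setting value, e.g. the configured log level
var logLevel = getSetting('LOG_LEVEL');

console.log(label, deviceName, logLevel);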

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/HELPER_SCRIPTS/index.html b/HELPER_SCRIPTS/index.html new file mode 100644 index 00000000..05c0e360 --- /dev/null +++ b/HELPER_SCRIPTS/index.html @@ -0,0 +1,4126 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Helper scripts - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Community Helper Scripts Overview

+

This page provides an overview of community-contributed scripts for NetAlertX. These scripts are not actively maintained and are provided as-is.

+

Community Scripts

+

You can find all scripts in this scripts GitHub folder.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Script NameDescriptionAuthorVersionRelease Date
New Devices Checkmk ScriptChecks for new devices in NetAlertX and reports status to Checkmk.N/A1.008-Jan-2025
DB Cleanup ScriptQueries and removes old device-related entries from the database.laxduke1.023-Dec-2024
OPNsense DHCP Lease ConverterRetrieves DHCP lease data from OPNsense and converts it to dnsmasq format.im-redactd1.024-Feb-2025
+

Important Notes

+
+

Note

+

These scripts are community-supplied and not actively maintained. Use at your own discretion.

+
+

For detailed usage instructions, refer to each script's documentation in each scripts GitHub folder.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/HOME_ASSISTANT/index.html b/HOME_ASSISTANT/index.html new file mode 100644 index 00000000..fd0f70a9 --- /dev/null +++ b/HOME_ASSISTANT/index.html @@ -0,0 +1,4258 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Home Assistant - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+ +
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Home Assistant integration overview

+

NetAlertX comes with MQTT support, allowing you to show all detected devices as devices in Home Assistant. It also supplies a collection of stats, such as number of online devices.

+
+

Tip

+

You can also install NetAlertX as a Home Assistant add-on via the alexbelgium/hassio-addons repository. This is only possible if you run a supervised instance of Home Assistant. If not, you can still run NetAlertX in a separate Docker container and follow this guide to configure MQTT.

+
+

⚠ Note

+
    +
  • Please note that discovery takes about ~10s per device.
  • +
  • Deleting of devices is not handled automatically. Please use MQTT Explorer to delete devices in the broker (Home Assistant), if needed.
  • +
  • For optimization reasons, the devices are not always fully synchronized. You can delete Plugin objects as described in the MQTT plugin docs to force a full synchronization.
  • +
+

🧭 Guide

+
+

💡 This guide was tested only with the Mosquitto MQTT broker

+
+
    +
  1. +

    Enable Mosquitto MQTT in Home Assistant by following the documentation

    +
  2. +
  3. +

    Configure a user name and password on your broker.

    +
  4. +
  5. +

    Note down the following details that you will need to configure NetAlertX:

    +
      +
    • MQTT host url (usually your Home Assistant IP)
    • +
    • MQTT broker port
    • +
    • User
    • +
    • Password
    • +
    +
  6. +
  7. +

    Open the NetAlertX > Settings > MQTT settings group

    +
      +
    • Enable MQTT
    • +
    • Fill in the details from above
    • +
    • Fill in remaining settings as per description
    • +
    • set MQTT_RUN to schedule or on_notification depending on requirements
    • +
    +
  8. +
+

Configuration Example

+

📷 Screenshots

+ + + + + + + + + + + + + +
Screen 1Screen 2
Screen 3Screen 4
+

Troubleshooting

+

If you can't see all devices detected, run sudo arp-scan --interface=eth0 192.168.1.0/24 (change these based on your setup, read the Subnets docs for details). This command has to be executed in the NetAlertX container, not in the Home Assistant container.

+

You can access the NetAlertX container via Portainer on your host or via ssh. The container name will be something like addon_db21ed7f_netalertx (you can copy the db21ed7f_netalertx part from the browser when accessing the UI of NetAlertX).

+

Accessing the NetAlertX container via SSH

+
    +
  1. Log into your Home Assistant host via SSH
  2. +
+
local@local:~ $ ssh pi@192.168.1.9
+
+
    +
  1. Find the NetAlertX container name, in this case addon_db21ed7f_netalertx
  2. +
+
pi@raspberrypi:~ $ sudo docker container ls | grep netalertx
+06c540d97f67   ghcr.io/alexbelgium/netalertx-armv7:25.3.1                   "/init"               6 days ago      Up 6 days (healthy)    addon_db21ed7f_netalertx
+
+
    +
  1. SSH into the NetAlertX container
  2. +
+
pi@raspberrypi:~ $ sudo docker exec -it addon_db21ed7f_netalertx  /bin/sh
+/ #
+
+
    +
  1. Execute a test arp-scan scan
  2. +
+
/ # sudo arp-scan --ignoredups --retry=6 192.168.1.0/24 --interface=eth0
+Interface: eth0, type: EN10MB, MAC: dc:a6:32:73:8a:b1, IPv4: 192.168.1.9
+Starting arp-scan 1.10.0 with 256 hosts (https://github.com/royhills/arp-scan)
+192.168.1.1     74:ac:b9:54:09:fb       Ubiquiti Networks Inc.
+192.168.1.21    74:ac:b9:ad:c3:30       Ubiquiti Networks Inc.
+192.168.1.58    1c:69:7a:a2:34:7b       EliteGroup Computer Systems Co., LTD
+192.168.1.57    f4:92:bf:a3:f3:56       Ubiquiti Networks Inc.
+...
+
+

If your result doesn't contain results similar to the above, double check your subnet, interface and if you are dealing with an inaccessible network segment, read the Remote networks documentation.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/HW_INSTALL/index.html b/HW_INSTALL/index.html new file mode 100644 index 00000000..65a2578c --- /dev/null +++ b/HW_INSTALL/index.html @@ -0,0 +1,4358 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Bare-metal (Experimental) - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

How to install NetAlertX on the server hardware

+

To download and install NetAlertX on the hardware/server directly use the curl or wget commands at the bottom of this page.

+
+

Note

+

This is an Experimental feature 🧪 and it relies on community support.

+

🙏 Looking for maintainers for this installation method 🙂 Current community volunteers:
- slammingprogramming
- ingoratsdorf

+

There is no guarantee that the install script or any other script will gracefully handle other installed software. +Data loss is a possibility; it is recommended to install NetAlertX using the supplied Docker image.

+
+
+

Warning

+

A warning to the installation method below: Piping to bash is controversial and may be dangerous, as you cannot see the code that's about to be executed on your system.

+
+

If you trust this repo, you can download the install script via one of the methods (curl/wget) below and it will do its best to install NetAlertX on your system.

+

Alternatively you can download the installation script from the repository and check the code yourself.

+

NetAlertX will be installed in /app and run on port number 20211.

+

Some facts about what and where something will be changed/installed by the HW install setup (may not contain everything!):

+
    +
  • dependencies will be installed from the respective system repos
  • +
  • required python modules will be installed
  • +
  • /app directory will be deleted and newly created
  • +
  • /app will contain the whole repository (downloaded by the install script)
  • +
  • The default NGINX site /etc/nginx/sites-enabled/default will be disabled (sym-link deleted or backed up to sites-available)
  • +
  • /var/www/html/netalertx directory will be deleted and newly created
  • +
  • /etc/nginx/conf.d/netalertx.conf will be sym-linked to the appropriate installer location (depending on your system installer script)
  • +
  • Some files (IEEE device vendors info, ...) will be created in the directory where the installation script is executed
  • +
+

Limitations

+
    +
  • No system service is provided. NetAlertX must be started using /app/install/<system>/start.<system>.sh (see the example after this list).
  • +
  • No checks for other running software are done.
  • +
  • Only tested to work on the system listed in the install directory.
  • +
  • EXPERIMENTAL and not recommended way to install NetAlertX.
  • +
+
+
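For example, on Debian 12 the start command would look roughly like the following (an assumption based on the /app/install/<system>/start.<system>.sh pattern above - check your system's folder under /app/install for the exact script name):

sudo /app/install/debian12/start.debian12.sh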

Tip

+

If the below fails try grabbing and installing one of the previous releases and run the installation from the zip package.

+
+

These commands will download the install.debian12.sh script from the GitHub repository, make it executable with chmod, and then run it using ./install.debian12.sh.

+

Make sure you have the necessary permissions to execute the script.

+

📥 Debian 12 (Bookworm)

+

Installation via curl

+
curl -o install.debian12.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/debian12/install.debian12.sh && sudo chmod +x install.debian12.sh && sudo ./install.debian12.sh
+
+

Installation via wget

+
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/debian12/install.debian12.sh -O install.debian12.sh && sudo chmod +x install.debian12.sh && sudo ./install.debian12.sh
+
+

📥 Ubuntu 24 (Noble Numbat)

+
+

Note

+

Maintained by ingoratsdorf

+
+

Installation via curl

+
curl -o install.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.sh && sudo chmod +x install.sh && sudo ./install.sh
+
+

Installation via wget

+
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.sh -O install.sh && sudo chmod +x install.sh && sudo ./install.sh
+
+

📥 Bare Metal - Proxmox

+
+

Note

+

Use this on a clean LXC/VM for Debian 13 OR Ubuntu 24. +The script will detect the OS and build accordingly. +Maintained by JVKeller

+
+

Installation via wget

+
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/proxmox/proxmox-install-netalertx.sh -O proxmox-install-netalertx.sh && chmod +x proxmox-install-netalertx.sh && ./proxmox-install-netalertx.sh
+
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/ICONS/index.html b/ICONS/index.html new file mode 100644 index 00000000..b37b46b6 --- /dev/null +++ b/ICONS/index.html @@ -0,0 +1,4190 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Icons - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Icons

+ +

Icons overview

+

Icons are used to visually distinguish devices in the app in most of the device listing tables and the network tree.

+

Raspberry Pi with a brand icon

+

Icons Support

+

Two types of icons are supported:

+ +

You can assign icons individually on each device in the Details tab.

+

Adding new icons

+
    +
  1. You can get an SVG or a Font awesome HTML code
  2. +
+

Copying the SVG (for example from iconify.design):

+

iconify svg

+

Copying the HTML code from Font Awesome.

+

Font awesome

+
    +
  1. Navigate to the device you want to use the icon on and click the "+" icon:
  2. +
+

preview

+
    +
  1. Paste in the copied HTML or SVG code and click "OK":
  2. +
+

Paste SVG

+
    +
  1. "Save" the device
  2. +
+
+

Note

+

If you want to mass-apply an icon to all devices of the same device type (Field: Type), you can click the mass-copy button (next to the "+" button). A confirmation prompt is displayed. If you proceed, icons of all devices set to the same device type as the current device, will be overwritten with the current device's icon.

+
+
    +
  • The dropdown contains all icons already used in the app for device icons. You might need to navigate away or refresh the page once you add a new icon.
  • +
+

Font Awesome Pro icons

+

If you own the premium package of Font Awesome icons you can mount it in your Docker container the following way:

+
/font-awesome:/app/front/lib/font-awesome:ro
+
+

You can use the full range of Font Awesome icons afterwards.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/INITIAL_SETUP/index.html b/INITIAL_SETUP/index.html new file mode 100644 index 00000000..bc26b157 --- /dev/null +++ b/INITIAL_SETUP/index.html @@ -0,0 +1,4335 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Quick setup - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

⚡ Quick Start Guide

+

Get NetAlertX up and running in a few simple steps.

+
+

1. Configure Scanner Plugin(s)

+
+

Tip

+

Enable additional plugins under Settings → LOADED_PLUGINS. +Make sure to save your changes and reload the page to activate them. +Loaded plugins settings

+
+

Initial configuration: ARPSCAN, INTRNT

+
+

Note

+

ARPSCAN and INTRNT scan the current network. You can complement them with other 🔍 dev scanner plugins like NMAPDEV, or import devices using 📥 importer plugins. +See the Subnet & VLAN Setup Guide and Remote Networks for advanced configurations.

+
+
+

2. Choose a Publisher Plugin

+

Initial configuration: SMTP

+
+

Note

+

Configure your SMTP settings or enable additional ▶️ publisher plugins to send alerts. +For more flexibility, try 📚 _publisher_apprise, which supports over 80 notification services.

+
+
+

3. Set Up a Network Topology Diagram

+

Network tree

+

Initial configuration: The app auto-selects a root node (MAC internet) and attempts to identify other network devices by vendor or name.

+
+

Note

+

Visualize and manage your network using the Network Guide. +Some plugins (e.g., UNFIMP) build the topology automatically, or you can use Custom Workflows to generate it based on your own rules.

+
+
+

4. Configure Notifications

+

Notification settings

+

Initial configuration: Notifies on new_devices, down_devices, and events as defined in NTFPRCS_INCLUDED_SECTIONS.

+
+

Note

+

Notification settings support global, plugin-specific, and per-device rules. +For fine-tuning, refer to the Notification Guide.

+
+
+

5. Set Up Workflows

+

Workflows

+

Initial configuration: N/A

+
+

Note

+

Automate responses to device status changes, group management, topology updates, and more. +See the Workflows Guide to simplify your network operations.

+
+
+

6. Backup Your Configuration

+

Backups

+

Initial configuration: The CSVBCKP plugin creates a daily backup to /config/devices.csv.

+
+

Note

+

For a complete backup strategy, follow the Backup Guide.

+
+
+

7. (Optional) Create Custom Plugins

+

Custom Plugin Video

+

Initial configuration: N/A

+
+

Note

+

Build your own scanner, importer, or publisher plugin. +See the Plugin Development Guide and included video tutorials.

+
+
+ + +
+

🛠️ Troubleshooting & Help

+

Before opening a new issue:

+ +
+


+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/INSTALLATION/index.html b/INSTALLATION/index.html new file mode 100644 index 00000000..b797af74 --- /dev/null +++ b/INSTALLATION/index.html @@ -0,0 +1,4117 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Installation options - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Installation

+

Installation options

+

NetAlertX can be installed several ways. The best supported option is Docker, followed by a supervised Home Assistant instance, as an Unraid app, and lastly, on bare metal.

+ +

Help

+

If you are facing issues, please spend a few minutes searching.

+ +
+

Note

+

If you can't find a solution anywhere, ask in Discord if you think it's a quick question, otherwise open a new issue. Please fill in as much as possible to speed up the help process.

+
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/LOGGING/index.html b/LOGGING/index.html new file mode 100644 index 00000000..15d4fff7 --- /dev/null +++ b/LOGGING/index.html @@ -0,0 +1,4190 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Inspecting Logs - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+ +
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Logging

+

NetAlertX comes with several logs that help to identify application issues. These include nginx logs, app, or plugin logs. For plugin-specific log debugging, please read the Debug Plugins guide.

+
+

Note

+

When debugging any issue, increase the LOG_LEVEL Setting as per the Debug tips documentation.

+
+

Main logs

+

You can find most of the logs exposed in the UI under Maintenance -> Logs.

+

If the UI is inaccessible, you can access them under /tmp/log.

+

Logs

+

In Maintenance -> Logs you can Purge logs, download the full log file, or Filter the lines with some substring to narrow down your search.

+

Plugin logging

+

If a Plugin supplies data to the main app, it's done either via a SQL query or via a script that updates the last_result.log file in the plugin log folder (app/log/plugins/). These files are processed at the end of the scan and deleted on successful processing.

+

The data is in most of the cases then displayed in the application under Integrations -> Plugins (or Device -> Plugins if the plugin is supplying device-specific data).

+

Plugin objects

+

Viewing Logs on the File System

+

By default, you will not find any log files on the host filesystem. The container is read-only and writes logs to a temporary in-memory filesystem (tmpfs) for security and performance. The application follows container best practices by writing all logs to the standard output (stdout) and standard error (stderr) streams. Docker's logging driver (set in docker-compose.yml) captures these streams automatically, allowing you to access them with the docker logs <container_name> command.

+
    +
  • To see all logs since the last restart:
  • +
+

docker logs netalertx

* To watch the logs live (live feed):

+

docker logs -f netalertx

+

Enabling Persistent File-Based Logs

+

The default logs are erased every time the container restarts because they are stored in temporary in-memory storage (tmpfs). If you need to keep a persistent, file-based log history, follow the steps below.

+
+

Note

+

This might lead to performance degradation so this approach is only suggested when actively debugging issues. See the Performance optimization documentation for details.

+
+
    +
  1. Stop the container:
  2. +
+

docker-compose down

+
    +
  1. +

    Edit your docker-compose.yml file:

    +
  2. +
  3. +

    Comment out the /tmp/log line under the tmpfs: section.

    +
  4. +
  5. Uncomment the "Retain logs" line under the volumes: section and set your desired host path.
  6. +
+

...
tmpfs:
  # - "/tmp/log:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
...
volumes:
  ...
  # Retain logs - comment out tmpfs /tmp/log if you want to retain logs between container restarts
  - /home/adam/netalertx_logs:/tmp/log
  ...

3. Restart the container:

+

docker-compose up -d

+

This change stops Docker from mounting a temporary in-memory volume at /tmp/log. Instead, it "bind mounts" a persistent folder from your host computer (e.g., /data/netalertx_logs) to that same location inside the container.
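Once the container is back up, you can confirm log files are being written to the host folder (using the example path from the snippet above):

ls -la /home/adam/netalertx_logs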

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/MIGRATION/index.html b/MIGRATION/index.html new file mode 100644 index 00000000..ee89b03a --- /dev/null +++ b/MIGRATION/index.html @@ -0,0 +1,4861 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Migration Guide - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Migration

+

When upgrading from older versions of NetAlertX (or PiAlert by jokob-sk), follow the migration steps below to ensure your data and configuration are properly transferred.

+
+

Tip

+

It's always important to have a backup strategy in place.

+
+

Migration scenarios

+ +

1.0 Manual Migration

+

You can migrate data manually, for example by exporting and importing devices using the CSV import method.

+

1.1 Migration from PiAlert to NetAlertX v25.5.24

+

STEPS:

+

The application will automatically migrate the database, configuration, and all device information. +A banner message will appear at the top of the web UI reminding you to update your Docker mount points.

+
    +
  1. Stop the container
  2. +
  3. Back up your setup
  4. +
  5. Update the Docker file mount locations in your docker-compose.yml or docker run command (See below New Docker mount locations).
  6. +
  7. Rename the DB and conf files to app.db and app.conf and place them in the appropriate location.
  8. +
  9. Start the container
  10. +
+
+

Tip

+

If you have trouble accessing past backups, config or database files you can copy them into the newly mapped directories, for example by running this command in the container: cp -r /data/config /home/pi/pialert/config/old_backup_files. This should create a folder in the config directory called old_backup_files containing all the files in that location. Another approach is to map the old location and the new one at the same time to copy things over.

+
+

New Docker mount locations

+

The internal application path in the container has changed from /home/pi/pialert to /app. Update your volume mounts as follows:

+ + + + + + + + + + + + + + + + + +
Old mount pointNew mount point
/home/pi/pialert/config/data/config
/home/pi/pialert/db/data/db
+

If you were mounting files directly, please note the file names have changed:

+ + + + + + + + + + + + + + + + + +
Old file nameNew file name
pialert.confapp.conf
pialert.dbapp.db
+
+

Note

+

The application automatically creates symlinks from the old database and config locations to the new ones, so data loss should not occur. Read the backup strategies guide to backup your setup.

+
+

Examples

+

Examples of docker files with the new mount points.

+
Example 1: Mapping folders
+
Old docker-compose.yml
+
services:
+  pialert:
+    container_name: pialert
+    # use the below line if you want to test the latest dev image
+    # image: "ghcr.io/jokob-sk/netalertx-dev:latest"
+    image: "jokobsk/pialert:latest"
+    network_mode: "host"
+    restart: unless-stopped
+    volumes:
+      - /local_data_dir/config:/home/pi/pialert/config
+      - /local_data_dir/db:/home/pi/pialert/db
+      # (optional) useful for debugging if you have issues setting up the container
+      - /local_data_dir/logs:/home/pi/pialert/front/log
+    environment:
+      - TZ=Europe/Berlin
+      - PORT=20211
+
+
New docker-compose.yml
+
services:
+  netalertx:                                  # 🆕 This has changed
+    container_name: netalertx                 # 🆕 This has changed
+    image: "ghcr.io/jokob-sk/netalertx:25.5.24"         # 🆕 This has changed
+    network_mode: "host"
+    restart: unless-stopped
+    volumes:
+      - /local_data_dir/config:/data/config         # 🆕 This has changed
+      - /local_data_dir/db:/data/db                 # 🆕 This has changed
+      # (optional) useful for debugging if you have issues setting up the container
+      - /local_data_dir/logs:/tmp/log        # 🆕 This has changed
+    environment:
+      - TZ=Europe/Berlin
+      - PORT=20211
+
+
Example 2: Mapping files
+
+

Note

+

The recommendation is to map folders as in Example 1, map files directly only when needed.

+
+
Old docker-compose.yml
+
services:
+  pialert:
+    container_name: pialert
+    # use the below line if you want to test the latest dev image
+    # image: "ghcr.io/jokob-sk/netalertx-dev:latest"
+    image: "jokobsk/pialert:latest"
+    network_mode: "host"
+    restart: unless-stopped
+    volumes:
+      - /local_data_dir/config/pialert.conf:/home/pi/pialert/config/pialert.conf
+      - /local_data_dir/db/pialert.db:/home/pi/pialert/db/pialert.db
+      # (optional) useful for debugging if you have issues setting up the container
+      - /local_data_dir/logs:/home/pi/pialert/front/log
+    environment:
+      - TZ=Europe/Berlin
+      - PORT=20211
+
+
New docker-compose.yml
+
services:
+  netalertx:                                  # 🆕 This has changed
+    container_name: netalertx                 # 🆕 This has changed
+    image: "ghcr.io/jokob-sk/netalertx:25.5.24"         # 🆕 This has changed
+    network_mode: "host"
+    restart: unless-stopped
+    volumes:
+      - /local_data_dir/config/app.conf:/data/config/app.conf # 🆕 This has changed
+      - /local_data_dir/db/app.db:/data/db/app.db             # 🆕 This has changed
+      # (optional) useful for debugging if you have issues setting up the container
+      - /local_data_dir/logs:/tmp/log                  # 🆕 This has changed
+    environment:
+      - TZ=Europe/Berlin
+      - PORT=20211
+
+

1.2 Migration from NetAlertX v25.5.24

+

Versions before v25.10.1 require an intermediate migration through v25.5.24 to ensure database compatibility. Skipping this step may cause compatibility issues due to database schema changes introduced after v25.5.24.

+

STEPS:

+
    +
  1. Stop the container
  2. +
  3. Back up your setup
  4. +
  5. Upgrade to v25.5.24 by pinning the release version (See Examples below)
  6. +
  7. Start the container and verify everything works as expected.
  8. +
  9. Stop the container
  10. +
  11. Upgrade to v25.10.1 by pinning the release version (See Examples below)
  12. +
  13. Start the container and verify everything works as expected.
  14. +
+

Examples

+

Examples of docker files with the tagged version.

+
Example 1: Mapping folders
+
docker-compose.yml changes
+
services:
+  netalertx:
+    container_name: netalertx
+    image: "ghcr.io/jokob-sk/netalertx:25.5.24"         # 🆕 This is important
+    network_mode: "host"
+    restart: unless-stopped
+    volumes:
+      - /local_data_dir/config:/data/config
+      - /local_data_dir/db:/data/db
+      # (optional) useful for debugging if you have issues setting up the container
+      - /local_data_dir/logs:/tmp/log
+    environment:
+      - TZ=Europe/Berlin
+      - PORT=20211
+
+
services:
+  netalertx:
+    container_name: netalertx
+    image: "ghcr.io/jokob-sk/netalertx:25.10.1"         # 🆕 This is important
+    network_mode: "host"
+    restart: unless-stopped
+    volumes:
+      - /local_data_dir/config:/data/config
+      - /local_data_dir/db:/data/db
+      # (optional) useful for debugging if you have issues setting up the container
+      - /local_data_dir/logs:/tmp/log
+    environment:
+      - TZ=Europe/Berlin
+      - PORT=20211
+
+

1.3 Migration from NetAlertX v25.10.1

+

Starting from v25.10.1, the container uses a more secure, read-only runtime environment, which requires all writable paths (e.g., logs, API cache, temporary data) to be mounted as tmpfs or permanent writable volumes, with sufficient access permissions. The data location has also changed from /app/db and /app/config to /data/db and /data/config. See the detailed steps below.

+

STEPS:

+
    +
  1. Stop the container
  2. +
  3. Back up your setup
  4. +
  5. Upgrade to v25.10.1 by pinning the release version (See the example below)
  6. +
+
services:
+  netalertx:
+    container_name: netalertx
+    image: "ghcr.io/jokob-sk/netalertx:25.10.1"         # 🆕 This is important
+    network_mode: "host"
+    restart: unless-stopped
+    volumes:
+      - /local_data_dir/config:/app/config
+      - /local_data_dir/db:/app/db
+      # (optional) useful for debugging if you have issues setting up the container
+      - /local_data_dir/logs:/tmp/log
+    environment:
+      - TZ=Europe/Berlin
+      - PORT=20211
+
+
    +
  1. Start the container and verify everything works as expected.
  2. +
  3. Stop the container.
  4. +
  5. Update the docker-compose.yml as per example below.
  6. +
+
services:
+  netalertx:
+    container_name: netalertx
+    image: "ghcr.io/jokob-sk/netalertx"  # 🆕 This has changed
+    network_mode: "host"
+    cap_drop:                # 🆕 New line
+      - ALL                  # 🆕 New line
+    cap_add:                 # 🆕 New line
+      - NET_RAW              # 🆕 New line
+      - NET_ADMIN            # 🆕 New line
+      - NET_BIND_SERVICE     # 🆕 New line
+    restart: unless-stopped
+    volumes:
+      - /local_data_dir:/data  # 🆕 This folder contains your /db and /config directories and the parent changed from /app to /data
+      # Ensuring the timezone is the same as on the server - make sure also the TIMEZONE setting is configured
+      - /etc/localtime:/etc/localtime:ro    # 🆕 New line
+    environment:
+      - PORT=20211
+    # 🆕 New "tmpfs" section START 🔽
+    tmpfs:
+      # All writable runtime state resides under /tmp; comment out to persist logs between restarts
+      - "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
+    # 🆕 New "tmpfs" section END  🔼
+
+
    +
  1. Perform a one-off migration to the latest netalertx image and 20211 user.
  2. +
+
+

Note

+

The examples below assume your /config and /db folders are stored in local_data_dir. +Replace this path with your actual configuration directory. netalertx is the container name, which might differ from your setup.

+
+

Automated approach:

+

Run the container with the --user "0" parameter. Please note, some systems will require the manual approach below.

+
docker run -it --rm --name netalertx --user "0" \
+  -v /local_data_dir/config:/app/config \
+  -v /local_data_dir/db:/app/db \
+  -v /local_data_dir:/data \
+  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
+  ghcr.io/jokob-sk/netalertx:latest
+
+

Stop the container and run it as you would normally.

+

Manual approach:

+

Use the manual approach if the Automated approach fails. Execute the below commands:

+
sudo chown -R 20211:20211 /local_data_dir
+sudo chmod -R a+rwx /local_data_dir
+
+
    +
  1. Start the container and verify everything works as expected.
  2. +
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/NAME_RESOLUTION/index.html b/NAME_RESOLUTION/index.html new file mode 100644 index 00000000..6255d6e6 --- /dev/null +++ b/NAME_RESOLUTION/index.html @@ -0,0 +1,4200 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Name Resolution - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Device Name Resolution

+

Name resolution in NetAlertX relies on multiple plugins to resolve device names from IP addresses. If you are seeing (name not found) as device names, follow these steps to diagnose and fix the issue.

+
+

Tip

+

Before proceeding, make sure Reverse DNS is enabled on your network.
+You can control how names are handled and cleaned using the NEWDEV_NAME_CLEANUP_REGEX setting.
+To auto-update Fully Qualified Domain Names (FQDN), enable the REFRESH_FQDN setting.

+
+

Required Plugins

+

For best results, ensure the following name resolution plugins are enabled:

+
    +
  • AVAHISCAN – Uses mDNS/Avahi to resolve local network names.
  • +
  • NBTSCAN – Queries NetBIOS to find device names.
  • +
  • NSLOOKUP – Performs standard DNS lookups.
  • +
  • DIGSCAN – Performs Name Resolution with the Dig utility (DNS).
  • +
+

You can check which plugins are active in your Settings section and enable any that are missing.

+

There are other plugins that can supply device names as well, but they rely on bespoke hardware and services. See Plugins overview for details and look for plugins with name discovery (🆎) features.

+

Checking Logs

+

If names are not resolving, check the logs for errors or timeouts.

+

See how to explore logs in the Logging guide.

+

Logs will show which plugins attempted resolution and any failures encountered.

+

Adjusting Timeout Settings

+

If resolution is slow or failing due to timeouts, increase the timeout settings in your configuration, for example.

+
NSLOOKUP_RUN_TIMEOUT = 30
+
+

Raising the timeout may help if your network has high latency or slow DNS responses.
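To check whether reverse DNS works at all from inside the container, you can run a manual lookup (a sketch - replace the IP with one of your devices and netalertx with your container name):

docker exec netalertx nslookup 192.168.1.10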

+

Checking Plugin Objects

+

Each plugin stores results in its respective object. You can inspect these objects to see if they contain valid name resolution data.

+

See Logging guide and Debug plugins guides for details.

+

If the object contains no results, the issue may be with DNS settings or network access.

+

Improving name resolution

+

For more details how to improve name resolution refer to the +Reverse DNS Documentation.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/NETWORK_TREE/index.html b/NETWORK_TREE/index.html new file mode 100644 index 00000000..bf93fb43 --- /dev/null +++ b/NETWORK_TREE/index.html @@ -0,0 +1,4315 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Network Topology - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Network Topology

+ +

How to Set Up Your Network Page

+

The Network page lets you map how devices connect — visually and logically. +It’s especially useful for planning infrastructure, assigning parent-child relationships, and spotting gaps.

+

Network tree details

+

To get started, you’ll need to define at least one root node and mark certain devices as network nodes (like Switches or Routers).

+
+

Start by creating a root device with the MAC address Internet, if the application didn’t create one already. +This special MAC address (Internet) is required for the root network node — no other value is currently supported. +Set its Type to a valid network type — such as Router or Gateway.

+
+

Tip

+

If you don’t have one, use the Create new device button on the Devices page to add a root device.

+
+
+

⚡ Quick Setup

+
    +
  1. Open the device you want to use as a network node (e.g. a Switch).
  2. +
  3. Set its Type to one of the following: + AP, Firewall, Gateway, PLC, Powerline, Router, Switch, USB LAN Adapter, USB WIFI Adapter, WLAN + (Or add custom types under Settings → General → NETWORK_DEVICE_TYPES.)
  4. +
  5. Save the device.
  6. +
  7. Go to the Network page — supported device types will appear as tabs.
  8. +
  9. Use the Assign button to connect unassigned devices to a network node.
  10. +
  11. If the Port is 0 or empty, a Wi-Fi icon is shown. Otherwise, an Ethernet icon appears.
  12. +
+
+

Note

+

Use bulk editing with CSV Export to fix Internet root assignments or update many devices at once.

+
+
+

Example: Setting up a raspberrypi as a Switch

+

Let’s walk through setting up a device named raspberrypi to act as a network Switch that other devices connect through.

+
+

1. Set Device Type and Parent

+
    +
  • Go to the Devices page
  • +
  • Open the device detail view for raspberrypi
  • +
  • In the Type dropdown, select Switch
  • +
+

Device details

+
    +
  • Optionally assign a Parent Node (where this device connects to) and the Relationship type of the connection. + The nic relationship type can affect parent notifications — see the setting description and Notifications documentation for more.
  • +
  • A device’s parent MAC will be overwritten by plugins if its current value is any of the following: "null", "(unknown)", "(Unknown)".
  • +
  • If you want plugins to be able to overwrite the parent value (for example, when mixing plugins that do not provide parent MACs like ARPSCAN with those that do, like UNIFIAPI), you must set the setting NEWDEV_devParentMAC to None.
  • +
+

Device details

+
+

Note

+

Only certain device types can act as network nodes: +AP, Firewall, Gateway, Hypervisor, PLC, Powerline, Router, Switch, USB LAN Adapter, USB WIFI Adapter, WLAN +You can add custom types via the NETWORK_DEVICE_TYPES setting.

+
+
    +
  • Click Save
  • +
+
+

2. Confirm The Device Appears as a Network Node

+

You can confirm that raspberrypi now acts as a network device in two places:

+
    +
  • Navigate to a different device and verify that raspberrypi now appears as an option for a Parent Node:
  • +
+

Parent Node dropdown

+
    +
  • Go to the Network page — you'll now see a raspberrypi tab, meaning it's recognized as a network node (Switch):
  • +
+

Network page

+
    +
  • You can now assign other devices to it.
  • +
+
+

3. Assign Connected Devices

+
    +
  • Use the Assign button to link other devices (e.g. PCs) to raspberrypi.
  • +
  • After assigning, connected devices will appear beneath the raspberrypi switch node.
  • +
+

Assigned nodes

+
    +
  • Relationship lines may vary in color based on the selected Relationship type. These are editable on the device details page where you can also assign a parent node.
  • +
+

Hover detail

+
+

Hovering over devices in the tree reveals connection details and tooltips for quick inspection.

+
+
+

Note

+

Selecting certain relationship types hides the device in the default device views. +You can change this behavior by adjusting the UI_hide_rel_types setting, which by default is set to ["nic","virtual"]. +This means devices with devParentRelType set to nic or virtual will not be shown. +All devices, regardless of relationship type, are always accessible in the All devices view.

+
+
+
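For example, if you want devices attached via the nic relationship to stay visible in the default views while virtual ones remain hidden, the setting could be reduced to the following (illustrative value, shown in the same configuration style as other settings in this guide):

# Hide only "virtual" relationships; devices with the "nic" relationship stay visible
UI_hide_rel_types=['virtual']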

✅ Summary

+

To configure devices on the Network page:

+
    +
  • Ensure a device with MAC Internet is set up as the root
  • +
  • Assign valid Type values to switches, routers, and other supported nodes that represent network devices
  • +
  • Use the Assign button to connect devices logically to their parent node
  • +
+

Need to reset or undo changes? Use backups or bulk editing to manage devices at scale. You can also automate device assignment with Workflows.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/NOTIFICATIONS/index.html b/NOTIFICATIONS/index.html new file mode 100644 index 00000000..0af791bb --- /dev/null +++ b/NOTIFICATIONS/index.html @@ -0,0 +1,4197 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Notifications Guide - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Notifications 📧

+

There are 4 ways to influence notifications:

+
    +
  1. On the device itself
  2. +
  3. On the settings of the plugin
  4. +
  5. Globally
  6. +
  7. Ignoring devices
  8. +
+
+

Note

+

It's recommended to use the same schedule interval for all plugins responsible for scanning devices, otherwise false positives might be reported if different devices are discovered by different plugins. Check the Settings > Enabled settings section for a warning: +Schedules out-of-sync

+
+
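For instance, if both ARPSCAN and NMAPDEV are used as device scanners, their schedules could be aligned like this (a sketch in the app's configuration style; the plugin names and cron values are illustrative):

# Keep all device-scanner schedules identical to avoid out-of-sync false positives
ARPSCAN_RUN_SCHD='*/5 * * * *'
NMAPDEV_RUN_SCHD='*/5 * * * *'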

Device settings 💻

+

Device notification settings

+

The following device properties influence notifications. You can:

+
    +
  1. Alert Events - Enables alerts for connections, disconnections, and IP changes (down and down reconnected notifications are still sent even if this is disabled).
  2. +
  3. Alert Down - Alerts when a device goes down. This setting overrides a disabled Alert Events setting, so you will get a notification of a device going down even if you don't have Alert Events ticked. Disabling this will disable down and down reconnected notifications on the device.
  4. +
  5. Skip repeated notifications, if for example you know there is a temporary issue and want to pause the same notification for this device for a given time.
  6. +
  7. Require NICs Online - Indicates whether this device should be considered online only if all associated NICs (devices with the nic relationship type) are online. If disabled, the device is considered online if any NIC is online. If a NIC is online, it sets the parent (this) device's status to online irrespective of the detected device's status. The Relationship type is set on the child device.
  8. +
+
+

Note

+

Please read through the NTFPRCS plugin documentation to understand how device and global settings influence the notification processing.

+
+

Plugin settings 🔌

+

Plugin notification settings

+

On almost all plugins there are 2 core settings, <plugin>_WATCH and <plugin>_REPORT_ON.

+
    +
  1. <plugin>_WATCH specifies the columns which the app should watch. If watched columns change the device state is considered changed. This changed status is then used to decide to send out notifications based on the <plugin>_REPORT_ON setting.
  2. +
  3. <plugin>_REPORT_ON lets you specify on which events the app should notify you. This is related to the <plugin>_WATCH setting. So if you select watched-changed and in <plugin>_WATCH you only select Watched_Value1, then a notification is triggered if Watched_Value1 is changed from the previous value, but no notification is sent if Watched_Value2 changes.
  4. +
+
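Put together, a minimal sketch for one plugin could look like this (the ICMP plugin is used only as an example; which Watched_ValueN columns are meaningful depends on the plugin, so check its documentation):

# Watch only the first value and notify only when that value changes
ICMP_WATCH=['Watched_Value1']
ICMP_REPORT_ON=['watched-changed']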

Click the Read more in the docs. link at the top of each plugin to get more details on how the given plugin works.

+

Global settings ⚙

+

Global notification settings

+

In Notification Processing settings, you can specify blanket rules. These allow you to specify exceptions to the Plugin and Device settings and will override those.

+
    +
  1. Notify on (NTFPRCS_INCLUDED_SECTIONS) allows you to specify which events trigger notifications. Usual setups will have new_devices, down_devices, and possibly down_reconnected set. Including plugin (dependent on the Plugin <plugin>_WATCH and <plugin>_REPORT_ON settings) and events (dependent on the on-device Alert Events setting) might be too noisy for most setups. More info in the NTFPRCS plugin on what events these selections include.
  2. +
  3. Alert down after (NTFPRCS_alert_down_time) is useful if you want to wait for some time before the system sends out a down notification for a device. This is related to the on-device Alert down setting and only devices with this checked will trigger a down notification.
  4. +
+
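Combined, the two settings might look roughly like this in the configuration (section names are the ones mentioned above; the delay value is illustrative, check the setting description for the expected unit):

# Notify on new devices, devices going down, and devices reconnecting after being down
NTFPRCS_INCLUDED_SECTIONS=['new_devices','down_devices','down_reconnected']
# How long to wait before a down notification is sent (illustrative value)
NTFPRCS_alert_down_time=5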

You can filter out unwanted notifications globally. This could be because of a misbehaving device (GoogleNest/GoogleHub (see also ARPSCAN docs and the --exclude-broadcast flag)) which flips between IP addresses, or because you want to ignore new device notifications of a certain pattern.

+
    +
  1. Events Filter (NTFPRCS_event_condition) - Filter out Events from notifications.
  2. +
  3. New Devices Filter (NTFPRCS_new_dev_condition) - Filter out New Devices from notifications, but log and keep a new device in the system.
  4. +
+

Ignoring devices 💻

+

Ignoring new devices

+

You can completely ignore detected devices globally. This could be because your instance detects docker containers, you want to ignore devices from a specific manufacturer via MAC rules or you want to ignore devices on a specific IP range.

+
    +
  1. Ignored MACs (NEWDEV_ignored_MACs) - List of MACs to ignore.
  2. +
  3. Ignored IPs (NEWDEV_ignored_IPs) - List of IPs to ignore.
  4. +
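For example, to ignore a specific MAC and a Docker bridge address, the settings could look like this (the addresses are placeholders; check the setting descriptions for the supported formats, e.g. whether wildcards or ranges are allowed):

# Placeholder values, replace with the MACs/IPs you want the app to ignore
NEWDEV_ignored_MACs=['aa:bb:cc:dd:ee:ff']
NEWDEV_ignored_IPs=['172.17.0.2']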
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/PERFORMANCE/index.html b/PERFORMANCE/index.html new file mode 100644 index 00000000..3ba1c6ed --- /dev/null +++ b/PERFORMANCE/index.html @@ -0,0 +1,4358 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Performance - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Performance Optimization Guide

+

There are several ways to improve the application's performance. The application has been tested on a range of devices, from Raspberry Pi 4 units to NAS and NUC systems. If you are running the application on a lower-end device, fine-tuning the performance settings can significantly improve the user experience.

+

Common Causes of Slowness

+

Performance issues are usually caused by:

+
    +
  • Incorrect settings – The app may restart unexpectedly. Check app.log under Maintenance → Logs for details.
  • +
  • Too many background processes – Disable unnecessary scanners.
  • +
  • Long scan durations – Limit the number of scanned devices.
  • +
  • Excessive disk operations – Optimize scanning and logging settings.
  • +
  • Maintenance plugin failures – If cleanup tasks fail, performance can degrade over time.
  • +
+

The application performs regular maintenance and database cleanup. If these tasks are failing, you will see slowdowns.

+

Database and Log File Size

+

A large database or oversized log files can impact performance. You can check database and table sizes on the Maintenance page.

+

DB size check

+
+

Note

+
    +
  • For ~100 devices, the database should be around 50 MB.
  • +
  • No table should exceed 10,000 rows in a healthy system.
  • +
  • Actual values vary based on network activity and plugin settings.
  • +
+
+
+

Maintenance Plugins

+

Two plugins help maintain the system’s performance:

+

1. Database Cleanup (DBCLNP)

+
    +
  • Handles database maintenance and cleanup.
  • +
  • See the DB Cleanup Plugin Docs.
  • +
  • Ensure it’s not failing by checking logs.
  • +
  • Adjust the schedule (DBCLNP_RUN_SCHD) and timeout (DBCLNP_RUN_TIMEOUT) if necessary.
  • +
+

2. Maintenance (MAINT)

+
    +
  • Cleans logs and performs general maintenance tasks.
  • +
  • See the Maintenance Plugin Docs.
  • +
  • Verify proper operation via logs.
  • +
  • Adjust the schedule (MAINT_RUN_SCHD) and timeout (MAINT_RUN_TIMEOUT) if needed.
  • +
+
+

Scan Frequency and Coverage

+

Frequent scans increase resource usage, network traffic, and database read/write cycles.

+

Optimizations

+
    +
  • Increase scan intervals (<PLUGIN>_RUN_SCHD) on busy networks or low-end hardware.
  • +
  • Increase timeouts (<PLUGIN>_RUN_TIMEOUT) to avoid plugin failures.
  • +
  • Reduce subnet size – e.g., use /24 instead of /16 to reduce scan load.
  • +
+

Some plugins also include options to limit which devices are scanned. If certain plugins consistently run long, consider narrowing their scope.

+

For example, the ICMP plugin allows scanning only IPs that match a specific regular expression.
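As a sketch, relaxing the schedule and timeout for a single scanner plugin might look like this (ARPSCAN is used only as an example and the values are illustrative):

# Scan every 10 minutes instead of every 5 and allow more time before the run times out
ARPSCAN_RUN_SCHD='*/10 * * * *'
ARPSCAN_RUN_TIMEOUT=300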

+
+

Storing Temporary Files in Memory

+

On devices with slower I/O, you can improve performance by storing temporary files (and optionally the database) in memory using tmpfs.

+
+

Warning

+

Storing the database in tmpfs is generally discouraged. Use this only if device data and historical records are not required to persist. If needed, you can pair this setup with the SYNC plugin to store important persistent data on another node. See the Plugins docs for details.

+
+

Using tmpfs reduces disk writes and speeds up I/O, but all data stored in memory will be lost on restart.

+

Below is an optimized docker-compose.yml snippet using non-persistent logs, API data, and DB:

+
services:
+  netalertx:
+    container_name: netalertx
+    # Use this line for the stable release
+    image: "ghcr.io/jokob-sk/netalertx:latest"
+    # Or use this line for the latest development build
+    # image: "ghcr.io/jokob-sk/netalertx-dev:latest"
+    network_mode: "host"
+    restart: unless-stopped
+
+    cap_drop:       # Drop all capabilities for enhanced security
+      - ALL
+    cap_add:        # Re-add necessary capabilities
+      - NET_RAW
+      - NET_ADMIN
+      - NET_BIND_SERVICE
+
+    volumes:
+      - ${APP_FOLDER}/netalertx/config:/data/config
+      - /etc/localtime:/etc/localtime:ro
+
+    tmpfs:
+      # All writable runtime state resides under /tmp; comment out to persist logs between restarts
+      - "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
+      - "/data/db:uid=20211,gid=20211,mode=1700"  # ⚠ You will lose historical data on restart
+
+    environment:
+      - PORT=${PORT}
+      - APP_CONF_OVERRIDE=${APP_CONF_OVERRIDE}
+
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/PIHOLE_GUIDE/index.html b/PIHOLE_GUIDE/index.html new file mode 100644 index 00000000..781d404e --- /dev/null +++ b/PIHOLE_GUIDE/index.html @@ -0,0 +1,4385 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Pi-hole Guide - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Integration with PiHole

+

NetAlertX comes with 3 plugins suitable for integrating with your existing PiHole instance. The first plugin uses the v6 API, the second parses the dhcp.leases file generated by PiHole, and the third uses a direct SQLite DB connection. You can combine multiple approaches and also supplement scans with other plugins.

+

Approach 1: PIHOLEAPI Plugin - Import devices directly from PiHole v6 API

+

PIHOLEAPI sample settings

+

To use this approach make sure the Web UI password in Pi-hole is set.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Setting | Description | Recommended value
PIHOLEAPI_URL | Your Pi-hole base URL including port. | http://192.168.1.82:9880/
PIHOLEAPI_RUN_SCHD | If you run multiple device scanner plugins, align the schedules of all plugins to the same value. | */5 * * * *
PIHOLEAPI_PASSWORD | The Web UI admin password, base64 encoded (en-/decoding handled by the app). | passw0rd
PIHOLEAPI_SSL_VERIFY | Whether to verify HTTPS certificates. Disable only for self-signed certificates. | False
PIHOLEAPI_API_MAXCLIENTS | Maximum number of devices to request from Pi-hole. Defaults are usually fine. | 500
PIHOLEAPI_FAKE_MAC | Generate FAKE MAC from IP. | False
+
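Taken together, the corresponding entries in your configuration might look roughly like this (values are copied from the table above; the password itself is best entered via the Settings UI, which handles the base64 encoding):

PIHOLEAPI_URL='http://192.168.1.82:9880/'
PIHOLEAPI_RUN_SCHD='*/5 * * * *'
PIHOLEAPI_SSL_VERIFY=False
PIHOLEAPI_API_MAXCLIENTS=500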

Check the PiHole API plugin readme for details and troubleshooting.

+

docker-compose changes

+

No changes needed

+

Approach 2: DHCPLSS Plugin - Import devices from the PiHole DHCP leases file

+

DHCPLSS sample settings

+

Settings

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Setting | Description | Recommended value
DHCPLSS_RUN | When the plugin should run. | schedule
DHCPLSS_RUN_SCHD | If you run multiple device scanner plugins, align the schedules of all plugins to the same value. | */5 * * * *
DHCPLSS_paths_to_check | You need to map the value in this setting in the docker-compose.yml file. The in-container path must contain pihole so it's parsed correctly. | ['/etc/pihole/dhcp.leases']
+

Check the DHCPLSS plugin readme for details

+

docker-compose changes

+ + + + + + + + + + + + + +
Path | Description
:/etc/pihole/dhcp.leases | PiHole's dhcp.leases file. Required if you want to use PiHole dhcp.leases file. This has to be matched with a corresponding DHCPLSS_paths_to_check setting entry (the path in the container must contain pihole)
+

Approach 3: PIHOLE Plugin - Import devices directly from the PiHole database

+

PIHOLE sample settings

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Setting | Description | Recommended value
PIHOLE_RUN | When the plugin should run. | schedule
PIHOLE_RUN_SCHD | If you run multiple device scanner plugins, align the schedules of all plugins to the same value. | */5 * * * *
PIHOLE_DB_PATH | You need to map the value in this setting in the docker-compose.yml file. | /etc/pihole/pihole-FTL.db
+

Check the PiHole plugin readme for details

+

docker-compose changes

+ + + + + + + + + + + + + +
Path | Description
:/etc/pihole/pihole-FTL.db | PiHole's pihole-FTL.db database file.
+

Check out other plugins that can help you discover more about your network or check how to scan Remote networks.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/PLUGINS/index.html b/PLUGINS/index.html new file mode 100644 index 00000000..0606bdaf --- /dev/null +++ b/PLUGINS/index.html @@ -0,0 +1,4727 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Enable Plugins - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

🔌 Plugins

+

NetAlertX supports additional plugins to extend its functionality, each with its own settings and options. Plugins can be loaded via the General -> LOADED_PLUGINS setting. For custom plugin development, refer to the Plugin development guide.

+
+

Note

+

Please check this Plugins debugging guide and the corresponding Plugin documentation in the below table if you are facing issues.

+
+

⚡ Quick start

+
+

Tip

+

You can load additional Plugins via the General -> LOADED_PLUGINS setting. You need to save the settings for the new plugins to load (cache/page reload may be necessary). +Loaded plugins settings

+
+
    +
  1. Pick your 🔍 dev scanner plugin (e.g. ARPSCAN or NMAPDEV), or import devices into the application with an 📥 importer plugin. (See Enabling plugins below)
  2. +
  3. Pick a ▶️ publisher plugin, if you want to send notifications. If you don't see a publisher you'd like to use, look at the 📚_publisher_apprise plugin which is a proxy for over 80 notification services.
  4. +
  5. Setup your Network topology diagram
  6. +
  7. Fine-tune Notifications
  8. +
  9. Setup Workflows
  10. +
  11. Backup your setup
  12. +
  13. Contribute and Create custom plugins
  14. +
+

Plugin types

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Plugin typeIconDescriptionWhen to runRequiredData source ?
publisher▶️Sending notifications to services.on_notificationScript
dev scanner🔍Create devices in the app, manages online/offline device status.scheduleScript / SQLite DB
name discovery🆎Discovers names of devices via various protocols.before_name_updates, scheduleScript
importer📥Importing devices from another service.scheduleScript / SQLite DB
systemProviding core system functionality.schedule / always on✖/✔Script / Template
otherOther pluginsmiscScript / Template
+

Features

+ + + + + + + + + + + + + + + + + +
IconDescription
🖧Auto-imports the network topology diagram
🔄Has the option to sync some data back into the plugin source
+

Available Plugins

+

Device-detecting plugins insert values into the CurrentScan database table. The plugins that are not required are safe to ignore, however, it makes sense to have at least some device-detecting plugins enabled, such as ARPSCAN or NMAPDEV.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
IDPlugin docsTypeDescriptionFeaturesRequired
APPRISE_publisher_apprise▶️Apprise notification proxy
ARPSCANarp_scan🔍ARP-scan on current network
AVAHISCANavahi_scan🆎Avahi (mDNS-based) name resolution
ASUSWRTasuswrt_import🔍Import connected devices from AsusWRT
CSVBCKPcsv_backupCSV devices backup
CUSTPROPcustom_propsManaging custom device properties valuesYes
DBCLNPdb_cleanupDatabase cleanupYes*
DDNSddns_updateDDNS update
DHCPLSSdhcp_leases🔍/📥/🆎Import devices from DHCP leases
DHCPSRVSdhcp_serversDHCP servers
DIGSCANdig_scan🆎Dig (DNS) Name resolution
FREEBOXfreebox🔍/♻/🆎Pull data and names from Freebox/Iliadbox
ICMPicmp_scanICMP (ping) status checker
INTRNTinternet_ip🔍Internet IP scanner
INTRSPDinternet_speedtestInternet speed test
IPNEIGHipneigh🔍Scan ARP (IPv4) and NDP (IPv6) tables
LUCIRPCluci_import🔍Import connected devices from OpenWRT
MAINTmaintenanceMaintenance of logs, etc.
MQTT_publisher_mqtt▶️MQTT for synching to Home Assistant
MTSCANmikrotik_scan🔍Mikrotik device import & sync
NBTSCANnbtscan_scan🆎Nbtscan (NetBIOS-based) name resolution
NEWDEVnewdev_templateNew device templateYes
NMAPnmap_scanNmap port scanning & discovery
NMAPDEVnmap_dev_scan🔍Nmap dev scan on current network
NSLOOKUPnslookup_scan🆎NSLookup (DNS-based) name resolution
NTFPRCSnotification_processingNotification processingYes
NTFY_publisher_ntfy▶️NTFY notifications
OMDSDNomada_sdn_imp📥/🆎 ❌UNMAINTAINED use OMDSDNOPENAPI🖧 🔄
OMDSDNOPENAPIomada_sdn_openapi📥/🆎OMADA TP-Link import via OpenAPI🖧
PIHOLEpihole_scan🔍/🆎/📥Pi-hole device import & sync
PIHOLEAPIpihole_api_scan🔍/🆎/📥Pi-hole device import & sync via API v6+
PUSHSAFER_publisher_pushsafer▶️Pushsafer notifications
PUSHOVER_publisher_pushover▶️Pushover notifications
SETPWDset_passwordSet passwordYes
SMTP_publisher_email▶️Email notifications
SNMPDSCsnmp_discovery🔍/📥SNMP device import & sync
SYNCsync🔍/⚙/📥Sync & import from NetAlertX instances🖧 🔄Yes
TELEGRAM_publisher_telegram▶️Telegram notifications
UIui_settingsUI specific settingsYes
UNFIMPunifi_import🔍/📥/🆎UniFi device import & sync🖧
UNIFIAPIunifi_api_import🔍/📥/🆎UniFi device import (SM API, multi-site)
VNDRPDTvendor_updateVendor database update
WEBHOOK_publisher_webhook▶️Webhook notifications
WEBMONwebsite_monitorWebsite down monitoring
WOLwake_on_lanAutomatic wake-on-lan
+
+

* The database cleanup plugin (DBCLNP) is not required but the app will become unusable after a while if not executed.
❌ marked for removal/unmaintained - looking for help
⌚ It's recommended to use the same schedule interval for all plugins responsible for discovering new devices.

+
+

Enabling plugins

+

Plugins can be enabled via Settings, and can be disabled as needed.

+
    +
  1. Research which plugin you'd like to use, enable DISCOVER_PLUGINS and load the required plugins in Settings via the LOADED_PLUGINS setting.
  2. +
  3. Save the changes and review the Settings of the newly loaded plugins.
  4. +
  5. Change the <prefix>_RUN Setting to the recommended or custom value as per the documentation of the given setting
      +
    • If using schedule on a 🔍 dev scanner plugin, make sure the schedules are the same across all 🔍 dev scanner plugins
    • +
    +
  6. +
+
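As a sketch, the resulting LOADED_PLUGINS entry might look something like the following. The plugin identifiers and the exact value format should be taken from your own Settings page or existing app.conf; the selection below (one dev scanner, one name-resolution plugin, one publisher) is purely illustrative:

# Illustrative selection only; required system plugins may need to stay loaded as well
LOADED_PLUGINS=['ARPSCAN','AVAHISCAN','SMTP']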

Disabling, Unloading and Ignoring plugins

+
    +
  1. Change the <prefix>_RUN Setting to disabled if you want to disable the plugin, but keep the settings
  2. +
  3. If you want to speed up the application, you can unload the plugin by unselecting it in the LOADED_PLUGINS setting.
      +
    • Careful: once you save the Settings, unloaded plugin settings will be lost (old app.conf files are kept in the /config folder)
    • +
    +
  4. +
  5. You can completely ignore plugins by placing an ignore_plugin file into the plugin directory. Ignored plugins won't show up in the LOADED_PLUGINS setting.
  6. +
+

🆕 Developing new custom plugins

+

If you want to develop a custom plugin, please read this Plugin development guide.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/PLUGINS_DEV/index.html b/PLUGINS_DEV/index.html new file mode 100644 index 00000000..cb90b9d9 --- /dev/null +++ b/PLUGINS_DEV/index.html @@ -0,0 +1,5248 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Custom Plugins - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Creating a custom plugin

+

NetAlertX comes with a plugin system to feed events from third-party scripts into the UI and then send notifications, if desired. The highlighted core functionality this plugin system supports is:

+
    +
  • dynamic creation of a simple UI to interact with the discovered objects,
  • +
  • filtering of displayed values in the Devices UI
  • +
  • surface settings of plugins in the UI,
  • +
  • different column types for reported values to e.g. link back to a device
  • +
  • import objects into existing NetAlertX database tables
  • +
+
+

(Currently, update/overwriting of existing objects is only supported for devices via the CurrentScan table.)

+
+
+

Note

+

For a high-level overview of how the config.json is used and its lifecycle, check the config.json Lifecycle in NetAlertX Guide.

+
+

🎥 Watch the video:

+
+

Tip

+

Read this guide Development environment setup guide to set up your local environment for development. 👩‍💻

+
+

Watch the video

+

📸 Screenshots

+ + + + + + + + + + + + + + + +
Screen 1Screen 2Screen 3
Screen 4Screen 5
+

Use cases

+

Example use cases for plugins could be:

+
    +
  • Monitor a web service and alert me if it's down
  • +
  • Import devices from dhcp.leases files instead/complementary to using PiHole or arp-scans
  • +
  • Creating ad-hoc UI tables from existing data in the NetAlertX database, e.g. to show all open ports on devices, to list devices that disconnected in the last hour, etc.
  • +
  • Using other device discovery methods on the network and importing the data as new devices
  • +
  • Creating a script to create FAKE devices based on user input via custom settings
  • +
  • ...at this point the limitation is mostly the creativity rather than the capability (there might be edge cases and a need to support more form controls for user input of custom settings, but you probably get the idea)
  • +
+

If you wish to develop a plugin, please check the existing plugin structure. Once the settings are saved by the user they need to be removed from the app.conf file manually if you want to re-initialize them from the config.json of the plugin.

+

⚠ Disclaimer

+

Please read the below carefully if you'd like to contribute with a plugin yourself. This documentation file might be outdated, so double-check the sample plugins as well.

+

Plugin file structure overview

+
+

⚠️ The folder name must be the same as the code name value in: "code_name": "<value>". The unique prefix needs to be unique compared to the other settings prefixes, e.g. the prefix APPRISE is already in use.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FileRequired (plugin type)Description
config.jsonyesContains the plugin configuration (manifest) including the settings available to the user.
script.pynoThe Python script itself. You may call any valid linux command.
last_result.<prefix>.lognoThe file used to interface between NetAlertX and the plugin. Required for a script plugin if you want to feed data into the app. Stored in the /api/log/plugins/
script.lognoLogging output (recommended)
README.mdyesAny setup considerations or overview
+

More on specifics below.

+

Column order and values (plugins interface contract)

+
+

Important

+

Spend some time reading and trying to understand the below table. This is the interface between the Plugins and the core application. The application expects 9 or 13 values. The first 9 values are mandatory. The next 4 values (HelpVal1 to HelpVal4) are optional. However, if you use any of these optional values (e.g., HelpVal1), you need to supply all optional values (e.g., HelpVal2, HelpVal3, and HelpVal4). If a value is not used, it should be padded with null.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Order | Represented Column | Value Required | Description
0 | Object_PrimaryID | yes | The primary ID used to group Events under.
1 | Object_SecondaryID | no | Optional secondary ID to create a relationship between other entities, such as a MAC address
2 | DateTime | yes | When the event occurred in the format 2023-01-02 15:56:30
3 | Watched_Value1 | yes | A value that is watched and users can receive notifications if it changed compared to the previously saved entry. For example IP address
4 | Watched_Value2 | no | As above
5 | Watched_Value3 | no | As above
6 | Watched_Value4 | no | As above
7 | Extra | no | Any other data you want to pass and display in NetAlertX and the notifications
8 | ForeignKey | no | A foreign key that can be used to link to the parent object (usually a MAC address)
9 | HelpVal1 | no | (optional) A helper value
10 | HelpVal2 | no | (optional) A helper value
11 | HelpVal3 | no | (optional) A helper value
12 | HelpVal4 | no | (optional) A helper value
+
+

Note

+

De-duplication is run once an hour on the Plugins_Objects database table and duplicate entries with the same value in columns Object_PrimaryID, Object_SecondaryID, Plugin (auto-filled based on unique_prefix of the plugin), UserData (can be populated with the "type": "textbox_save" column type) are removed.

+
+

config.json structure

+

The config.json file is the manifest of the plugin. It contains mainly settings definitions and the mapping of Plugin objects to NetAlertX objects.

+

Execution order

+

The execution order is used to specify when a plugin is executed. This is useful if a plugin has access to and surfaces more information than others. If a device is detected by 2 plugins and inserted into the CurrentScan table, the plugin with the higher priority (e.g. Layer_0 is a higher priority than Layer_1) will insert its values first. These values (devices) will then be prioritized over any values inserted later.

+
{
+    "execution_order" : "Layer_0"
+}
+
+

Supported data sources

+

Currently, these data sources are supported (valid data_source value).

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Name | data_source value | Needs to return a "table"* | Overview (more details on this page below)
Script | script | no | Executes any linux command in the CMD setting.
NetAlertX DB query | app-db-query | yes | Executes a SQL query on the NetAlertX database in the CMD setting.
Template | template | no | Used to generate internal settings, such as default values.
External SQLite DB query | sqlite-db-query | yes | Executes a SQL query from the CMD setting on an external SQLite database mapped in the DB_PATH setting.
Plugin type | plugin_type | no | Specifies the type of the plugin and in which section the Plugin settings are displayed ( one of general/system/scanner/other/publisher ).
+
+
    +
  • "Needs to return a "table" means that the application expects a last_result.<prefix>.log file with some results. It's not a blocker, however warnings in the app.log might be logged.
  • +
+

🔎Example

"data_source": "app-db-query"

If you want to display plugin objects or import devices into the app, data sources have to return a "table" of the exact structure as outlined above.

+
+

You can show or hide the UI on the "Plugins" page and "Plugins" tab for a plugin on devices via the show_ui property:

+
+

🔎Example

"show_ui": true,

+
+

"data_source": "script"

+

If the data_source is set to script the CMD setting (that you specify in the settings array section in the config.json) contains an executable Linux command, that usually generates a last_result.<prefix>.log file (not required if you don't import any data into the app). The last_result.<prefix>.log file needs to be saved in /api/log/plugins.

+
+

Important

+

A lot of the work is taken care of by the plugin_helper.py library. You don't need to manage the last_result.<prefix>.log file if using the helper objects. Check the script.py of other plugins for details.

+
+

The content of the last_result.<prefix>.log file needs to contain the columns as defined in the "Column order and values" section above. The order of columns can't be changed. After every scan it should contain only the results from the latest scan/execution.

+
    +
  • The format of the last_result.<prefix>.log is a csv-like file with the pipe | as a separator.
  • +
  • 9 (nine) values need to be supplied, so every line needs to contain 8 pipe separators. Empty values are represented by null.
  • +
  • Don't render "headers" for these "columns". Every scan result/event entry needs to be on a new line.
  • +
  • You can find which "columns" need to be present, and if the value is required or optional, in the "Column order and values" section.
  • +
  • The order of these "columns" can't be changed.
  • +
+

🔎 last_result.prefix.log examples

+

Valid CSV:

+

+https://www.google.com|null|2023-01-02 15:56:30|200|0.7898|null|null|null|null
+https://www.duckduckgo.com|192.168.0.1|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine|ff:ee:ff:11:ff:11
+
+
+

Invalid CSV with different errors on each line:

+

+https://www.google.com|null|2023-01-02 15:56:30|200|0.7898||null|null|null
+https://www.duckduckgo.com|null|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine|
+|https://www.duckduckgo.com|null|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine|null
+null|192.168.1.1|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine
+https://www.duckduckgo.com|192.168.1.1|2023-01-02 15:56:30|null|0.9898|null|null|Best search engine
+https://www.google.com|null|2023-01-02 15:56:30|200|0.7898|||
+https://www.google.com|null|2023-01-02 15:56:30|200|0.7898|
+
+
+

"data_source": "app-db-query"

+

If the data_source is set to app-db-query, the CMD setting needs to contain a SQL query rendering the columns as defined in the "Column order and values" section above. The order of columns is important.

+

This SQL query is executed on the app.db SQLite database file.

+
+

🔎Example

+

SQL query example:

+

SELECT dv.devName as Object_PrimaryID,
    cast(dv.devLastIP as VARCHAR(100)) || ':' || cast( SUBSTR(ns.Port ,0, INSTR(ns.Port , '/')) as VARCHAR(100)) as Object_SecondaryID,
    datetime() as DateTime,
    ns.Service as Watched_Value1,
    ns.State as Watched_Value2,
    'null' as Watched_Value3,
    'null' as Watched_Value4,
    ns.Extra as Extra,
    dv.devMac as ForeignKey
FROM
    (SELECT * FROM Nmap_Scan) ns
LEFT JOIN
    (SELECT devName, devMac, devLastIP FROM Devices) dv
ON ns.MAC = dv.devMac

+

Required CMD setting example with the above query (you can set "type": "label" if you want to make it uneditable in the UI):

+

json +{ + "function": "CMD", + "type": {"dataType":"string", "elements": [{"elementType" : "input", "elementOptions" : [] ,"transformers": []}]}, + "default_value":"SELECT dv.devName as Object_PrimaryID, cast(dv.devLastIP as VARCHAR(100)) || ':' || cast( SUBSTR(ns.Port ,0, INSTR(ns.Port , '/')) as VARCHAR(100)) as Object_SecondaryID, datetime() as DateTime, ns.Service as Watched_Value1, ns.State as Watched_Value2, 'null' as Watched_Value3, 'null' as Watched_Value4, ns.Extra as Extra FROM (SELECT * FROM Nmap_Scan) ns LEFT JOIN (SELECT devName, devMac, devLastIP FROM Devices) dv ON ns.MAC = dv.devMac", + "options": [], + "localized": ["name", "description"], + "name" : [{ + "language_code":"en_us", + "string" : "SQL to run" + }], + "description": [{ + "language_code":"en_us", + "string" : "This SQL query is used to populate the coresponding UI tables under the Plugins section." + }] + }

+
+

"data_source": "template"

+

In most cases, it is used to initialize settings. Check the newdev_template plugin for details.

+

"data_source": "sqlite-db-query"

+

You can execute a SQL query on an external database connected to the current NetAlertX database via a temporary EXTERNAL_<unique prefix>. prefix.

+

For example for PIHOLE ("unique_prefix": "PIHOLE") it is EXTERNAL_PIHOLE.. The external SQLite database file has to be mapped in the container to the path specified in the DB_PATH setting:

+
+

🔎Example

+

json + ... +{ + "function": "DB_PATH", + "type": {"dataType":"string", "elements": [{"elementType" : "input", "elementOptions" : [{"readonly": "true"}] ,"transformers": []}]}, + "default_value":"/etc/pihole/pihole-FTL.db", + "options": [], + "localized": ["name", "description"], + "name" : [{ + "language_code":"en_us", + "string" : "DB Path" + }], + "description": [{ + "language_code":"en_us", + "string" : "Required setting for the <code>sqlite-db-query</code> plugin type. Is used to mount an external SQLite database and execute the SQL query stored in the <code>CMD</code> setting." + }] + } + ...

+
+

The actual SQL query you want to execute is then stored as a CMD setting, similar to a Plugin of the app-db-query plugin type. The format has to adhere to the format outlined in the "Column order and values" section above.

+
+

🔎Example

+

Notice the EXTERNAL_PIHOLE. prefix.

+

json +{ + "function": "CMD", + "type": {"dataType":"string", "elements": [{"elementType" : "input", "elementOptions" : [] ,"transformers": []}]}, + "default_value":"SELECT hwaddr as Object_PrimaryID, cast('http://' || (SELECT ip FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1) as VARCHAR(100)) || ':' || cast( SUBSTR((SELECT name FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1), 0, INSTR((SELECT name FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1), '/')) as VARCHAR(100)) as Object_SecondaryID, datetime() as DateTime, macVendor as Watched_Value1, lastQuery as Watched_Value2, (SELECT name FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1) as Watched_Value3, 'null' as Watched_Value4, '' as Extra, hwaddr as ForeignKey FROM EXTERNAL_PIHOLE.network WHERE hwaddr NOT LIKE 'ip-%' AND hwaddr <> '00:00:00:00:00:00'; ", + "options": [], + "localized": ["name", "description"], + "name" : [{ + "language_code":"en_us", + "string" : "SQL to run" + }], + "description": [{ + "language_code":"en_us", + "string" : "This SQL query is used to populate the coresponding UI tables under the Plugins section. This particular one selects data from a mapped PiHole SQLite database and maps it to the corresponding Plugin columns." + }] + }

+
+

🕳 Filters

+

Plugin entries can be filtered in the UI based on values entered into filter fields. The txtMacFilter textbox/field contains the Mac address of the currently viewed device, or simply a Mac address that's available in the mac query string (<url>?mac=aa:22:aa:22:aa:22).

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Property | Required | Description
compare_column | yes | Plugin column name whose value is used for comparison (left side of the equation)
compare_operator | yes | JavaScript comparison operator
compare_field_id | yes | The id of an input text field whose value is used for comparison (right side of the equation)
compare_js_template | yes | JavaScript code used to convert left and right side of the equation. {value} is replaced with input values.
compare_use_quotes | yes | If true then the end result of the compare_js_template is wrapped in " quotes. Used to compare strings.
+

Filters are only applied if a filter is specified, and the txtMacFilter is not undefined, or empty (--).

+
+

🔎Example:

+

json + "data_filters": [ + { + "compare_column" : "Object_PrimaryID", + "compare_operator" : "==", + "compare_field_id": "txtMacFilter", + "compare_js_template": "'{value}'.toString()", + "compare_use_quotes": true + } + ],

+
    +
  1. On the pluginsCore.php page is an input field with the id txtMacFilter:
  2. +
+

<input class="form-control" id="txtMacFilter" type="text" value="--">

+
    +
  1. +

    This input field is initialized via the &mac= query string.

    +
  2. +
  3. +

    The app then proceeds to use this Mac value from this field and compares it to the value of the Object_PrimaryID database field. The compare_operator is ==.

    +
  4. +
  5. +

    Both values, from the database field Object_PrimaryID and from the txtMacFilter, are wrapped and evaluated with the compare_js_template, that is '{value}'.toString().

    +
  6. +
  7. +

    compare_use_quotes is set to true so '{value}'.toString() is wrapped into " quotes.

    +
  8. +
  9. +

    This results in for example this code:

    +
  10. +
+

// left part of the expression coming from compare_column and right from the input field
// notice the added quotes (") around the left and right part of the expression
eval("'ac:82:ac:82:ac:82'.toString()") == eval("'ac:82:ac:82:ac:82'.toString()")

+
+

🗺 Mapping the plugin results into a database table

+

Plugin results are always inserted into the standard Plugin_Objects database table. Optionally, NetAlertX can take the results of the plugin execution and insert these results into an additional database table. This is enabled with the property "mapped_to_table" in the config.json file. The mapping of the columns is defined in the database_column_definitions array.

+
+

Note

+

If results are mapped to the CurrentScan table, the data is then included in the regular scan loop, so for example notifications for devices are sent out.

+
+
+

🔍 Example:

+

For example, this approach is used to implement the DHCPLSS plugin. The script parses all supplied "dhcp.leases" files, gets the results in the generic table format outlined in the "Column order and values" section above, takes individual values, and inserts them into the CurrentScan database table in the NetAlertX database. All this is achieved by:

+
    +
  1. Specifying the database table into which the results are inserted by defining "mapped_to_table": "CurrentScan" in the root of the config.json file as shown below:
  2. +
+

json +{ + "code_name": "dhcp_leases", + "unique_prefix": "DHCPLSS", + ... + "data_source": "script", + "localized": ["display_name", "description", "icon"], + "mapped_to_table": "CurrentScan", + ... +} +2. Defining the target column with the mapped_to_column property for individual columns in the database_column_definitions array of the config.json file. For example in the DHCPLSS plugin, I needed to map the value of the Object_PrimaryID column returned by the plugin, to the cur_MAC column in the NetAlertX database table CurrentScan. Notice the "mapped_to_column": "cur_MAC" key-value pair in the sample below.

+

json +{ + "column": "Object_PrimaryID", + "mapped_to_column": "cur_MAC", + "css_classes": "col-sm-2", + "show": true, + "type": "device_mac", + "default_value":"", + "options": [], + "localized": ["name"], + "name":[{ + "language_code":"en_us", + "string" : "MAC address" + }] + }

+
    +
  1. That's it. The app takes care of the rest. It loops through the objects discovered by the plugin, takes the results line-by-line, and inserts them into the database table specified in "mapped_to_table". The columns are translated from the generic plugin columns to the target table columns via the "mapped_to_column" property in the column definitions.
  2. +
+
+
+

Note

+

You can create a column mapping with a default value via the mapped_to_column_data property. This means that the value of the given column will always be this value. That also means that the "column": "NameDoesntMatter" is not important as there is no database source column.

+
+
+

🔍 Example:

+

json +{ + "column": "NameDoesntMatter", + "mapped_to_column": "cur_ScanMethod", + "mapped_to_column_data": { + "value": "DHCPLSS" + }, + "css_classes": "col-sm-2", + "show": true, + "type": "device_mac", + "default_value":"", + "options": [], + "localized": ["name"], + "name":[{ + "language_code":"en_us", + "string" : "MAC address" + }] + }

+
+

params

+
+

Important

+

An easier way to access settings in scripts is the get_setting_value method:

from helper import get_setting_value

...
NTFY_TOPIC = get_setting_value('NTFY_TOPIC')
...

+
+

The params array in the config.json is used to enable the user to change the parameters of the executed script. For example, the user wants to monitor a specific URL.

+
+

🔎 Example: +Passing user-defined settings to a command. Let's say, you want to have a script, that is called with a user-defined parameter called urls:

+

bash +root@server# python3 /app/front/plugins/website_monitor/script.py urls=https://google.com,https://duck.com

+
+
    +
  • You can allow the user to add URLs to a setting with the function property set to a custom name, such as urls_to_check (this is not a reserved name from the section "Supported settings function values" below).
  • +
  • You specify the parameter urls in the params section of the config.json the following way (WEBMON_ is the plugin prefix automatically added to all the settings):
  • +
+
{
+    "params" : [
+        {
+            "name"  : "urls",
+            "type"  : "setting",
+            "value" : "WEBMON_urls_to_check"
+        }]
+}
+
+
    +
  • Then you use this setting as an input parameter for your command in the CMD setting. Notice urls={urls} in the below json:
  • +
+
 {
+            "function": "CMD",
+            "type": {"dataType":"string", "elements": [{"elementType" : "input", "elementOptions" : [] ,"transformers": []}]},
+            "default_value":"python3 /app/front/plugins/website_monitor/script.py urls={urls}",
+            "options": [],
+            "localized": ["name", "description"],
+            "name" : [{
+                "language_code":"en_us",
+                "string" : "Command"
+            }],
+            "description": [{
+                "language_code":"en_us",
+                "string" : "Command to run"
+            }]
+        }
+
+

During script execution, the app will take the command "python3 /app/front/plugins/website_monitor/script.py urls={urls}", take the {urls} wildcard and replace it with the value from the WEBMON_urls_to_check setting. This is because:

+
    +
  1. The app checks the params entries
  2. +
  3. It finds "name" : "urls"
  4. +
  5. Checks the type of the urls params and finds "type" : "setting"
  6. +
  7. Gets the setting name from "value" : "WEBMON_urls_to_check"
  8. +
  9. IMPORTANT: in the config.json this setting is identified by "function":"urls_to_check", not "function":"WEBMON_urls_to_check"
  10. +
  11. You can also use a global setting, or a setting from a different plugin
  12. +
  13. The app gets the user defined value from the setting with the code name WEBMON_urls_to_check
  14. +
  15. let's say the setting with the code name WEBMON_urls_to_check contains 2 values entered by the user:
  16. +
  17. WEBMON_urls_to_check=['https://google.com','https://duck.com']
  18. +
  19. The app takes the value from WEBMON_urls_to_check and replaces the {urls} wildcard in the setting where "function":"CMD", so you go from:
  20. +
  21. python3 /app/front/plugins/website_monitor/script.py urls={urls}
  22. +
  23. to
  24. +
  25. python3 /app/front/plugins/website_monitor/script.py urls=https://google.com,https://duck.com
  26. +
+

Below are some general additional notes, when defining params:

+
    +
  • "name":"name_value" - is used as a wildcard replacement in the CMD setting value by using curly brackets {name_value}. The wildcard is replaced by the result of the "value" : "param_value" and "type":"type_value" combo configuration below.
  • +
  • "type":"<sql|setting>" - is used to specify the type of the params, currently only 2 supported (sql,setting).
  • +
  • "type":"sql" - will execute the SQL query specified in the value property. The sql query needs to return only one column. The column is flattened and separated by commas (,), e.g: SELECT devMac from DEVICES -> Internet,74:ac:74:ac:74:ac,44:44:74:ac:74:ac. This is then used to replace the wildcards in the CMD setting.
  • +
  • "type":"setting" - The setting code name. A combination of the value from unique_prefix + _ + function value, or otherwise the code name you can find in the Settings page under the Setting display name, e.g. PIHOLE_RUN.
  • +
  • "value": "param_value" - Needs to contain a setting code name or SQL query without wildcards.
  • +
  • "timeoutMultiplier" : true - used to indicate if the value should multiply the max timeout for the whole script run by the number of values in the given parameter.
  • +
  • "base64": true - use base64 encoding to pass the value to the script (e.g. if there are spaces)
  • +
+
+

🔎Example:

+

json +{ + "params" : [{ + "name" : "ips", + "type" : "sql", + "value" : "SELECT devLastIP from DEVICES", + "timeoutMultiplier" : true + }, + { + "name" : "macs", + "type" : "sql", + "value" : "SELECT devMac from DEVICES" + }, + { + "name" : "timeout", + "type" : "setting", + "value" : "NMAP_RUN_TIMEOUT" + }, + { + "name" : "args", + "type" : "setting", + "value" : "NMAP_ARGS", + "base64" : true + }] +}

+
+

⚙ Setting object structure

+
+

Note

+

The settings flow and when Plugin specific settings are applied is described under the Settings system.

+
+

Required attributes are:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
PropertyDescription
"function"Specifies the function the setting drives or a simple unique code name. See Supported settings function values for options.
"type"Specifies the form control used for the setting displayed in the Settings page and what values are accepted. Supported options include:
- {"dataType":"string", "elements": [{"elementType" : "input", "elementOptions" : [{"type":"password"}] ,"transformers": ["sha256"]}]}
"localized"A list of properties on the current JSON level that need to be localized.
"name"Displayed on the Settings page. An array of localized strings. See Localized strings below.
"description"Displayed on the Settings page. An array of localized strings. See Localized strings below.
(optional) "events"Specifies whether to generate an execution button next to the input field of the setting. Supported values:
- "test" - For notification plugins testing
- "run" - Regular plugins testing
(optional) "override_value"Used to determine a user-defined override for the setting. Useful for template-based plugins, where you can choose to leave the current value or override it with the value defined in the setting. (Work in progress)
(optional) "events"Used to trigger the plugin. Usually used on the RUN setting. Not fully tested in all scenarios. Will show a play button next to the setting. After clicking, an event is generated for the backend in the Parameters database table to process the front-end event on the next run.
+

UI Component Types Documentation

+

This section outlines the structure and types of UI components, primarily used to build HTML forms or interactive elements dynamically. Each UI component has a "type" which defines its structure, behavior, and rendering options.

+

UI Component JSON Structure

+

The UI component is defined as a JSON object containing a list of elements. Each element specifies how it should behave, with properties like elementType, elementOptions, and any associated transformers to modify the data. The example below demonstrates how a component with two elements (span and select) is structured:

+
{
+      "function": "devIcon",
+      "type": {
+        "dataType": "string",
+        "elements": [
+          {
+            "elementType": "span",
+            "elementOptions": [
+              { "cssClasses": "input-group-addon iconPreview" },
+              { "getStringKey": "Gen_SelectToPreview" },
+              { "customId": "NEWDEV_devIcon_preview" }
+            ],
+            "transformers": []
+          },
+          {
+            "elementType": "select",
+            "elementHasInputValue": 1,
+            "elementOptions": [
+              { "cssClasses": "col-xs-12" },
+              {
+                "onChange": "updateIconPreview(this)"
+              },
+              { "customParams": "NEWDEV_devIcon,NEWDEV_devIcon_preview" }
+            ],
+            "transformers": []
+          }          
+        ]
+      }
+}
+
+
+

Rendering Logic

+

The code snippet provided demonstrates how the elements are iterated over to generate their corresponding HTML. Depending on the elementType, different HTML tags (like <select>, <input>, <textarea>, <button>, etc.) are created with the respective attributes such as onChange, my-data-type, and class based on the provided elementOptions. Events can also be attached to elements like buttons or select inputs.

+

Key Element Types

+
    +
  • select: Renders a dropdown list. Additional options like isMultiSelect and event handlers (e.g., onChange) can be attached.
  • +
  • input: Handles various types of input fields, including checkboxes, text, and others, with customizable attributes like readOnly, placeholder, etc.
  • +
  • button: Generates clickable buttons with custom event handlers (onClick), icons, or labels.
  • +
  • textarea: Creates a multi-line input box for text input.
  • +
  • span: Used for inline text or content with customizable classes and data attributes.
  • +
+

Each element may also have associated events (e.g., running a scan or triggering a notification) defined under Events.

+
Supported settings function values
+

You can have any "function": "my_custom_name" custom name; however, the ones listed below have a specific functionality attached to them.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
SettingDescription
RUN(required) Specifies when the service is executed.
Supported Options:
- "disabled" - do not run
- "once" - run on app start or on settings saved
- "schedule" - if included, then a RUN_SCHD setting needs to be specified to determine the schedule
- "always_after_scan" - run always after a scan is finished
- "before_name_updates" - run before device names are updated (for name discovery plugins)
- "on_new_device" - run when a new device is detected
- "before_config_save" - run before the config is marked as saved. Useful if your plugin needs to modify the app.conf file.
RUN_SCHD(required if you include "schedule" in the above RUN function) Cron-like scheduling is used if the RUN setting is set to schedule.
CMD(required) Specifies the command that should be executed.
API_SQL(not implemented) Generates a table_ + code_name + .json file as per API docs.
RUN_TIMEOUT(optional) Specifies the maximum execution time of the script. If not specified, a default value of 10 seconds is used to prevent hanging.
WATCH(optional) Specifies which database columns are watched for changes for this particular plugin. If not specified, no notifications are sent.
REPORT_ON(optional) Specifies when to send a notification. Supported options are:
- new means a new unique (unique combination of PrimaryId and SecondaryId) object was discovered.
- watched-changed - means that selected Watched_ValueN columns changed
- watched-not-changed - reports even on events where selected Watched_ValueN did not change
- missing-in-last-scan - if the object is missing compared to previous scans
+
+

🔎 Example:

+

```json
{
  "function": "RUN",
  "type": {"dataType":"string", "elements": [{"elementType" : "select", "elementOptions" : [] ,"transformers": []}]},
  "default_value":"disabled",
  "options": ["disabled", "once", "schedule", "always_after_scan", "on_new_device"],
  "localized": ["name", "description"],
  "name" :[{
      "language_code":"en_us",
      "string" : "When to run"
  }],
  "description": [{
      "language_code":"en_us",
      "string" : "Enable a regular scan of your services. If you select <code>schedule</code> the scheduling settings from below are applied. If you select <code>once</code> the scan is run only once on start of the application (container) for the time specified in <a href=\"#WEBMON_RUN_TIMEOUT\"><code>WEBMON_RUN_TIMEOUT</code> setting</a>."
  }]
}
```

+
+
🌍Localized strings
+
    +
  • "language_code":"<en_us|es_es|de_de>" - code name of the language string. Only these three are currently supported. At least the "language_code":"en_us" variant has to be defined.
  • +
  • "string" - The string to be displayed in the given language.
  • +
+
+

🔎 Example:

+

```json

+
{
+    "language_code":"en_us",
+    "string" : "When to run"
+}
+
+

```

+
+
UI settings in database_column_definitions
+

The UI adjusts how columns are displayed based on the resolver definitions in the database_column_definitions object. These are the supported form controls and related functionality:

+
    +
  • Only columns with "show": true and also with at least an English translation will be shown in the UI.
  • +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Supported TypesDescription
labelDisplays a column only.
textarea_readonlyGenerates a read only text area and cleans up the text to display it somewhat formatted with new lines preserved.
See below for information on threshold, replace.
options PropertyUsed in conjunction with types like threshold, replace, regex.
options_params PropertyUsed in conjunction with an "options": "[{value}]" template and text.select/list.select. Can specify an SQL query (which needs to return 2 columns, e.g. SELECT devName as name, devMac as id) or a Setting (not tested) to populate the dropdown. Check the example below or have a look at the NEWDEV plugin config.json file.
thresholdThe options array contains objects ordered from the lowest maximum to the highest. The corresponding hexColor is used for the value background color if it's less than the specified maximum but more than the previous one in the options array.
replaceThe options array contains objects with an equals property, which is compared to the "value." If the values are the same, the string in replacement is displayed in the UI instead of the actual "value".
regexApplies a regex to the value. The options array contains objects with a type property (must be set to regex) and a param property (must contain the regex itself).
Type Definitions
device_macThe value is considered to be a MAC address, and a link pointing to the device with the given MAC address is generated.
device_ipThe value is considered to be an IP address. A link pointing to the device with the given IP is generated. The IP is checked against the last detected IP address and translated into a MAC address, which is then used for the link itself.
device_name_macThe value is considered to be a MAC address, and a link pointing to the device with the given MAC is generated. The link label is resolved as the target device name.
urlThe value is considered to be a URL, so a link is generated.
textbox_saveGenerates an editable and saveable text box that saves values in the database. Primarily intended for the UserData database column in the Plugins_Objects table.
url_http_httpsGenerates two links with the https and http prefix as lock icons.
evalEvaluates as JavaScript. Use the variable value to use the given column value as input (e.g. '<b>${value}</b>' (replace ' with ` in your code)).
+
+

Note

+

Supports chaining. You can chain multiple resolvers with a . (dot), for example regex.url_http_https. This will apply the regex resolver first and then the url_http_https resolver.
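As a rough Python sketch of the chaining idea (the two resolver implementations below are simplified stand-ins, not the actual UI code):

```python
import re

def resolve_regex(value, options):
    # options is assumed to hold an object like {"type": "regex", "param": "<pattern>"}
    pattern = next(o["param"] for o in options if o.get("type") == "regex")
    match = re.search(pattern, value)
    return match.group(1) if match else value

def resolve_url_http_https(value, options):
    # Simplified: render an https and an http link for the value
    return f'<a href="https://{value}">https</a> <a href="http://{value}">http</a>'

RESOLVERS = {"regex": resolve_regex, "url_http_https": resolve_url_http_https}

def apply_chain(chain, value, options):
    # "regex.url_http_https" applies the regex resolver first, then url_http_https
    for name in chain.split("."):
        value = RESOLVERS[name](value, options)
    return value

print(apply_chain("regex.url_http_https", "server 192.168.1.2:80 up",
                  [{"type": "regex", "param": r"([\d.:]+)"}]))
```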

+
+
        "function": "devType",
+        "type": {"dataType":"string", "elements": [{"elementType" : "select", "elementOptions" : [] ,"transformers": []}]},
+        "maxLength": 30,
+        "default_value": "",
+        "options": ["{value}"],
+        "options_params" : [
+            {
+                "name"  : "value",
+                "type"  : "sql",
+                "value" : "SELECT '' as id, '' as name UNION SELECT devType as id, devType as name FROM (SELECT devType FROM Devices UNION SELECT 'Smartphone' UNION SELECT 'Tablet' UNION SELECT 'Laptop' UNION SELECT 'PC' UNION SELECT 'Printer' UNION SELECT 'Server' UNION SELECT 'NAS' UNION SELECT 'Domotic' UNION SELECT 'Game Console' UNION SELECT 'SmartTV' UNION SELECT 'Clock' UNION SELECT 'House Appliance' UNION SELECT 'Phone' UNION SELECT 'AP' UNION SELECT 'Gateway' UNION SELECT 'Firewall' UNION SELECT 'Switch' UNION SELECT 'WLAN' UNION SELECT 'Router' UNION SELECT 'Other') AS all_devices ORDER BY id;"
+            },
+            {
+                "name"  : "uilang",
+                "type"  : "setting",
+                "value" : "UI_LANG"
+            }
+        ]
+
+
{
+            "column": "Watched_Value1",
+            "css_classes": "col-sm-2",
+            "show": true,
+            "type": "threshold",            
+            "default_value":"",
+            "options": [
+                {
+                    "maximum": 199,
+                    "hexColor": "#792D86"                
+                },
+                {
+                    "maximum": 299,
+                    "hexColor": "#5B862D"
+                },
+                {
+                    "maximum": 399,
+                    "hexColor": "#7D862D"
+                },
+                {
+                    "maximum": 499,
+                    "hexColor": "#BF6440"
+                },
+                {
+                    "maximum": 599,
+                    "hexColor": "#D33115"
+                }
+            ],
+            "localized": ["name"],
+            "name":[{
+                "language_code":"en_us",
+                "string" : "Status code"
+                }]
+        },        
+        {
+            "column": "Status",
+            "show": true,
+            "type": "replace",            
+            "default_value":"",
+            "options": [
+                {
+                    "equals": "watched-not-changed",
+                    "replacement": "<i class='fa-solid fa-square-check'></i>"
+                },
+                {
+                    "equals": "watched-changed",
+                    "replacement": "<i class='fa-solid fa-triangle-exclamation'></i>"
+                },
+                {
+                    "equals": "new",
+                    "replacement": "<i class='fa-solid fa-circle-plus'></i>"
+                }
+            ],
+            "localized": ["name"],
+            "name":[{
+                "language_code":"en_us",
+                "string" : "Status"
+                }]
+        },
+        {
+            "column": "Watched_Value3",
+            "css_classes": "col-sm-1",
+            "show": true,
+            "type": "regex.url_http_https",            
+            "default_value":"",
+            "options": [
+                {
+                    "type": "regex",
+                    "param": "([\\d.:]+)"
+                }          
+            ],
+            "localized": ["name"],
+            "name":[{
+                "language_code":"en_us",
+                "string" : "HTTP/s links"
+                },
+                {
+                "language_code":"es_es",
+                "string" : "N/A"
+                }]
+        }
+
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/PLUGINS_DEV_CONFIG/index.html b/PLUGINS_DEV_CONFIG/index.html new file mode 100644 index 00000000..3a96a74f --- /dev/null +++ b/PLUGINS_DEV_CONFIG/index.html @@ -0,0 +1,4763 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Plugin Config - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Plugins Implementation Details

+

Plugins provide data to the NetAlertX core, which processes it to detect changes, discover new devices, raise alerts, and apply heuristics.

+
+

Overview: Plugin Data Flow

+
    +
  1. Each plugin runs on a defined schedule.
  2. +
  3. Aligning all plugin schedules is recommended so they execute in the same loop.
  4. +
  5. During execution, all plugins write their collected data into the CurrentScan table.
  6. +
  7. After all plugins complete, the CurrentScan table is evaluated to detect new devices, changes, and triggers.
  8. +
+

Although plugins run independently, they contribute to the shared CurrentScan table. +To inspect its contents, set LOG_LEVEL=trace and check for the log section:

+
================ CurrentScan table content ================
+
+
+

config.json Lifecycle

+

This section outlines how each plugin’s config.json manifest is read, validated, and used by the core and plugins. +It also describes plugin output expectations and the main plugin categories.

+
+

Tip

+

For detailed schema and examples, see the Plugin Development Guide.

+
+
+

1. Loading

+
    +
  • On startup, the core loads config.json for each plugin.
  • +
  • The file acts as a plugin manifest, defining metadata, runtime configuration, and database mappings.
  • +
+
+

2. Validation

+
    +
  • The core validates required keys (for example, RUN).
  • +
  • Missing or invalid entries may be replaced with defaults or cause the plugin to be disabled.
  • +
+
+

3. Preparation

+
    +
  • Plugin parameters (paths, commands, and options) are prepared for execution.
  • +
  • Database mappings (mapped_to_table, database_column_definitions) are parsed to define how data integrates with the main app.
  • +
+
+

4. Execution

+
    +
  • +

    Plugins may run:

    +
  • +
  • +

    On a fixed schedule.

    +
  • +
  • Once at startup.
  • +
  • After a notification or other trigger.
  • +
  • The scheduler executes plugins according to their interval.
  • +
+
+

5. Parsing

+
    +
  • Plugin output must be pipe-delimited (|).
  • +
  • The core parses each output line following the Plugin Interface Contract, splitting and mapping fields accordingly.
  • +
+
+

6. Mapping

+
    +
  • Parsed fields are inserted into the plugin’s Plugins_* table.
  • +
  • +

    Data can be mapped into other tables (e.g., Devices, CurrentScan) as defined by:

    +
  • +
  • +

    database_column_definitions

    +
  • +
  • mapped_to_table
  • +
+

Example: Object_PrimaryID → devMAC

+
+

6a. Plugin Output Contract

+

All plugins must follow the Plugin Interface Contract defined in PLUGINS_DEV.md. +Output values are pipe-delimited in a fixed order.
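For illustration, a minimal Python sketch of composing such a line; the exact field order and any timestamp fields are defined by the contract in PLUGINS_DEV.md, so treat the order below as an assumption limited to the fields discussed here:

```python
# Illustrative only: compose a pipe-delimited plugin output line.
def build_output_line(primary_id, secondary_id, watched, extra="", helpers=("", "", "")):
    watched = list(watched) + [""] * (4 - len(watched))  # pad Watched_Value1-4
    fields = [primary_id, secondary_id, *watched[:4], extra, *helpers]
    return "|".join(str(f) for f in fields)

# e.g. a discovery plugin reporting MAC|IP plus a hostname as a watched value
line = build_output_line("00:11:22:33:44:55", "192.168.1.10", ["my-laptop", "online"])
print(line)

# The core later splits each line back into fields before mapping them
print(line.split("|"))
```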

+

Identifiers

+
    +
  • Object_PrimaryID and Object_SecondaryID uniquely identify records (for example, MAC|IP).
  • +
+

Watched Values (Watched_Value1–4)

+
    +
  • Used by the core to detect changes between runs.
  • +
  • Changes in these fields can trigger notifications.
  • +
+

Extra Field (Extra)

+
    +
  • Optional additional value.
  • +
  • Stored in the database but not used for alerts.
  • +
+

Helper Values (Helper_Value1–3)

+
    +
  • Optional auxiliary data (for display or plugin logic).
  • +
  • Stored but not alert-triggering.
  • +
+

Mapping

+
    +
  • While the output format is flexible, the plugin’s manifest determines the destination and type of each field.
  • +
+
+

7. Persistence

+
    +
  • Parsed data is upserted into the database.
  • +
  • Conflicts are resolved using the combined key: Object_PrimaryID + Object_SecondaryID (see the sketch after this list).
  • +
+
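A minimal sketch of the upsert described above, assuming SQLite and illustrative table/column names (the real Plugins_* tables are created and managed by the core):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Illustrative schema only
con.execute("""CREATE TABLE plugin_objects (
    Plugin TEXT, Object_PrimaryID TEXT, Object_SecondaryID TEXT, Watched_Value1 TEXT,
    PRIMARY KEY (Plugin, Object_PrimaryID, Object_SecondaryID))""")

def upsert(plugin, primary_id, secondary_id, watched1):
    # A conflict on the combined key updates the existing row instead of inserting a duplicate
    con.execute("""INSERT INTO plugin_objects VALUES (?, ?, ?, ?)
                   ON CONFLICT(Plugin, Object_PrimaryID, Object_SecondaryID)
                   DO UPDATE SET Watched_Value1 = excluded.Watched_Value1""",
                (plugin, primary_id, secondary_id, watched1))

upsert("ARPSCAN", "00:11:22:33:44:55", "192.168.1.10", "my-laptop")
upsert("ARPSCAN", "00:11:22:33:44:55", "192.168.1.10", "my-laptop-renamed")  # updates in place
print(con.execute("SELECT * FROM plugin_objects").fetchall())
```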
+

Plugin Categories

+

Plugins fall into several functional categories depending on their purpose and expected outputs.

+

1. Device Discovery Plugins

+
    +
  • Inputs: None, subnet, or discovery API.
  • +
  • Outputs: MAC and IP for new or updated device records in Devices.
  • +
  • Mapping: Required – usually into CurrentScan.
  • +
  • Examples: ARPSCAN, NMAPDEV.
  • +
+
+

2. Device Data Enrichment Plugins

+
    +
  • Inputs: Device identifiers (MAC, IP).
  • +
  • Outputs: Additional metadata (for example, open ports or sensors).
  • +
  • Mapping: Controlled via manifest definitions.
  • +
  • Examples: NMAP, MQTT.
  • +
+
+

3. Name Resolver Plugins

+
    +
  • Inputs: Device identifiers (MAC, IP, hostname).
  • +
  • Outputs: Updated devName and devFQDN.
  • +
  • Mapping: Typically none.
  • +
  • Note: Adding new resolvers currently requires a core change.
  • +
  • Examples: NBTSCAN, NSLOOKUP.
  • +
+
+

4. Generic Plugins

+
    +
  • Inputs: Custom, based on the plugin logic or script.
  • +
  • Outputs: Data displayed under Integrations → Plugins only.
  • +
  • Mapping: Not required.
  • +
  • Examples: INTRSPD, custom monitoring scripts.
  • +
+
+

5. Configuration-Only Plugins

+
    +
  • Inputs/Outputs: None at runtime.
  • +
  • Purpose: Used for configuration or maintenance tasks.
  • +
  • Examples: MAINT, CSVBCKP.
  • +
+
+

Post-Processing

+

After persistence:

+
    +
  • The core generates notifications for any watched value changes.
  • +
  • The UI updates with new or modified data.
  • +
  • Plugins with UI-enabled data display under Integrations → Plugins.
  • +
+
+

Summary

+

The lifecycle of a plugin configuration is:

+

Load → Validate → Prepare → Execute → Parse → Map → Persist → Post-process

+

Each plugin must:

+
    +
  • Follow the output contract.
  • +
  • Declare its type and expected output structure.
  • +
  • Define mappings and watched values clearly in config.json.
  • +
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/RANDOM_MAC/index.html b/RANDOM_MAC/index.html new file mode 100644 index 00000000..41ce3760 --- /dev/null +++ b/RANDOM_MAC/index.html @@ -0,0 +1,4146 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Random MAC - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Privacy & Random MACs

+ + +

Some operating systems randomize MAC addresses to improve privacy.

+

This functionality hides the real MAC of the device and assigns a random MAC when connecting to Wi-Fi networks.

+

This behavior is especially useful when connecting to unknown Wi-Fi networks, but it serves no purpose when connecting to your own or otherwise known networks.

+

I recommend disabling this on-device functionality when connecting devices to your own Wi-Fi networks. This way, NetAlertX can identify the device consistently and will not flag it as a new device every time iOS or Android randomizes the MAC.

+

Random MACs are recognized by the character "2", "6", "A", or "E" as the second character of the MAC address. You can exclude specific prefixes from being detected as random MAC addresses by specifying the UI_NOT_RANDOM_MAC setting.
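A small Python sketch of that second-character check (illustrative only; the actual detection is done by the app):

```python
def is_random_mac(mac, not_random_prefixes=()):
    # Randomized (locally administered) MACs have 2, 6, A or E as the second character
    if any(mac.upper().startswith(p.upper()) for p in not_random_prefixes):
        return False  # prefixes excluded via the UI_NOT_RANDOM_MAC setting
    return mac[1].upper() in ("2", "6", "A", "E")

print(is_random_mac("DA:16:9C:12:34:56"))                 # True  ("A" as 2nd character)
print(is_random_mac("00:1A:2B:3C:4D:5E"))                 # False
print(is_random_mac("D2:EF:00:12:34:56", ["D2:EF:00"]))   # False (excluded prefix)
```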

+

Windows

+

windows

+ +

IOS

+

ios

+ +

Android

+

ios

+ + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/REMOTE_NETWORKS/index.html b/REMOTE_NETWORKS/index.html new file mode 100644 index 00000000..63893d5a --- /dev/null +++ b/REMOTE_NETWORKS/index.html @@ -0,0 +1,4167 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Remote Networks - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Scanning Remote or Inaccessible Networks

+

By design, local network scanners such as arp-scan use ARP (Address Resolution Protocol) to map IP addresses to MAC addresses on the local network. Since ARP operates at Layer 2 (Data Link Layer), it typically works only within a single broadcast domain, usually limited to a single router or network segment.

+
+

Note

+

Ping and ARPSCAN use different protocols, so even if you can ping a device it doesn't mean ARPSCAN can detect it.

+
+

To scan multiple locally accessible network segments, add them as subnets according to the subnets documentation. If ARPSCAN is not suitable for your setup, read on.

+

Complex Use Cases

+

The following network setups might make some devices undetectable with ARPSCAN. Check the specific setup to understand the cause and find potential workarounds to report on these devices.

+

Wi-Fi Extenders

+

Wi-Fi extenders typically create a separate network or subnet, which can prevent network scanning tools like arp-scan from detecting devices behind the extender.

+
+

Possible workaround: Scan the specific subnet that the extender uses, if it is separate from the main network.

+
+

VPNs

+

ARP operates at Layer 2 (Data Link Layer) and works only within a local area network (LAN). VPNs, which operate at Layer 3 (Network Layer), route traffic between networks, preventing ARP requests from discovering devices outside the local network.

+

VPNs use virtual interfaces (e.g., tun0, tap0) to encapsulate traffic, bypassing ARP-based discovery. Additionally, many VPNs use NAT, which masks individual devices behind a shared IP address.

+
+

Possible workaround: Configure the VPN to bridge networks instead of routing to enable ARP, though this depends on the VPN setup and security requirements.

+
+

Other Workarounds

+

The following workarounds should work for most complex network setups.

+

Supplementing Plugins

+

You can use supplementary plugins that employ alternate methods. Protocols used by the SNMPDSC or DHCPLSS plugins are widely supported on different routers and can be effective as workarounds. Check the plugins list to find a plugin that works with your router and network setup.

+

Multiple NetAlertX Instances

+

If you have servers in different networks, you can set up separate NetAlertX instances on those subnets and synchronize the results into one instance using the SYNC plugin.

+

Manual Entry

+

If you don't need to discover new devices and only need to report on their status (online, offline, down), you can manually enter devices and check their status using the ICMP plugin, which uses the ping command internally.

+

For more information on how to add devices manually (or dummy devices), refer to the Device Management documentation.

+

To create truly dummy devices, you can use a loopback IP address (e.g., 0.0.0.0 or 127.0.0.1) so they appear online.

+

NMAP and Fake MAC Addresses

+

Scanning remote networks with NMAP is possible (via the NMAPDEV plugin), but since it cannot retrieve the MAC address, you need to enable the NMAPDEV_FAKE_MAC setting. This will generate a fake MAC address based on the IP address, allowing you to track devices. However, this can lead to inconsistencies, especially if the IP address changes or a previously logged device is rediscovered. If this setting is disabled, only the IP address will be discovered, and devices with missing MAC addresses will be skipped.
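For intuition only, here is a hedged Python sketch of deriving a deterministic, locally administered "fake" MAC from an IP address; this is not the actual NMAPDEV implementation, just an illustration of why such a generated MAC is stable for a given IP and changes when the IP changes:

```python
import hashlib

def fake_mac_from_ip(ip):
    # Hash the IP so the same IP always yields the same pseudo-MAC (illustrative only)
    digest = hashlib.md5(ip.encode()).hexdigest()
    octets = [digest[i:i + 2] for i in range(0, 12, 2)]
    octets[0] = "02"  # locally administered, unicast prefix
    return ":".join(octets).upper()

print(fake_mac_from_ip("10.0.0.5"))  # stable for the same IP
print(fake_mac_from_ip("10.0.0.6"))  # different if the IP changes, hence the tracking caveat
```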

+

Check the NMAPDEV plugin for details

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/REVERSE_DNS/index.html b/REVERSE_DNS/index.html new file mode 100644 index 00000000..8bf69850 --- /dev/null +++ b/REVERSE_DNS/index.html @@ -0,0 +1,4261 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Reverse DNS - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Reverse DNS

+ +

Setting up better name discovery with Reverse DNS

+

If you are running a DNS server, such as AdGuard, set up Private reverse DNS servers for better name resolution on your network. Enabling this setting allows NetAlertX to execute dig and nslookup commands to automatically resolve device names based on their IP addresses.

+
+

Tip

+

Before proceeding, ensure that name resolution plugins are enabled. +You can customize how names are cleaned using the NEWDEV_NAME_CLEANUP_REGEX setting. +To auto-update Fully Qualified Domain Names (FQDN), enable the REFRESH_FQDN setting.
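As a rough illustration of the cleanup step, a Python sketch applying a cleanup regex to a resolved name; the pattern below is only an example, not the default value of NEWDEV_NAME_CLEANUP_REGEX:

```python
import re

# Example pattern only: strip a trailing ".localdomain"/".local" suffix and a trailing dot
NAME_CLEANUP_REGEX = r"(\.localdomain|\.local)?\.?$"

def clean_name(raw_name):
    return re.sub(NAME_CLEANUP_REGEX, "", raw_name, flags=re.IGNORECASE)

print(clean_name("jokob-NUC.localdomain."))  # -> jokob-NUC
```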

+
+
+

Example 1: Reverse DNS disabled

+

jokob@Synology-NAS:/$ nslookup 192.168.1.58
** server can't find 58.1.168.192.in-addr.arpa: NXDOMAIN

+

Example 2: Reverse DNS enabled

+

jokob@Synology-NAS:/$ nslookup 192.168.1.58
58.1.168.192.in-addr.arpa    name = jokob-NUC.localdomain.

+
+

Enabling reverse DNS in AdGuard

+
    +
  1. Navigate to Settings -> DNS Settings
  2. +
  3. Locate Private reverse DNS servers
  4. +
  5. Enter your router IP address, such as 192.168.1.1
  6. +
  7. Make sure you have Use private reverse DNS resolvers ticked.
  8. +
  9. Click Apply to save your settings.
  10. +
+

Specifying the DNS in the container

+

You can specify the DNS server in the docker-compose to improve name resolution on your network.

+
services:
+  netalertx:
+    container_name: netalertx
+    image: "ghcr.io/jokob-sk/netalertx:latest"
+...
+    dns:           # specifying the DNS servers used for the container
+      - 10.8.0.1
+      - 10.8.0.17
+
+

Using a custom resolv.conf file

+

You can configure a custom /etc/resolv.conf file in docker-compose.yml and set the nameserver to your LAN DNS server (e.g.: Pi-Hole). See the relevant resolv.conf man entry for details.

+

docker-compose.yml:

+
version: "3"
+services:
+  netalertx:
+    container_name: netalertx
+    volumes:
+...
+      - /local_data_dir/config/resolv.conf:/etc/resolv.conf                          # ⚠ Mapping the /resolv.conf file for better name resolution
+...
+
+

/local_data_dir/config/resolv.conf:

+

The most important entry below is the nameserver (you can add multiple):

+
nameserver 192.168.178.11
+options edns0 trust-ad
+search example.com
+
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/REVERSE_PROXY/index.html b/REVERSE_PROXY/index.html new file mode 100644 index 00000000..d994d556 --- /dev/null +++ b/REVERSE_PROXY/index.html @@ -0,0 +1,4821 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Reverse Proxy - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Reverse Proxy Configuration

+
+

Submitted by amazing cvc90 🙏

+
+
+

Note

+

There are various NGINX config files for NetAlertX, some for the bare-metal install, currently Debian 12 and Ubuntu 24 (netalertx.conf), and one for the docker container (netalertx.template.conf).

+

You can find the first one in the respective bare-metal installer folder /app/install/<system>/netalertx.conf. The docker one can be found in the install folder. Map, or use, the one appropriate for your setup.

+
+


+

NGINX HTTP Configuration (Direct Path)

+
    +
  1. +

    On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

    +
  2. +
  3. +

    In this file, paste the following code:

    +
  4. +
+
   server {
+     listen 80;
+     server_name netalertx;
+
+     location / {
+          proxy_set_header Host $host;
+          proxy_pass http://localhost:20211/;
+     }
+    }
+
+
    +
  1. Activate the new website by running the following command:
  2. +
+

nginx -s reload or systemctl restart nginx

+
    +
  1. +

    Check your config with nginx -t. If there are any issues, it will tell you.

    +
  2. +
  3. +

    Once NGINX restarts, you should be able to access the proxy website at http://netalertx/

    +
  4. +
+


+

NGINX HTTP Configuration (Sub Path)

+
    +
  1. +

    On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

    +
  2. +
  3. +

    In this file, paste the following code:

    +
  4. +
+
   server {
+     listen 80;
+     server_name netalertx;
+
+     location ^~ /netalertx/ {
+          proxy_set_header Host $host;
+          proxy_pass http://localhost:20211/;
+          proxy_redirect ~^/(.*)$ /netalertx/$1;
+          rewrite ^/netalertx/?(.*)$ /$1 break;
+     }
+    }
+
+
    +
  1. +

    Check your config with nginx -t. If there are any issues, it will tell you.

    +
  2. +
  3. +

    Activate the new website by running the following command:

    +
  4. +
+

nginx -s reload or systemctl restart nginx

+
    +
  1. Once NGINX restarts, you should be able to access the proxy website at http://netalertx/netalertx/
  2. +
+


+

NGINX HTTP Configuration (Sub Path) with module ngx_http_sub_module

+
    +
  1. +

    On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

    +
  2. +
  3. +

    In this file, paste the following code:

    +
  4. +
+
+    server {
+     listen 80;
+     server_name netalertx;
+
+     location ^~ /netalertx/ {
+          proxy_set_header Host $host;
+          proxy_pass http://localhost:20211/;
+          proxy_redirect ~^/(.*)$ /netalertx/$1;
+          rewrite ^/netalertx/?(.*)$ /$1 break;
+
+      sub_filter_once off;
+      sub_filter_types *;
+      sub_filter 'href="/' 'href="/netalertx/';
+      sub_filter '(?>$host)/css' '/netalertx/css';
+      sub_filter '(?>$host)/js'  '/netalertx/js';
+      sub_filter '/img' '/netalertx/img';
+      sub_filter '/lib' '/netalertx/lib';
+      sub_filter '/php' '/netalertx/php';
+     }
+    }
+
+
    +
  1. +

    Check your config with nginx -t. If there are any issues, it will tell you.

    +
  2. +
  3. +

    Activate the new website by running the following command:

    +
  4. +
+

nginx -s reload or systemctl restart nginx

+
    +
  1. Once NGINX restarts, you should be able to access the proxy website at http://netalertx/netalertx/
  2. +
+


+

NGINX HTTPS Configuration (Direct Path)

+
    +
  1. +

    On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

    +
  2. +
  3. +

    In this file, paste the following code:

    +
  4. +
+
   server {
+     listen 443 ssl;
+     server_name netalertx;
+     ssl_certificate /etc/ssl/certs/netalertx.pem;
+     ssl_certificate_key /etc/ssl/private/netalertx.key;
+
+     location / {
+          proxy_set_header Host $host;
+          proxy_pass http://localhost:20211/;
+     }
+    }
+
+
    +
  1. +

    Check your config with nginx -t. If there are any issues, it will tell you.

    +
  2. +
  3. +

    Activate the new website by running the following command:

    +
  4. +
+

nginx -s reload or systemctl restart nginx

+
    +
  1. Once NGINX restarts, you should be able to access the proxy website at https://netalertx/
  2. +
+


+

NGINX HTTPS Configuration (Sub Path)

+
    +
  1. +

    On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

    +
  2. +
  3. +

    In this file, paste the following code:

    +
  4. +
+
   server {
+     listen 443 ssl;
+     server_name netalertx;
+     ssl_certificate /etc/ssl/certs/netalertx.pem;
+     ssl_certificate_key /etc/ssl/private/netalertx.key;
+
+     location ^~ /netalertx/ {
+          proxy_set_header Host $host;
+          proxy_pass http://localhost:20211/;
+          proxy_redirect ~^/(.*)$ /netalertx/$1;
+          rewrite ^/netalertx/?(.*)$ /$1 break;
+     }
+    }
+
+
    +
  1. +

    Check your config with nginx -t. If there are any issues, it will tell you.

    +
  2. +
  3. +

    Activate the new website by running the following command:

    +
  4. +
+

nginx -s reload or systemctl restart nginx

+
    +
  1. Once NGINX restarts, you should be able to access the proxy website at https://netalertx/netalertx/
  2. +
+


+

NGINX HTTPS Configuration (Sub Path) with module ngx_http_sub_module

+
    +
  1. +

    On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

    +
  2. +
  3. +

    In this file, paste the following code:

    +
  4. +
+
   server {
+     listen 443 ssl;
+     server_name netalertx;
+     ssl_certificate /etc/ssl/certs/netalertx.pem;
+     ssl_certificate_key /etc/ssl/private/netalertx.key;
+
+     location ^~ /netalertx/ {
+          proxy_set_header Host $host;
+          proxy_pass http://localhost:20211/;
+          proxy_redirect ~^/(.*)$ /netalertx/$1;
+          rewrite ^/netalertx/?(.*)$ /$1 break;
+
+      sub_filter_once off;
+      sub_filter_types *;
+      sub_filter 'href="/' 'href="/netalertx/';
+      sub_filter '(?>$host)/css' '/netalertx/css';
+      sub_filter '(?>$host)/js'  '/netalertx/js';
+      sub_filter '/img' '/netalertx/img';
+      sub_filter '/lib' '/netalertx/lib';
+      sub_filter '/php' '/netalertx/php';
+     }
+    }
+
+
    +
  1. +

    Check your config with nginx -t. If there are any issues, it will tell you.

    +
  2. +
  3. +

    Activate the new website by running the following command:

    +
  4. +
+

nginx -s reload or systemctl restart nginx

+
    +
  1. Once NGINX restarts, you should be able to access the proxy website at https://netalertx/netalertx/
  2. +
+


+

Apache HTTP Configuration (Direct Path)

+
    +
  1. +

    On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

    +
  2. +
  3. +

    In this file, paste the following code:

    +
  4. +
+
    <VirtualHost *:80>
+         ServerName netalertx
+         ProxyPreserveHost On
+         ProxyPass / http://localhost:20211/
+         ProxyPassReverse / http://localhost:20211/
+    </VirtualHost>
+
+
    +
  1. +

    Check your config with httpd -t (or apache2ctl -t on Debian/Ubuntu). If there are any issues, it will tell you.

    +
  2. +
  3. +

    Activate the new website by running the following command:

    +
  4. +
+

a2ensite netalertx or service apache2 reload

+
    +
  1. Once Apache restarts, you should be able to access the proxy website at http://netalertx/
  2. +
+


+

Apache HTTP Configuration (Sub Path)

+
    +
  1. +

    On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

    +
  2. +
  3. +

    In this file, paste the following code:

    +
  4. +
+
    <VirtualHost *:80>
+         ServerName netalertx
+         ProxyPreserveHost On
+         <Location "/netalertx/">
+               ProxyPass http://localhost:20211/
+               ProxyPassReverse http://localhost:20211/
+         </Location>
+    </VirtualHost>
+
+
    +
  1. +

    Check your config with httpd -t (or apache2ctl -t on Debian/Ubuntu). If there are any issues, it will tell you.

    +
  2. +
  3. +

    Activate the new website by running the following command:

    +
  4. +
+

a2ensite netalertx or service apache2 reload

+
    +
  • Once Apache restarts, you should be able to access the proxy website at http://netalertx/netalertx/
  2. +
+


+

Apache HTTPS Configuration (Direct Path)

+
    +
  1. +

    On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

    +
  2. +
  3. +

    In this file, paste the following code:

    +
  4. +
+
    <VirtualHost *:443>
+         ServerName netalertx
+         SSLEngine On
+         SSLCertificateFile /etc/ssl/certs/netalertx.pem
+         SSLCertificateKeyFile /etc/ssl/private/netalertx.key
+         ProxyPreserveHost On
+         ProxyPass / http://localhost:20211/
+         ProxyPassReverse / http://localhost:20211/
+    </VirtualHost>
+
+
    +
  1. +

    Check your config with httpd -t (or apache2ctl -t on Debian/Ubuntu). If there are any issues, it will tell you.

    +
  2. +
  3. +

    Activate the new website by running the following command:

    +

    a2ensite netalertx or service apache2 reload

    +
  4. +
  5. +

    Once Apache restarts, you should be able to access the proxy website at https://netalertx/

    +
  6. +
+


+

Apache HTTPS Configuration (Sub Path)

+
    +
  1. +

    On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

    +
  2. +
  3. +

    In this file, paste the following code:

    +
  4. +
+
    <VirtualHost *:443>
+        ServerName netalertx
+        SSLEngine On
+        SSLCertificateFile /etc/ssl/certs/netalertx.pem
+        SSLCertificateKeyFile /etc/ssl/private/netalertx.key
+        ProxyPreserveHost On
+        <Location "/netalertx/">
+              ProxyPass http://localhost:20211/
+              ProxyPassReverse http://localhost:20211/
+        </Location>
+    </VirtualHost>
+
+
    +
  1. +

    Check your config with httpd -t (or apache2ctl -t on Debian/Ubuntu). If there are any issues, it will tell you.

    +
  2. +
  3. +

    Activate the new website by running the following command:

    +
  4. +
+

a2ensite netalertx or service apache2 reload

+
    +
  1. Once Apache restarts, you should be able to access the proxy website at https://netalertx/netalertx/
  2. +
+


+

Reverse proxy example by using LinuxServer's SWAG container.

+
+

Submitted by s33d1ing. 🙏

+
+

linuxserver/swag

+

In the SWAG container create /config/nginx/proxy-confs/netalertx.subfolder.conf with the following contents:

+
## Version 2023/02/05
+# make sure that your netalertx container is named netalertx
+# netalertx does not require a base url setting
+
+# Since NetAlertX uses a Host network, you may need to use the IP address of the system running NetAlertX for $upstream_app.
+
+location /netalertx {
+    return 301 $scheme://$host/netalertx/;
+}
+
+location ^~ /netalertx/ {
+    # enable the next two lines for http auth
+    #auth_basic "Restricted";
+    #auth_basic_user_file /config/nginx/.htpasswd;
+
+    # enable for ldap auth (requires ldap-server.conf in the server block)
+    #include /config/nginx/ldap-location.conf;
+
+    # enable for Authelia (requires authelia-server.conf in the server block)
+    #include /config/nginx/authelia-location.conf;
+
+    # enable for Authentik (requires authentik-server.conf in the server block)
+    #include /config/nginx/authentik-location.conf;
+
+    include /config/nginx/proxy.conf;
+    include /config/nginx/resolver.conf;
+
+    set $upstream_app netalertx;
+    set $upstream_port 20211;
+    set $upstream_proto http;
+
+    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
+    proxy_set_header Accept-Encoding "";
+
+    proxy_redirect ~^/(.*)$ /netalertx/$1;
+    rewrite ^/netalertx/?(.*)$ /$1 break;
+
+    sub_filter_once off;
+    sub_filter_types *;
+
+    sub_filter 'href="/' 'href="/netalertx/';
+
+    sub_filter '(?>$host)/css' '/netalertx/css';
+    sub_filter '(?>$host)/js'  '/netalertx/js';
+
+    sub_filter '/img' '/netalertx/img';
+    sub_filter '/lib' '/netalertx/lib';
+    sub_filter '/php' '/netalertx/php';
+}
+
+


+

Traefik

+
+

Submitted by Isegrimm 🙏 (based on this discussion)

+
+

Assuming the user already has a working Traefik setup, this is what's needed to make NetAlertX work at a URL like www.domain.com/netalertx/.

+

Note: Everything in these configs assumes 'www.domain.com' as your domainname and 'section31' as an arbitrary name for your certificate setup. You will have to substitute these with your own.

+

Also, I use the prefix 'netalertx'. If you want to use another prefix, change it in these files: dynamic.toml and default.

+

Content of my yaml-file (this is the generic Traefik config, which defines which ports to listen on, redirect http to https and sets up the certificate process). +It also contains Authelia, which I use for authentication. +This part contains nothing specific to NetAlertX.

+
version: '3.8'
+
+services:
+  traefik:
+    image: traefik
+    container_name: traefik
+    command:
+      - "--api=true"
+      - "--api.insecure=true"
+      - "--api.dashboard=true"
+      - "--entrypoints.web.address=:80"
+      - "--entrypoints.web.http.redirections.entryPoint.to=websecure"
+      - "--entrypoints.web.http.redirections.entryPoint.scheme=https"
+      - "--entrypoints.websecure.address=:443"
+      - "--providers.file.filename=/traefik-config/dynamic.toml"
+      - "--providers.file.watch=true"
+      - "--log.level=ERROR"
+      - "--certificatesresolvers.section31.acme.email=postmaster@domain.com"
+      - "--certificatesresolvers.section31.acme.storage=/traefik-config/acme.json"
+      - "--certificatesresolvers.section31.acme.httpchallenge=true"
+      - "--certificatesresolvers.section31.acme.httpchallenge.entrypoint=web"
+    ports:
+      - "80:80"
+      - "443:443"
+      - "8080:8080"
+    volumes:
+      - "/var/run/docker.sock:/var/run/docker.sock:ro"
+      - /appl/docker/traefik/config:/traefik-config
+    depends_on:
+      - authelia
+    restart: unless-stopped
+  authelia:
+    container_name: authelia
+    image: authelia/authelia:latest
+    ports:
+      - "9091:9091"
+    volumes:
+      - /appl/docker/authelia:/config
+    restart: unless-stopped
+
+

Snippet of the dynamic.toml file (referenced in the yml-file above) that defines the config for NetAlertX. The following are self-defined keywords, everything else is Traefik keywords:
- netalertx-router
- netalertx-service
- auth
- netalertx-stripprefix

+
[http.routers]
+  [http.routers.netalertx-router]
+    entryPoints = ["websecure"]
+    rule = "Host(`www.domain.com`) && PathPrefix(`/netalertx`)"
+    service = "netalertx-service"
+    middlewares = ["auth", "netalertx-stripprefix"]
+    [http.routers.netalertx-router.tls]
+       certResolver = "section31"
+       [[http.routers.netalertx-router.tls.domains]]
+         main = "www.domain.com"
+
+[http.services]
+  [http.services.netalertx-service]
+    [[http.services.netalertx-service.loadBalancer.servers]]
+      url = "http://internal-ip-address:20211/"
+
+[http.middlewares]
+  [http.middlewares.auth.forwardAuth]
+    address = "http://authelia:9091/api/verify?rd=https://www.domain.com/authelia/"
+    trustForwardHeader = true
+    authResponseHeaders = ["Remote-User", "Remote-Groups", "Remote-Name", "Remote-Email"]
+  [http.middlewares.netalertx-stripprefix.stripprefix]
+    prefixes = ["/netalertx"]
+    forceSlash = false
+
+
+

To make NetAlertX work with this setup I modified the default file at /etc/nginx/sites-available/default in the docker container by copying it to my local filesystem, adding the changes as specified by cvc90, and mounting the new file into the docker container, overwriting the original one. By mapping the file instead of changing it in place, the changes persist if an updated Docker image is pulled. This is also a downside when the default file is updated, so I only use this as a temporary solution until the Docker image is updated with this change.

+

Default-file:

+
server {
+    listen 80 default_server;
+    root /var/www/html;
+    index index.php;
+    #rewrite /netalertx/(.*) / permanent;
+    add_header X-Forwarded-Prefix "/netalertx" always;
+    proxy_set_header X-Forwarded-Prefix "/netalertx";
+
+  location ~* \.php$ {
+    fastcgi_pass unix:/run/php/php8.2-fpm.sock;
+    include         fastcgi_params;
+    fastcgi_param   SCRIPT_FILENAME    $document_root$fastcgi_script_name;
+    fastcgi_param   SCRIPT_NAME        $fastcgi_script_name;
+    fastcgi_connect_timeout 75;
+          fastcgi_send_timeout 600;
+          fastcgi_read_timeout 600;
+  }
+}
+
+

Mapping the updated file (on the local filesystem at /appl/docker/netalertx/default) into the docker container:

+
...
+  volumes:
+    - /appl/docker/netalertx/default:/etc/nginx/sites-available/default
+...
+
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/SECURITY/index.html b/SECURITY/index.html new file mode 100644 index 00000000..16e87cf7 --- /dev/null +++ b/SECURITY/index.html @@ -0,0 +1,4361 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Security Considerations - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

🧭 Responsibility Disclaimer

+

NetAlertX provides powerful tools for network scanning, presence detection, and automation. However, it is up to you—the deployer—to ensure that your instance is properly secured.

+

This includes (but is not limited to): +- Controlling who has access to the UI and API +- Following network and container security best practices +- Running NetAlertX only on networks where you have legal authorization +- Keeping your deployment up to date with the latest patches

+
+

NetAlertX is not responsible for misuse, misconfiguration, or unsecure deployments. Always test and secure your setup before exposing it to the outside world.

+
+

🔐 Securing Your NetAlertX Instance

+

NetAlertX is a powerful network scanning and automation framework. With that power comes responsibility. It is your responsibility to secure your deployment, especially if you're running it outside a trusted local environment.

+
+

⚠️ TL;DR – Key Security Recommendations

+
    +
  • NEVER expose NetAlertX directly to the internet without protection
  • +
  • ✅ Use a VPN or Tailscale to access remotely
  • +
  • ✅ Enable password protection for the web UI
  • +
  • ✅ Harden your container environment (e.g., no unnecessary privileges)
  • +
  • ✅ Use firewalls and IP whitelisting
  • +
  • ✅ Keep the software updated
  • +
  • ✅ Limit the scope of plugins and API keys
  • +
+
+

🔗 Access Control with VPN (or Tailscale)

+

NetAlertX is designed to be run on private LANs, not the open internet.

+

Recommended: Use a VPN to access NetAlertX from remote locations.

+

✅ Tailscale (Easy VPN Alternative)

+

Tailscale sets up a private mesh network between your devices. It's fast to configure and ideal for NetAlertX.
+👉 Get started with Tailscale

+
+

🔑 Web UI Password Protection

+

By default, NetAlertX does not require login. Before exposing the UI in any way:

+
    +
  1. +

    Enable password protection:
```ini
SETPWD_enable_password=true
SETPWD_password=your_secure_password
```

    +
  2. +
  3. +

    Passwords are stored as SHA256 hashes

    +
  4. +
  5. +

    Default password (if not changed): 123456 — change it ASAP!

    +
  6. +
+
+

To disable authenticated login, set SETPWD_enable_password=false in app.conf
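Since passwords are compared as SHA256 hashes (see above), here is a quick Python sketch to generate such a hash, e.g. to sanity-check a value:

```python
import hashlib

password = "your_secure_password"
print(hashlib.sha256(password.encode()).hexdigest())  # SHA256 hex digest of the password
```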

+
+
+

🔥 Additional Security Measures

+
    +
  • +

    Firewall / Network Rules
    + Restrict UI/API access to trusted IPs only.

    +
  • +
  • +

    Limit Docker Capabilities
    + Avoid --privileged. Use --cap-add=NET_RAW and others only if required by your scan method.

    +
  • +
  • +

    Keep NetAlertX Updated
    + Regular updates contain bug fixes and security patches.

    +
  • +
  • +

    Plugin Permissions
    + Disable unused plugins. Only install from trusted sources.

    +
  • +
  • +

    Use Read-Only API Keys
    + When integrating NetAlertX with other tools, scope keys tightly.

    +
  • +
+
+

🧱 Docker Hardening Tips

+
    +
  • Use read-only mount options where possible (:ro)
  • +
  • Avoid running as root unless absolutely necessary
  • +
  • Consider using docker scan or other container image vulnerability scanners
  • +
  • Run with --network host only on trusted networks and only if needed for ARP-based scans
  • +
+
+

📣 Responsible Disclosure

+

If you discover a vulnerability or security concern, please report it privately to:

+

📧 jokob@duck.com

+

We take security seriously and will work to patch confirmed issues promptly. Your help in responsible disclosure is appreciated!

+
+

By following these recommendations, you can ensure your NetAlertX deployment is both powerful and secure.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/SECURITY_FEATURES/index.html b/SECURITY_FEATURES/index.html new file mode 100644 index 00000000..40996371 --- /dev/null +++ b/SECURITY_FEATURES/index.html @@ -0,0 +1,4301 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Security Features - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

NetAlertX Security: A Layered Defense

+

Your network security monitor has the "keys to the kingdom," making it a prime target for attackers. If it gets compromised, the game is over.

+

NetAlertX is engineered from the ground up to prevent this. It's not just an app; it's a purpose-built security appliance. Its core design is built on a zero-trust philosophy, which is a modern way of saying we assume a breach will happen and plan for it. This isn't a single "lock on the door"; it's a "defense-in-depth" strategy, more like a medieval castle with a moat, high walls, and guards at every door.

+

Here’s a breakdown of the defensive layers you get, right out of the box using the default configuration.

+

Feature 1: The "Digital Concrete" Filesystem

+

Methodology: The core application and its system files are treated as immutable. Once built, the app's code is "set in concrete," preventing attackers from modifying it or planting malware.

+
    +
  • +

    Immutable Filesystem: At runtime, the container's entire filesystem is set to read_only: true. The application code, system libraries, and all other files are literally frozen. This single control neutralizes a massive range of common attacks.

    +
  • +
  • +

    "Ownership-as-a-Lock" Pattern: During the build, all system files are assigned to a special readonly user. This user has no login shell and no power to write to any files, even its own. It’s a clever, defense-in-depth locking mechanism.

    +
  • +
  • +

    Data Segregation: All user-specific data (like configurations and the device database) is stored completely outside the container in Docker volumes. The application is disposable; the data is persistent.

    +
  • +
+

What's this mean to you: Even if an attacker gets in, they cannot modify the application code or plant malware. It's like the app is set in digital concrete.

+

Feature 2: Surgical, "Keycard-Only" Access

+

Methodology: The principle of least privilege is strictly enforced. Every process gets only the absolute minimum set of permissions needed for its specific job.

+
    +
  • +

    Non-Privileged Execution: The entire NetAlertX stack runs as a dedicated, low-power, non-root user (netalertx). No "god mode" privileges are available to the application.

    +
  • +
  • +

    Kernel-Level Capability Revocation: The container is launched with cap_drop: - ALL, which tells the Linux kernel to revoke all "root-like" special powers.

    +
  • +
  • +

    Binary-Specific Privileges (setcap): This is the "keycard" metaphor in action. After revoking all powers, the system uses setcap to grant specific, necessary permissions only to the binaries that absolutely require them (like nmap and arp-scan). This means that even if an attacker compromises the web server, they can't start scanning the network. The web server's "keycard" doesn't open the "scanning" door.

    +
  • +
+

What's this mean to you: A security breach is firewalled. An attacker who gets into the web UI does not have the "keycard" to start scanning your network or take over the system. The breach is contained.

+

Feature 3: Attack Surface "Amputation"

+

Methodology: The potential attack surface is aggressively minimized by removing every non-essential tool an attacker would want to use.

+
    +
  • +

    Package Manager Removal: The hardened build stage explicitly deletes the Alpine package manager (apk del apk-tools). This makes it impossible for an attacker to simply apk add their malicious toolkit.

    +
  • +
  • +

    sudo Neutralization: All sudo configurations are removed, and the /usr/bin/sudo command is replaced with a non-functional shim. Any attempt to escalate privileges this way will fail.

    +
  • +
  • +

    Build Toolchain Elimination: The Dockerfile uses a multi-stage build. The initial "builder" stage, which contains all the powerful compilers (gcc) and development tools, is completely discarded. The final production image is lean and contains no build tools.

    +
  • +
  • +

    Minimal User & Group Files: The hardened stage scrubs the system's passwd and group files, removing all default system users to minimize potential avenues for privilege escalation.

    +
  • +
+

What's this mean to you: An attacker who breaks in finds themselves in an empty room with no tools. They have no sudo to get more power, no package manager to download weapons, and no compilers to build new ones.

+

Feature 4: "Self-Cleaning" Writable Areas

+

Methodology: All writable locations are treated as untrusted, temporary, and non-executable by default.

+
    +
  • +

    In-Memory Volatile Storage: The docker-compose.yml configuration maps all temporary directories (e.g., /tmp/log, /tmp/api, /tmp) to in-memory tmpfs filesystems. They do not exist on the host's disk.

    +
  • +
  • +

    Volatile Data: Because these locations exist only in RAM, their contents are instantly and irrevocably erased when the container is stopped. This provides a "self-cleaning" mechanism that purges any attacker-dropped files or payloads on every single restart.

    +
  • +
  • +

    Secure Mount Flags: These in-memory mounts are configured with the noexec flag. This is a critical security control: it prohibits the execution of any binary or script from a location that is writable.

    +
  • +
+

What's this mean to you: Any malicious file an attacker does manage to drop is written in invisible, non-permanent ink. The file is written to RAM, not disk, so it vaporizes the instant you restart the container. Even worse for them, the noexec flag means they can't even run the file in the first place.

+

Feature 5: Built-in Resource Guardrails

+

Methodology: The container is constrained by resource limits to function as a "good citizen" on the host system. This prevents a compromised or runaway process from consuming excessive resources, a common vector for Denial of Service (DoS) attacks.

+
    +
  • +

    Process Limiting: The docker-compose.yml defines a pids_limit: 512. This directly mitigates "fork bomb" attacks, where a process attempts to crash the host by recursively spawning thousands of new processes.

    +
  • +
  • +

    Memory & CPU Limits: The configuration file defines strict resource limits to prevent any single process from exhausting the host's available system resources.

    +
  • +
+

What's this mean to you: NetAlertX is a "good neighbor" and can't be used to crash your host machine. Even if a process is compromised, it's in a digital straitjacket and cannot pull a "denial of service" attack by hogging all your CPU or memory.

+

Feature 6: The "Pre-Flight" Self-Check

+

Methodology: Before any services start, NetAlertX runs a comprehensive "pre-flight" check to ensure its own security and configuration are sound. It's like a built-in auditor who verifies its own defenses.

+
    +
  • +

    Active Self-Diagnosis: On every single boot, NetAlertX runs a series of startup pre-checks—and it's fast. The entire self-check process typically completes in less than a second, letting you get to the web UI in about three seconds from startup.

    +
  • +
  • +

    Validates Its Own Security: These checks actively inspect the other security features. For example, check-0-permissions.sh validates that all the "Digital Concrete" files are locked down and all the "Self-Cleaning" areas are writable, just as they should be. It also checks that the correct netalertx user is running the show, not root.

    +
  • +
  • +

    Catches Misconfigurations: This system acts as a "safety inspector" that catches misconfigurations before they can become security holes. If you've made a mistake in your configuration (like a bad folder permission or incorrect network mode), NetAlertX will tell you in the logs why it can't start, rather than just failing silently.

    +
  • +
+

What's this mean to you: The system is self-aware and checks its own work. You get instant feedback if a setting is wrong, and you get peace of mind on every single boot knowing all these security layers are active and verified, all in about one second.

+

Conclusion: Security by Default

+

No single security control is a silver bullet. The robust security posture of NetAlertX is achieved through defense in depth, layering these methodologies.

+

An adversary must not only gain initial access but must also find a way to write a payload to a non-executable, in-memory location, without access to any standard system tools, sudo, or a package manager. And they must do this while operating as an unprivileged user in a resource-limited environment where the application code is immutable and actively checks its own integrity on every boot.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/SESSION_INFO/index.html b/SESSION_INFO/index.html new file mode 100644 index 00000000..0bdc5988 --- /dev/null +++ b/SESSION_INFO/index.html @@ -0,0 +1,4284 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Session Info - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Sessions Section – Device View

+

The Sessions Section shows a device’s connection history. All data is automatically detected and cannot be edited.

+

Session info

+
+

Key Fields

+ + + + + + + + + + + + + + + + + + + + +
FieldDescriptionEditable?
First ConnectionThe first time the device was detected on the network.❌ Auto-detected
Last ConnectionThe most recent time the device was online.❌ Auto-detected
+
+

How Session Information Works

+

1. Detecting New Devices

+
    +
  • New devices are automatically detected when they first appear on the network.
  • +
  • A New Device record is created, capturing the MAC, IP, vendor, and detection time.
  • +
+

2. Recording Connection Sessions

+
    +
  • Every time a device connects, a session entry is created.
  • +
  • +

    Captured details include:

    +
  • +
  • +

    Connection type (wired or wireless)

    +
  • +
  • Connection time
  • +
  • Device details (MAC, IP, vendor)
  • +
+

3. Handling Missing or Conflicting Data

+
  • Triggers: Devices are flagged when session data is incomplete, inconsistent, or conflicting. Examples include:
      • Missing first or last connection timestamps
      • Overlapping session records
      • Sessions showing a device as connected and disconnected at the same time
  • System response:
      • Automatically highlights affected devices in the Sessions Section.
      • Attempts to infer missing information from available data, such as:
          • Estimating first or last connection times from nearby session events
          • Correcting overlapping session periods
          • Reconciling conflicting connection statuses
  • User impact:
      • Users do not need to manually fix session data.
      • The system ensures the device’s connection history remains as accurate as possible for monitoring and reporting.

4. Updating Sessions

+
  • Reconnect: Updates session with the new connection timestamp.
  • Disconnect: Records disconnection time and marks the device as offline.
+

This session information feeds directly into Monitoring → Presence, providing a live view of which devices are currently online.

+

Monitoring Device Presence

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/SETTINGS_SYSTEM/index.html b/SETTINGS_SYSTEM/index.html new file mode 100644 index 00000000..46956908 --- /dev/null +++ b/SETTINGS_SYSTEM/index.html @@ -0,0 +1,4305 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Settings - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+ +
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Settings

+ +

⚙ Setting system

+

This is an explanation of how settings are handled, intended for anyone thinking about writing their own plugin or contributing to the project.

+

If you are a user of the app, settings have a detailed description in the Settings section of the app. Open an issue if you'd like to clarify any of the settings.

+

🛢 Data storage

+

The source of truth for user-defined values is the app.conf file. Editing the file makes the App overwrite values in the Settings database table and in the table_settings.json file.

+

Settings database table

+

The Settings database table contains settings for App run purposes. The table is recreated every time the App restarts. The settings are loaded from the source-of-truth, that is the app.conf file. A high-level overview on the database structure can be found in the database documentation.

+

table_settings.json

+

This is the API endpoint that reflects the state of the Settings database table. Settings can be accessed with the:

+
    +
  • getSetting(key) JavaScript method
  • +
+

The json file is also cached on the client-side local storage of the browser.
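As a rough illustration of the JavaScript accessor (the setting key used here is just an example and the snippet assumes a page where the settings cache has been loaded), reading a cached setting in front-end code could look like this:

// Illustrative only - getSetting() returns the value from the cached settings JSON
var subnets = getSetting("SCAN_SUBNETS");
console.log(subnets);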

+

app.conf

+
+

Note

+

This is the source of truth for settings. User-defined values in this file always override default values specified in the Plugin definition.

+
+

The App generates two app.conf entries for every setting (since version 23.8+). One entry is the setting value, the second is the __metadata entry associated with the setting. This __metadata entry contains the full setting definition in JSON format. It is currently unused, but is intended to be used in the future to extend the Settings system.
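For illustration only, the pair of entries for a single setting might look roughly like this in app.conf. The exact key naming of the metadata entry and the fields inside the JSON are assumptions here; they come from the setting's definition:

SMTP_PORT=465
SMTP_PORT__metadata={"name": "SMTP_PORT", "default_value": 465, "group": "Email"}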

+

Plugin settings

+
+

Note

+

This is the preferred way of adding settings going forward. I'll likely be migrating all app settings into plugin-based settings.

+
+

Plugin settings are loaded dynamically from the config.json of individual plugins. If a setting isn't defined in the app.conf file, it is initialized via the default_value property of a setting from the config.json file. Check the Plugins documentation, section ⚙ Setting object structure for details on the structure of the setting.

+

Screen 1

+

Settings Process flow

+

The process flow is mostly managed by the initialise.py file.

+

The script is responsible for reading user-defined values from a configuration file (app.conf), initializing settings, and importing them into a database. It also handles plugins and their configurations.

+

Here's a high-level description of the code:

+
  1. Function Definitions:
      • ccd: This function is used to handle user-defined settings and configurations. It takes several parameters related to the setting's name, default value, input type, options, group, and more. It saves the settings and their metadata in different lists (conf.mySettingsSQLsafe and conf.mySettings).
      • importConfigs: This function is the main entry point of the script. It imports user settings from a configuration file, processes them, and saves them to the database.
      • read_config_file: This function reads the configuration file (app.conf) and returns a dictionary containing the key-value pairs from the file (a simplified parsing sketch is shown after this list).
  2. Importing Configuration and Initializing Settings:
      • The importConfigs function starts by checking the modification time of the configuration file to determine if it needs to be re-imported. If the file has not been modified since the last import, the function skips the import process.
      • The function reads the configuration file using the read_config_file function, which returns a dictionary of settings.
      • The script then initializes various user-defined settings using the ccd function, based on the values read from the configuration file. These settings are categorized into groups such as "General," "Email," "Webhooks," "Apprise," and more.
  3. Plugin Handling:
      • The script loads and handles plugins dynamically. It retrieves plugin configurations and iterates through each plugin.
      • For each plugin, it extracts the prefix and settings related to that plugin and processes them similarly to other user-defined settings.
      • It also handles scheduling for plugins with specific RUN_SCHD settings.
  4. Saving Settings to the Database:
      • The script clears the existing settings in the database and inserts the updated settings into the database using SQL queries.
  5. Updating the API and Performing Cleanup:
      • After importing the configurations, the script updates the API to reflect the changes in the settings.
      • It saves the current timestamp to determine the next import time.
      • Finally, it logs the successful import of the new configuration.
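To make the read_config_file step more concrete, here is a minimal, self-contained sketch of what parsing app.conf into a dictionary can look like. This is not the actual implementation from initialise.py; it only assumes that app.conf contains KEY=value lines with Python-style literal values, as in the examples elsewhere in these docs.

import ast

def read_config_file(path="app.conf"):
    """Parse app.conf into a dict of setting name -> value (simplified sketch)."""
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # skip blank lines, comments and anything that is not KEY=value
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            try:
                # values are written as Python literals, e.g. 'text', True, 465, ['a','b']
                settings[key.strip()] = ast.literal_eval(value.strip())
            except (ValueError, SyntaxError):
                # fall back to the raw string if the value is not a valid literal
                settings[key.strip()] = value.strip()
    return settings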
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/SMTP/index.html b/SMTP/index.html new file mode 100644 index 00000000..3a165e01 --- /dev/null +++ b/SMTP/index.html @@ -0,0 +1,4183 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Emails - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

📧 SMTP server guides

+

The SMTP plugin supports any SMTP server. Here are some commonly used services to help speed up your configuration.

+
+

Note

+

If you are using a self-hosted SMTP server, ssh into the container and verify (e.g. via ping) that your server is reachable from within the NetAlertX container. See also how to ssh into the container if you are running it as a Home Assistant addon.
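For example (the container name is assumed to be netalertx, and you should replace the hostname with your own SMTP server), a quick reachability check could look like this:

docker exec -it netalertx /bin/sh
ping -c 3 smtp.example.com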

+
+

Gmail

+
  1. Create an app password by following the instructions from Google (you need to enable 2FA for this to work): https://support.google.com/accounts/answer/185833
  2. Specify the following settings:
+
    SMTP_RUN='on_notification'
+    SMTP_SKIP_TLS=True
+    SMTP_FORCE_SSL=True 
+    SMTP_PORT=465
+    SMTP_SERVER='smtp.gmail.com'
+    SMTP_PASS='16-digit passcode from google'
+    SMTP_REPORT_TO='some_target_email@gmail.com'
+
+

Brevo

+

Brevo allows for 300 free emails per day as of the time of writing.

+
  1. Create an account on Brevo: https://www.brevo.com/free-smtp-server/
  2. Click your name -> SMTP & API
  3. Click Generate a new SMTP key
  4. Save the details and fill in the NetAlertX settings as below.
+
SMTP_SERVER='smtp-relay.brevo.com'
+SMTP_PORT=587
+SMTP_SKIP_LOGIN=False
+SMTP_USER='user@email.com'
+SMTP_PASS='xsmtpsib-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxx'
+SMTP_SKIP_TLS=False
+SMTP_FORCE_SSL=False
+SMTP_REPORT_TO='some_target_email@gmail.com'
+SMTP_REPORT_FROM='NetAlertX <user@email.com>'
+
+

GMX

+
  1. Go to your GMX account https://account.gmx.com
  2. Under Security Options enable 2FA (Two-factor authentication)
  3. Under Security Options generate an Application-specific password
  4. Home -> Email Settings -> POP3 & IMAP -> Enable access to this account via POP3 and IMAP
  5. In NetAlertX specify these settings:
+
    SMTP_RUN='on_notification'
+    SMTP_SERVER='mail.gmx.com'
+    SMTP_PORT=465
+    SMTP_USER='gmx_email@gmx.com'
+    SMTP_PASS='<your Application-specific password>'
+    SMTP_SKIP_TLS=True
+    SMTP_FORCE_SSL=True
+    SMTP_SKIP_LOGIN=False
+    SMTP_REPORT_FROM='gmx_email@gmx.com' # this has to be the same email as in SMTP_USER
+    SMTP_REPORT_TO='some_target_email@gmail.com'
+
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/SUBNETS/index.html b/SUBNETS/index.html new file mode 100644 index 00000000..903a71f5 --- /dev/null +++ b/SUBNETS/index.html @@ -0,0 +1,4333 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Subnets - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Subnets Configuration

+

You need to specify the network interface and the network mask. You can also configure multiple subnets and specify VLANs (see VLAN exceptions below).

+

ARPSCAN can scan multiple networks if the network allows it. To scan networks directly, the subnets must be accessible from the network where NetAlertX is running. This means NetAlertX needs to have access to the interface attached to that subnet.

+
+

Warning

+

If you don't see all expected devices, run the following command in the NetAlertX container (replace the interface and IP mask):

sudo arp-scan --interface=eth0 192.168.1.0/24

+

If this command returns no results, the network is not accessible due to your network or firewall restrictions (Wi-Fi Extenders, VPNs and inaccessible networks). If direct scans are not possible, check the remote networks documentation for workarounds.

+
+

Example Values

+
+

Note

+

Please use the UI to configure settings as it ensures the config file is in the correct format. Edit app.conf directly only when really necessary.
+Settings location

+
+
Examples for one and two subnets:

  • One subnet: SCAN_SUBNETS = ['192.168.1.0/24 --interface=eth0']
  • Two subnets: SCAN_SUBNETS = ['192.168.1.0/24 --interface=eth0','192.168.1.0/24 --interface=eth1 --vlan=107']
+
+

Tip

+

When adding more subnets, you may need to increase both the scan interval (ARPSCAN_RUN_SCHD) and the timeout (ARPSCAN_RUN_TIMEOUT)—as well as similar settings for related plugins.

+

If the timeout is too short, you may see timeout errors in the log. To prevent the application from hanging due to unresponsive plugins, scans are canceled when they exceed the timeout limit.

+

To fix this:
- Reduce the subnet size (e.g., change /16 to /24).
- Increase the timeout (e.g., set ARPSCAN_RUN_TIMEOUT to 300 for a 5-minute timeout).
- Extend the scan interval (e.g., set ARPSCAN_RUN_SCHD to */10 * * * * to scan every 10 minutes).

+

For more troubleshooting tips, see Debugging Plugins.

+
+
+

Explanation

+

Network Mask

+

Example value: 192.168.1.0/24

+

The arp-scan time itself depends on the number of IP addresses to check.

+
+

The number of IPs to check depends on the network mask you set in the SCAN_SUBNETS setting.
+For example, a /24 mask results in 256 IPs to check, whereas a /16 mask checks around 65,536 IPs. Each IP takes a couple of seconds, so an incorrect configuration could make arp-scan take hours instead of seconds.

+
+

Specify the network filter, which significantly speeds up the scan process. For example, the filter 192.168.1.0/24 covers IP ranges from 192.168.1.0 to 192.168.1.255.

+

Network Interface (Adapter)

+

Example value: --interface=eth0

+

The adapter will probably be eth0 or eth1. (Check System Info > Network Hardware, or run iwconfig in the container to find your interface name(s)).

+

Network hardware

+
+

Tip

+

As an alternative to iwconfig, run ip -o link show | awk -F': ' '!/lo|vir|docker/ {print $2}' in your container to find your interface name(s) (e.g.: eth0, eth1):

Synology-NAS:/# ip -o link show | awk -F': ' '!/lo|vir|docker/ {print $2}'
sit0@NONE
eth1
eth0

+
+

VLANs

+

Example value: --vlan=107

+
    +
  • Append --vlan=107 to the SCAN_SUBNETS field (e.g.: 192.168.1.0/24 --interface=vmbr0 --vlan=107) for multiple VLANs.
  • +
+

VLANs on a Hyper-V Setup

+
+

Community-sourced content by mscreations from this discussion.

+
+

Tested Setup: Bare Metal → Hyper-V on Win Server 2019 → Ubuntu 22.04 VM → Docker → NetAlertX.

+

Approach 1 (may cause issues):
+Configure multiple network adapters in Hyper-V with distinct VLANs connected to each one using Hyper-V's network setup. However, this action can potentially lead to the Docker host's inability to handle network traffic correctly. This might interfere with other applications such as Authentik.

+

Approach 2 (working example):

+

Network connections to switches are configured as trunk and allow all VLANs access to the server.

+

By default, Hyper-V only allows untagged packets through to the VM interface, blocking VLAN-tagged packets. To fix this, follow these steps:

+
    +
  1. Run the following command in PowerShell on the Hyper-V machine:
  2. +
+

Set-VMNetworkAdapterVlan -VMName <Docker VM Name> -Trunk -NativeVlanId 0 -AllowedVlanIdList "<comma separated list of vlans>"

+
    +
  1. Within the VM, set up sub-interfaces for each VLAN to enable scanning. On Ubuntu 22.04, Netplan can be used. In /etc/netplan/00-installer-config.yaml, add VLAN definitions:
  2. +
+

network:
  ethernets:
    eth0:
      dhcp4: yes
  vlans:
    eth0.2:
      id: 2
      link: eth0
      addresses: [ "192.168.2.2/24" ]
      routes:
        - to: 192.168.2.0/24
          via: 192.168.1.1

+
    +
  1. Run sudo netplan apply to activate the interfaces for scanning in NetAlertX.
  2. +
+

In this case, use 192.168.2.0/24 --interface=eth0.2 in NetAlertX.

+

VLAN Support & Exceptions

+

Please note the limited accessibility of macvlan containers when they are configured on the same computer as NetAlertX. This is general networking behavior, but feel free to clarify via a PR/issue.

+
    +
  • NetAlertX does not detect the macvlan container when it is running on the same computer.
  • +
  • NetAlertX recognizes the macvlan container when it is running on a different computer.
  • +
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/SYNOLOGY_GUIDE/index.html b/SYNOLOGY_GUIDE/index.html new file mode 100644 index 00000000..4611c179 --- /dev/null +++ b/SYNOLOGY_GUIDE/index.html @@ -0,0 +1,4160 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Synology Guide - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Installation on a Synology NAS

+

There are different ways to install NetAlertX on a Synology, including SSH-ing into the machine and using the command line. For this guide, we will use the Project option in Container manager.

+

Create the folder structure

+

The folders you are creating below will contain the configuration and the database. Back them up regularly.

+
    +
  1. Create a parent folder named netalertx
  2. +
  3. Create a db sub-folder
  4. +
+

Folder structure +Folder structure +Folder structure

+
    +
  1. Create a config sub-folder
  2. +
+

Folder structure

+
    +
  1. Note down the folders Locations:
  2. +
+

Getting the location +Getting the location

+
  1. Open Container manager -> Project and click Create.
  2. Fill in the details:
      • Project name: netalertx
      • Path: /app_storage/netalertx (will differ from yours)
  3. Paste in the following template:
+
version: "3"
+services:
+  netalertx:
+    container_name: netalertx
+    # use the below line if you want to test the latest dev image
+    # image: "ghcr.io/jokob-sk/netalertx-dev:latest"
+    image: "ghcr.io/jokob-sk/netalertx:latest"
+    network_mode: "host"
+    restart: unless-stopped
+    cap_drop:       # Drop all capabilities for enhanced security
+      - ALL
+    cap_add:        # Re-add necessary capabilities
+      - NET_RAW
+      - NET_ADMIN
+      - NET_BIND_SERVICE
+    volumes:
+      - /app_storage/netalertx:/data
+      # to sync with system time
+      - /etc/localtime:/etc/localtime:ro
+    tmpfs:
+      # All writable runtime state resides under /tmp; comment out to persist logs between restarts
+      - "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
+    environment:
+      - PORT=20211
+
+

Project settings

+
    +
  1. +

    Replace the paths to your volume and comment out unnecessary line(s):

    +
  2. +
  3. +

    This is only an example, your paths will differ.

    +
  4. +
+
 volumes:
+      - /volume1/app_storage/netalertx:/data
+
+

Adjusting docker-compose

+
    +
  1. (optional) Change the port number from 20211 to an unused port if this port is already used.
  2. +
  3. Build the project:
  4. +
+

Build

+
    +
  1. Navigate to <Synology URL>:20211 (or your custom port).
  2. +
  3. Read the Subnets and Plugins docs to complete your setup.
  4. +
+
+

Tip

+

If you are facing permissions issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the /local_data_dir/db and /local_data_dir/config folders (replace local_data_dir with the location where your /db and /config folders are located).

+

sudo chown -R 20211:20211 /local_data_dir

+

sudo chmod -R a+rwx /local_data_dir

+
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/UPDATES/index.html b/UPDATES/index.html new file mode 100644 index 00000000..9d49edf8 --- /dev/null +++ b/UPDATES/index.html @@ -0,0 +1,4548 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Docker Updates - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Docker Update Strategies to upgrade NetAlertX

+
+

Warning

+

For versions prior to v25.6.7, upgrade to version v25.5.24 first (docker pull ghcr.io/jokob-sk/netalertx:25.5.24), as later versions don't support a full upgrade. Alternatively, devices and settings can be migrated manually, e.g. via CSV import. See the Migration guide for details.

+
+

This guide outlines approaches for updating Docker containers, usually when upgrading to a newer version of NetAlertX. Each method offers different benefits depending on the situation. Here are the methods:

+
    +
  • Manual: Direct commands to stop, remove, and rebuild containers.
  • +
  • Dockcheck: Semi-automated with more control, suited for bulk updates.
  • +
  • Watchtower: Fully automated, runs continuously to check and update containers.
  • +
  • Portainer: Manual with UI.
  • +
+

You can choose any approach that fits your workflow.

+
+

In the examples I assume that the container name is netalertx and the image name is netalertx as well.

+
+
+

Note

+

See also Backup strategies to be on the safe side.

+
+

1. Manual Updates

+

Use this method when you need precise control over a single container or when dealing with a broken container that needs immediate attention.

Example Commands

+

To manually update the netalertx container, stop it, delete it, remove the old image, and start a fresh one with docker-compose.

+
# Stop the container
+sudo docker container stop netalertx
+
+# Remove the container
+sudo docker container rm netalertx
+
+# Remove the old image
+sudo docker image rm netalertx
+
+# Pull and start a new container
+sudo docker-compose up -d
+
+

Alternative: Force Pull with Docker Compose

+

You can also use --pull always to ensure Docker pulls the latest image before starting the container:

+
sudo docker-compose up --pull always -d
+
+

2. Dockcheck for Bulk Container Updates

+

Always check the Dockcheck docs if encountering issues with the guide below.

+

Dockcheck is a useful tool if you have multiple containers to update and some flexibility for handling potential issues that might arise during mass updates. Dockcheck allows you to inspect each container and decide when to update.

+

Example Workflow with Dockcheck

+

You might use Dockcheck to:

+
    +
  • Inspect container versions.
  • +
  • Pull the latest images in bulk.
  • +
  • Apply updates selectively.
  • +
+

Dockcheck can help streamline bulk updates, especially if you’re managing multiple containers.

+

Below is a script I use to run an update of the Dockcheck script and start a check for new containers:

+
cd /path/to/Docker &&
+rm dockcheck.sh &&
+wget https://raw.githubusercontent.com/mag37/dockcheck/main/dockcheck.sh &&
+sudo chmod +x dockcheck.sh &&
+sudo ./dockcheck.sh
+
+

3. Automated Updates with Watchtower

+

Always check the watchtower docs if encountering issues with the guide below.

+

Watchtower monitors your Docker containers and automatically updates them when new images are available. This is ideal for ongoing updates without manual intervention.

+

Setting Up Watchtower

+

1. Pull the Watchtower Image:

+
docker pull containrrr/watchtower
+
+

2. Run Watchtower to update all images:

+
docker run -d \
+  --name watchtower \
+  -v /var/run/docker.sock:/var/run/docker.sock \
+  containrrr/watchtower \
+  --interval 300 # Check for updates every 5 minutes
+
+

3. Run Watchtower to update only NetAlertX:

+

You can specify which containers to monitor by listing them. For example, to monitor netalertx only:

+
docker run -d \
+  --name watchtower \
+  -v /var/run/docker.sock:/var/run/docker.sock \
+  containrrr/watchtower netalertx
+
+
+

4. Portainer controlled image

+

This assumes you're using Portainer to manage Docker (or Docker Swarm) and want to pull the latest version of an image and redeploy the container.

+
+

Note

+ +
+

4.1 Steps to Update an Image in Portainer (Standalone Docker)

+
  1. Login to Portainer.
  2. Go to "Containers" in the left sidebar.
  3. Find the container you want to update, click its name.
  4. Click "Recreate" (top right).
  5. Tick: Pull latest image (this ensures Portainer fetches the newest version from Docker Hub or your registry).
  6. Click "Recreate" again.
  7. Wait for the container to be stopped, removed, and recreated with the updated image.
+

4.2 For Docker Swarm Services

+

If you're using Docker Swarm (under "Stacks" or "Services"):

+
  1. Go to "Stacks".
  2. Select the stack managing the container.
  3. Click "Editor" (or "Update the Stack").
  4. Add a version tag or use :latest if your image tag is latest (not recommended for production).
  5. Click "Update the Stack". ⚠ Portainer will not pull the new image unless the tag changes OR the stack is forced to recreate.
  6. If the image tag hasn't changed, go to "Services", find the service, and click "Force Update".
+

Summary

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Method | Type | Pros | Cons |
| --- | --- | --- | --- |
| Manual | CLI | Full control, no dependencies | Tedious for many containers |
| Dockcheck | CLI Script | Great for batch updates | Needs setup, semi-automated |
| Watchtower | Daemonized | Fully automated updates | Less control, version drift |
| Portainer | UI | Easy via web interface | No auto-updates |
+

These approaches allow you to maintain flexibility in how you update Docker containers, depending on the urgency and scale of the update.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/VERSIONS/index.html b/VERSIONS/index.html new file mode 100644 index 00000000..fdd6e532 --- /dev/null +++ b/VERSIONS/index.html @@ -0,0 +1,4165 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Versions - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Versions

+ +

Am I running the latest released version?

+

Since version 23.01.14 NetAlertX uses a simple timestamp-based version check to verify if a new version is available. You can check the current and past releases here, or have a look at what I'm currently working on.

+

If you are not on the latest version, the app will notify you that a new release is available in the following ways:

+

📧 Via email on a notification event

+

If any notification occurs and an email is sent, the email will contain a note that a new version is available. See the sample email below:

+

Sample email if a new version is available

+

🆕 In the UI

+

In the UI, via a notification icon and a custom message in the Maintenance section.

+

UI screenshot if a new version is available

+

For comparison, this is how the UI looks if you are on the latest stable image:

+

UI screenshot if on latest version

+

Implementation details

+

During the build, a /app/front/buildtimestamp.txt file is created. The app then periodically checks GitHub's REST-based JSON endpoint to see if a release with a newer timestamp is available (check the def isNewVersion: method for details).
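As a rough sketch of the idea only (this is not the actual isNewVersion implementation; the file format of buildtimestamp.txt and the repository URL are assumptions here), the comparison could look like this in Python:

import requests
from datetime import datetime, timezone

# Assumption: buildtimestamp.txt contains a Unix epoch written at image build time
with open("/app/front/buildtimestamp.txt") as f:
    build_time = datetime.fromtimestamp(int(f.read().strip()), tz=timezone.utc)

# Assumption: the latest release of this repository is what the app compares against
releases_url = "https://api.github.com/repos/jokob-sk/NetAlertX/releases/latest"
latest = requests.get(releases_url, timeout=10).json()
published = datetime.strptime(latest["published_at"], "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

new_version_available = published > build_time
print(new_version_available)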

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/WEBHOOK_N8N/index.html b/WEBHOOK_N8N/index.html new file mode 100644 index 00000000..6a081c9c --- /dev/null +++ b/WEBHOOK_N8N/index.html @@ -0,0 +1,4157 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Webhooks (n8n) - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Webhooks (n8n)

+ +

Create a simple n8n workflow

+
+

Note

+

You need to enable the WEBHOOK plugin first in order to follow this guide. See the Plugins guide for details.

+
+

n8n can be used for more advanced conditional notification use cases. For example, you may only want to be notified if two out of a specified list of devices are down. You can also use other plugins to process the notifications further. Below is a simple example of sending an email on a webhook.

+

n8n workflow

+

Specify your email template

+

See sample JSON if you want to see the JSON paths used in the email template below +Email template

+
Events count: {{ $json["body"]["attachments"][0]["text"]["events"].length }}
+New devices count: {{ $json["body"]["attachments"][0]["text"]["new_devices"].length }}
+
+

Get your webhook in n8n

+

n8n webhook URL

+

Configure NetAlertX to point to the above URL

+

NetAlertX config

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/WEBHOOK_SECRET/index.html b/WEBHOOK_SECRET/index.html new file mode 100644 index 00000000..376d8a98 --- /dev/null +++ b/WEBHOOK_SECRET/index.html @@ -0,0 +1,4193 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Webhook Secret - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Webhook Secrets

+
+

Note

+

You need to enable the WEBHOOK plugin first in order to follow this guide. See the Plugins guide for details.

+
+

How does the signing work?

+

NetAlertX will use the configured secret to create a hash signature of the request body. This SHA256-HMAC signature will appear in the X-Webhook-Signature header of each request to the webhook target URL. You can use the value of this header to validate the request was sent by NetAlertX.

+

Activating webhook signatures

+

All you need to do in order to add a signature to the request headers is to set the WEBHOOK_SECRET config value to a non-empty string.

+

Validating webhook deliveries

+

There are a few things to keep in mind when validating the webhook delivery:

+
    +
  • NetAlertX uses an HMAC hex digest to compute the hash
  • +
  • The signature in the X-Webhook-Signature header always starts with sha256=
  • +
  • The hash signature is generated using the configured WEBHOOK_SECRET and the request body.
  • +
  • Never use a plain == operator. Instead, consider using a method like secure_compare or crypto.timingSafeEqual, which performs a "constant time" string comparison to help mitigate certain timing attacks against regular equality operators, or regular loops in JIT-optimized languages.
  • +
+

Testing the webhook payload validation

+

You can use the following secret and payload to verify that your implementation is working correctly.

+

secret: 'this is my secret'

+

payload: '{"test":"this is a test body"}'

+

If your implementation is correct, the signature you generated should match the following:

+

signature: bed21fcc34f98e94fd71c7edb75e51a544b4a3b38b069ebaaeb19bf4be8147e9

+

X-Webhook-Signature: sha256=bed21fcc34f98e94fd71c7edb75e51a544b4a3b38b069ebaaeb19bf4be8147e9

+
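A minimal validation sketch in Python, using the test vector above (only the header name and the sha256= prefix come from this page; everything else is illustrative):

import hashlib
import hmac

secret = "this is my secret"
payload = '{"test":"this is a test body"}'
received_header = "sha256=bed21fcc34f98e94fd71c7edb75e51a544b4a3b38b069ebaaeb19bf4be8147e9"

# Compute the HMAC-SHA256 hex digest of the raw request body
expected = "sha256=" + hmac.new(secret.encode(), payload.encode(), hashlib.sha256).hexdigest()

# Constant-time comparison, as recommended above
print(hmac.compare_digest(expected, received_header))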

More information

+

If you want to learn more about webhook security, take a look at GitHub's webhook documentation.

+

You can find examples for validating a webhook delivery here.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/WEB_UI_PORT_DEBUG/index.html b/WEB_UI_PORT_DEBUG/index.html new file mode 100644 index 00000000..d4803eb9 --- /dev/null +++ b/WEB_UI_PORT_DEBUG/index.html @@ -0,0 +1,4324 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Web UI Port Issues - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Debugging inaccessible UI

+

The application uses the following default ports:

+
    +
  • Web UI: 20211
  • +
  • GraphQL API: 20212
  • +
+

The Web UI is served by an nginx server, while the API backend runs on a Flask (Python) server.

+

Changing Ports

+
  • To change the Web UI port, update the PORT environment variable in the docker-compose.yml file.
  • To change the GraphQL API port, use the GRAPHQL_PORT setting, either directly or via Docker:

    APP_CONF_OVERRIDE={"GRAPHQL_PORT":"20212"}

A compose fragment showing both settings together is included below.
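For example (a minimal sketch only; the values are the defaults mentioned above and the surrounding compose file is assumed to match the examples elsewhere in these docs):

services:
  netalertx:
    environment:
      - PORT=20211                                      # Web UI port
      - 'APP_CONF_OVERRIDE={"GRAPHQL_PORT":"20212"}'    # GraphQL API port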

For more information, check the Docker installation guide.

+

Possible issues and troubleshooting

+

Work through all of the steps below to rule out potential causes and troubleshoot these problems faster.

+

1. Port conflicts

+

When opening an issue or debugging:

+
  1. Include a screenshot of what you see when accessing http://<your_server>:20211 (or your custom port).
  2. Follow steps 1, 2, 3, 4 on this page.
  3. Execute the following in the container to see the processes and their ports, and submit a screenshot of the result:
      • sudo apk add lsof
      • sudo lsof -i
  4. Try running the nginx command in the container:
      • If you get nginx: [emerg] bind() to 0.0.0.0:20211 failed (98: Address in use), try using a different port number.
+

lsof ports

+

2. JavaScript issues

+

Check for browser console (F12 browser dev console) errors + check different browsers.

+

3. Clear the app cache and cached JavaScript files

+

Refresh the browser cache (usually shift + refresh), try a private window, or try different browsers. Please also refresh the app cache by clicking the 🔃 (reload) button in the header of the application.

+

4. Disable proxies

+

If you have any reverse proxy or similar, try disabling it.

+

5. Disable your firewall

+

If you are using a firewall, try temporarily disabling it.

+

6. Post your docker start details

+

If you haven't, post your docker compose/run command.

+

7. Check for errors in your PHP/NGINX error logs

+

In the container execute and investigate:

+

cat /var/log/nginx/error.log

+

cat /tmp/log/app.php_errors.log

+

8. Make sure permissions are correct

+
+

Tip

+

You can try to start the container without mapping the /data/config and /data/db dirs; if the UI shows up, the issue is most likely related to your file system permissions or file ownership.

+
+

Please read the Permissions troubleshooting guide and provide a screenshot of the permissions and ownership in the /data/db and app/config directories.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/WORKFLOWS/index.html b/WORKFLOWS/index.html new file mode 100644 index 00000000..06e8c1f3 --- /dev/null +++ b/WORKFLOWS/index.html @@ -0,0 +1,4335 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Workflows - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Workflows Overview

+

The workflows module allows you to automate repetitive tasks, making network management more efficient. Whether you need to assign newly discovered devices to a specific Network Node, auto-group devices from a given vendor, unarchive a device if it is detected online, or automatically delete devices, this module provides the flexibility to tailor the automations to your needs.

+

Workflows diagram

+

Below are a few examples that demonstrate how this module can be used to simplify network management tasks.

+

Updating Workflows

+
+

Note

+

In order to apply a workflow change, you must first Save the changes and then reload the application by clicking Restart server.

+
+

Workflow components

+

Triggers

+

Trigger example

+

Triggers define the event that activates a workflow. They monitor changes to objects within the system, such as updates to devices or the insertion of new entries. When the specified event occurs, the workflow is executed.

+
+

Tip

+

Workflows not running? Check the Workflows debugging guide for how to troubleshoot triggers and conditions.

+
+

Example Trigger:

+
    +
  • Object Type: Devices
  • +
  • Event Type: update
  • +
+

This trigger will activate when a Device object is updated.

+

Conditions

+

Conditions example

+

Conditions determine whether a workflow should proceed based on certain criteria. These criteria can be set for specific fields, such as whether a device is from a certain vendor, or whether it is new or archived. You can combine conditions using logical operators (AND, OR).

+
+

Tip

+

To better understand how to use specific Device fields, please read through the Database overview guide.

+
+

Example Condition:

+
    +
  • Logic: AND
  • +
  • Field: devVendor
  • +
  • Operator: contains (case-insensitive)
  • +
  • Value: Google
  • +
+

This condition checks if the device's vendor is Google. The workflow will only proceed if the condition is true.

+

Actions

+

Actions example

+

Actions define the tasks that the workflow will perform once the conditions are met. Actions can include updating fields or deleting devices.

+

You can include multiple actions that should execute once the conditions are met.

+

Example Action:

+
    +
  • Action Type: update_field
  • +
  • Field: devIsNew
  • +
  • Value: 0
  • +
+

This action updates the devIsNew field to 0, marking the device as no longer new.
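Put together, a trigger, condition, and action like the ones above form a workflow definition along these lines (same structure as on the Workflow Examples page; the name is just a placeholder):

{
  "name": "Mark Google devices as not new",
  "trigger": { "object_type": "Devices", "event_type": "update" },
  "conditions": [
    {
      "logic": "AND",
      "conditions": [
        { "field": "devVendor", "operator": "contains", "value": "Google" }
      ]
    }
  ],
  "actions": [
    { "type": "update_field", "field": "devIsNew", "value": "0" }
  ],
  "enabled": "Yes"
}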

+

Examples

+

You can find a couple of configuration examples in Workflow Examples.

+
+

Tip

+

Share your workflows in Discord or GitHub Discussions.

+
+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/WORKFLOWS_DEBUGGING/index.html b/WORKFLOWS_DEBUGGING/index.html new file mode 100644 index 00000000..4f2f96ab --- /dev/null +++ b/WORKFLOWS_DEBUGGING/index.html @@ -0,0 +1,4090 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Workflows Issues - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Workflows debugging and troubleshooting

+
+

Tip

+

Before troubleshooting, please ensure you have the right Debugging and LOG_LEVEL set.

+
+

Workflows are triggered by various events. These events are captured and listed in the Integrations -> App Events section of the application.

+

Troubleshooting triggers

+
+

Note

+

Workflow events are processed once every 5 seconds. However, if a scan or other background tasks are running, this can cause a delay of up to a few minutes.

+
+

If an event doesn't trigger a workflow as expected, check the App Events section for the event. You can filter these by the ID of the device (devMAC or devGUID).

+

App events search

+

Once you find the Event Guid and Object GUID, use them to find relevant debug entries.

+

Navigate to Maintenance -> Logs, where you can filter the logs based on the Event or Object GUID.

+

Log events search

+

Below you can find some example app.log entries that will help you understand why a Workflow was or was not triggered.

+
16:27:03 [WF] Checking if '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggers the workflow 'Sample Device Update Workflow'
+16:27:03 [WF] self.triggered 'False' for event '[[155], ['13f0ce26-1835-4c48-ae03-cdaf38f328fe'], [0], ['2025-04-02 05:26:56'], ['Devices'], ['050b6980-7af6-4409-950d-08e9786b7b33'], ['DEVICES'], ['00:11:32:ef:a5:6c'], ['192.168.1.82'], ['050b6980-7af6-4409-950d-08e9786b7b33'], [None], [0], [0], ['devPresentLastScan'], ['online'], ['update'], [None], [None], [None], [None]] and trigger {"object_type": "Devices", "event_type": "insert"}'
+16:27:03 [WF] Checking if '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggers the workflow 'Location Change'
+16:27:03 [WF] self.triggered 'True' for event '[[155], ['13f0ce26-1835-4c48-ae03-cdaf38f328fe'], [0], ['2025-04-02 05:26:56'], ['Devices'], ['050b6980-7af6-4409-950d-08e9786b7b33'], ['DEVICES'], ['00:11:32:ef:a5:6c'], ['192.168.1.82'], ['050b6980-7af6-4409-950d-08e9786b7b33'], [None], [0], [0], ['devPresentLastScan'], ['online'], ['update'], [None], [None], [None], [None]] and trigger {"object_type": "Devices", "event_type": "update"}'
+16:27:03 [WF] Event with GUID '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggered the workflow 'Location Change'
+
+

Note how one trigger executed but the other didn't, based on their different "event_type" values: one is "event_type": "insert", the other "event_type": "update".

+

Given that the Event is an update event (note ...['online'], ['update'], [None]... in the event structure), the "event_type": "insert" trigger didn't execute.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/WORKFLOW_EXAMPLES/index.html b/WORKFLOW_EXAMPLES/index.html new file mode 100644 index 00000000..e1d38e9f --- /dev/null +++ b/WORKFLOW_EXAMPLES/index.html @@ -0,0 +1,4586 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Workflow Examples - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Workflow examples

+

Workflows in NetAlertX automate actions based on real-time events and conditions. Below are practical examples that demonstrate how to build automation using triggers, conditions, and actions.

+

Example 1: Un-archive devices if detected online

+

This workflow automatically unarchives a device if it was previously archived but has now been detected as online.

+

📋 Use Case

+

Sometimes devices are manually archived (e.g., no longer expected on the network), but they reappear unexpectedly. This workflow reverses the archive status when such devices are detected during a scan.

+

⚙️ Workflow Configuration

+
{
+  "name": "Un-archive devices if detected online",
+  "trigger": {
+    "object_type": "Devices",
+    "event_type": "update"
+  },
+  "conditions": [
+    {
+      "logic": "AND",
+      "conditions": [
+        {
+          "field": "devIsArchived",
+          "operator": "equals",
+          "value": "1"
+        },
+        {
+          "field": "devPresentLastScan",
+          "operator": "equals",
+          "value": "1"
+        }
+      ]
+    }
+  ],
+  "actions": [
+    {
+      "type": "update_field",
+      "field": "devIsArchived",
+      "value": "0"
+    }
+  ],
+  "enabled": "Yes"
+}
+
+

🔍 Explanation

+
- Trigger: Listens for updates to device records.
+- Conditions:
+    - `devIsArchived` is `1` (archived).
+    - `devPresentLastScan` is `1` (device was detected in the latest scan).
+- Action: Updates the device to set `devIsArchived` to `0` (unarchived).
+
+

✅ Result

+

Whenever a previously archived device shows up during a network scan, it will be automatically unarchived — allowing it to reappear in your device lists and dashboards.

+


+
+

Example 2: Assign Device to Network Node Based on IP

+

This workflow assigns newly added devices with IP addresses in the 192.168.1.* range to a specific network node with MAC address 6c:6d:6d:6c:6c:6c.

+

📋 Use Case

+

When new devices join your network, assigning them to the correct network node is important for accurate topology and grouping. This workflow ensures devices in a specific subnet are automatically linked to the intended node.

+

⚙️ Workflow Configuration

+
{
+  "name": "Assign Device to Network Node Based on IP",
+  "trigger": {
+    "object_type": "Devices",
+    "event_type": "insert"
+  },
+  "conditions": [
+    {
+      "logic": "AND",
+      "conditions": [
+        {
+          "field": "devLastIP",
+          "operator": "contains",
+          "value": "192.168.1."
+        }
+      ]
+    }
+  ],
+  "actions": [
+    {
+      "type": "update_field",
+      "field": "devNetworkNode",
+      "value": "6c:6d:6d:6c:6c:6c"
+    }
+  ],
+  "enabled": "Yes"
+}
+
+

🔍 Explanation

+
  • Trigger: Activates when a new device is added.
  • Condition: devLastIP contains 192.168.1. (matches subnet).
  • Action: Sets devNetworkNode to the specified MAC address.

✅ Result

+

New devices with IPs in the 192.168.1.* subnet are automatically assigned to the correct network node, streamlining device organization and reducing manual work.

+
+

Example 3: Mark Device as Not New and Delete If from Google Vendor

+

This workflow automatically marks newly detected Google devices as not new and deletes them immediately.

+

📋 Use Case

+

You may want to automatically clear out newly detected Google devices (such as Chromecast or Google Home) if they’re not needed in your device database. This workflow handles that clean-up automatically.

+

⚙️ Workflow Configuration

+
{
+  "name": "Mark Device as Not New and Delete If from Google Vendor",
+  "trigger": {
+    "object_type": "Devices",
+    "event_type": "update"
+  },
+  "conditions": [
+    {
+      "logic": "AND",
+      "conditions": [
+        {
+          "field": "devVendor",
+          "operator": "contains",
+          "value": "Google"
+        },
+        {
+          "field": "devIsNew",
+          "operator": "equals",
+          "value": "1"
+        }
+      ]
+    }
+  ],
+  "actions": [
+    {
+      "type": "update_field",
+      "field": "devIsNew",
+      "value": "0"
+    },
+    {
+      "type": "delete_device"
+    }
+  ],
+  "enabled": "Yes"
+}
+
+

🔍 Explanation

+
  • Trigger: Runs on device updates.
  • Conditions:
      • Vendor contains Google.
      • Device is marked as new (devIsNew is 1).
  • Actions:
      • Set devIsNew to 0 (mark as not new).
      • Delete the device.

✅ Result

+

Any newly detected Google devices are cleaned up instantly — first marked as not new, then deleted — helping you avoid clutter in your device records.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/assets/images/favicon.png b/assets/images/favicon.png new file mode 100644 index 00000000..1cf13b9f Binary files /dev/null and b/assets/images/favicon.png differ diff --git a/assets/javascripts/bundle.e71a0d61.min.js b/assets/javascripts/bundle.e71a0d61.min.js new file mode 100644 index 00000000..c76b3b2b --- /dev/null +++ b/assets/javascripts/bundle.e71a0d61.min.js @@ -0,0 +1,16 @@ +"use strict";(()=>{var Zi=Object.create;var _r=Object.defineProperty;var ea=Object.getOwnPropertyDescriptor;var ta=Object.getOwnPropertyNames,Bt=Object.getOwnPropertySymbols,ra=Object.getPrototypeOf,Ar=Object.prototype.hasOwnProperty,bo=Object.prototype.propertyIsEnumerable;var ho=(e,t,r)=>t in e?_r(e,t,{enumerable:!0,configurable:!0,writable:!0,value:r}):e[t]=r,P=(e,t)=>{for(var r in t||(t={}))Ar.call(t,r)&&ho(e,r,t[r]);if(Bt)for(var r of Bt(t))bo.call(t,r)&&ho(e,r,t[r]);return e};var vo=(e,t)=>{var r={};for(var o in e)Ar.call(e,o)&&t.indexOf(o)<0&&(r[o]=e[o]);if(e!=null&&Bt)for(var o of Bt(e))t.indexOf(o)<0&&bo.call(e,o)&&(r[o]=e[o]);return r};var Cr=(e,t)=>()=>(t||e((t={exports:{}}).exports,t),t.exports);var oa=(e,t,r,o)=>{if(t&&typeof t=="object"||typeof t=="function")for(let n of ta(t))!Ar.call(e,n)&&n!==r&&_r(e,n,{get:()=>t[n],enumerable:!(o=ea(t,n))||o.enumerable});return e};var $t=(e,t,r)=>(r=e!=null?Zi(ra(e)):{},oa(t||!e||!e.__esModule?_r(r,"default",{value:e,enumerable:!0}):r,e));var go=(e,t,r)=>new Promise((o,n)=>{var i=c=>{try{a(r.next(c))}catch(p){n(p)}},s=c=>{try{a(r.throw(c))}catch(p){n(p)}},a=c=>c.done?o(c.value):Promise.resolve(c.value).then(i,s);a((r=r.apply(e,t)).next())});var xo=Cr((kr,yo)=>{(function(e,t){typeof kr=="object"&&typeof yo!="undefined"?t():typeof define=="function"&&define.amd?define(t):t()})(kr,(function(){"use strict";function e(r){var o=!0,n=!1,i=null,s={text:!0,search:!0,url:!0,tel:!0,email:!0,password:!0,number:!0,date:!0,month:!0,week:!0,time:!0,datetime:!0,"datetime-local":!0};function a(k){return!!(k&&k!==document&&k.nodeName!=="HTML"&&k.nodeName!=="BODY"&&"classList"in k&&"contains"in k.classList)}function c(k){var ut=k.type,je=k.tagName;return!!(je==="INPUT"&&s[ut]&&!k.readOnly||je==="TEXTAREA"&&!k.readOnly||k.isContentEditable)}function p(k){k.classList.contains("focus-visible")||(k.classList.add("focus-visible"),k.setAttribute("data-focus-visible-added",""))}function l(k){k.hasAttribute("data-focus-visible-added")&&(k.classList.remove("focus-visible"),k.removeAttribute("data-focus-visible-added"))}function f(k){k.metaKey||k.altKey||k.ctrlKey||(a(r.activeElement)&&p(r.activeElement),o=!0)}function u(k){o=!1}function d(k){a(k.target)&&(o||c(k.target))&&p(k.target)}function v(k){a(k.target)&&(k.target.classList.contains("focus-visible")||k.target.hasAttribute("data-focus-visible-added"))&&(n=!0,window.clearTimeout(i),i=window.setTimeout(function(){n=!1},100),l(k.target))}function S(k){document.visibilityState==="hidden"&&(n&&(o=!0),X())}function X(){document.addEventListener("mousemove",ee),document.addEventListener("mousedown",ee),document.addEventListener("mouseup",ee),document.addEventListener("pointermove",ee),document.addEventListener("pointerdown",ee),document.addEventListener("pointerup",ee),document.addEventListener("touchmove",ee),document.addEventListener("touchstart",ee),document.addEventListener("touchend",ee)}function 
c.hash="",history.replaceState({},"",`${c}`)}),_s(e,{viewport$:t,header$:r}).pipe(O(a=>i.next(a)),A(()=>i.complete()),m(a=>P({ref:e},a)))})}function As(e,{viewport$:t,main$:r,target$:o}){let n=t.pipe(m(({offset:{y:s}})=>s),ot(2,1),m(([s,a])=>s>a&&a>0),Y()),i=r.pipe(m(({active:s})=>s));return z([i,n]).pipe(m(([s,a])=>!(s&&a)),Y(),W(o.pipe(Ie(1))),ae(!0),vt({delay:250}),m(s=>({hidden:s})))}function Ii(e,{viewport$:t,header$:r,main$:o,target$:n}){let i=new T,s=i.pipe(oe(),ae(!0));return i.subscribe({next({hidden:a}){e.hidden=a,a?(e.setAttribute("tabindex","-1"),e.blur()):e.removeAttribute("tabindex")},complete(){e.style.top="",e.hidden=!0,e.removeAttribute("tabindex")}}),r.pipe(W(s),ne("height")).subscribe(({height:a})=>{e.style.top=`${a+16}px`}),h(e,"click").subscribe(a=>{a.preventDefault(),window.scrollTo({top:0})}),As(e,{viewport$:t,main$:o,target$:n}).pipe(O(a=>i.next(a)),A(()=>i.complete()),m(a=>P({ref:e},a)))}function Fi({document$:e,viewport$:t}){e.pipe(b(()=>M(".md-ellipsis")),J(r=>mt(r).pipe(W(e.pipe(Ie(1))),g(o=>o),m(()=>r),Ee(1))),g(r=>r.offsetWidth{let o=r.innerText,n=r.closest("a")||r;return n.title=o,V("content.tooltips")?Xe(n,{viewport$:t}).pipe(W(e.pipe(Ie(1))),A(()=>n.removeAttribute("title"))):y})).subscribe(),V("content.tooltips")&&e.pipe(b(()=>M(".md-status")),J(r=>Xe(r,{viewport$:t}))).subscribe()}function ji({document$:e,tablet$:t}){e.pipe(b(()=>M(".md-toggle--indeterminate")),O(r=>{r.indeterminate=!0,r.checked=!1}),J(r=>h(r,"change").pipe(Jr(()=>r.classList.contains("md-toggle--indeterminate")),m(()=>r))),te(t)).subscribe(([r,o])=>{r.classList.remove("md-toggle--indeterminate"),o&&(r.checked=!1)})}function Cs(){return/(iPad|iPhone|iPod)/.test(navigator.userAgent)}function Ui({document$:e}){e.pipe(b(()=>M("[data-md-scrollfix]")),O(t=>t.removeAttribute("data-md-scrollfix")),g(Cs),J(t=>h(t,"touchstart").pipe(m(()=>t)))).subscribe(t=>{let r=t.scrollTop;r===0?t.scrollTop=1:r+t.offsetHeight===t.scrollHeight&&(t.scrollTop=r-1)})}function Wi({viewport$:e,tablet$:t}){z([Je("search"),t]).pipe(m(([r,o])=>r&&!o),b(r=>$(r).pipe(nt(r?400:100))),te(e)).subscribe(([r,{offset:{y:o}}])=>{if(r)document.body.setAttribute("data-md-scrolllock",""),document.body.style.top=`-${o}px`;else{let n=-1*parseInt(document.body.style.top,10);document.body.removeAttribute("data-md-scrolllock"),document.body.style.top="",n&&window.scrollTo(0,n)}})}Object.entries||(Object.entries=function(e){let t=[];for(let r of Object.keys(e))t.push([r,e[r]]);return t});Object.values||(Object.values=function(e){let t=[];for(let r of Object.keys(e))t.push(e[r]);return t});typeof Element!="undefined"&&(Element.prototype.scrollTo||(Element.prototype.scrollTo=function(e,t){typeof e=="object"?(this.scrollLeft=e.left,this.scrollTop=e.top):(this.scrollLeft=e,this.scrollTop=t)}),Element.prototype.replaceWith||(Element.prototype.replaceWith=function(...e){let t=this.parentNode;if(t){e.length===0&&t.removeChild(this);for(let r=e.length-1;r>=0;r--){let o=e[r];typeof o=="string"?o=document.createTextNode(o):o.parentNode&&o.parentNode.removeChild(o),r?t.insertBefore(this.previousSibling,o):t.replaceChild(o,this)}}}));function ks(){return location.protocol==="file:"?_t(`${new URL("search/search_index.js",Or.base)}`).pipe(m(()=>__index),Z(1)):ze(new URL("search/search_index.json",Or.base))}document.documentElement.classList.remove("no-js");document.documentElement.classList.add("js");var ct=an(),Kt=bn(),Ht=yn(Kt),mo=hn(),ke=Ln(),Lr=Wt("(min-width: 60em)"),Vi=Wt("(min-width: 
76.25em)"),Ni=xn(),Or=Te(),zi=document.forms.namedItem("search")?ks():tt,fo=new T;di({alert$:fo});ui({document$:ct});var uo=new T,qi=kt(Or.base);V("navigation.instant")&&gi({sitemap$:qi,location$:Kt,viewport$:ke,progress$:uo}).subscribe(ct);var Di;((Di=Or.version)==null?void 0:Di.provider)==="mike"&&Ti({document$:ct});L(Kt,Ht).pipe(nt(125)).subscribe(()=>{at("drawer",!1),at("search",!1)});mo.pipe(g(({mode:e})=>e==="global")).subscribe(e=>{switch(e.type){case"p":case",":let t=ue("link[rel=prev]");typeof t!="undefined"&&st(t);break;case"n":case".":let r=ue("link[rel=next]");typeof r!="undefined"&&st(r);break;case"Enter":let o=Ne();o instanceof HTMLLabelElement&&o.click()}});Fi({viewport$:ke,document$:ct});ji({document$:ct,tablet$:Lr});Ui({document$:ct});Wi({viewport$:ke,tablet$:Lr});var ft=ai(Ce("header"),{viewport$:ke}),qt=ct.pipe(m(()=>Ce("main")),b(e=>pi(e,{viewport$:ke,header$:ft})),Z(1)),Hs=L(...me("consent").map(e=>An(e,{target$:Ht})),...me("dialog").map(e=>ni(e,{alert$:fo})),...me("palette").map(e=>li(e)),...me("progress").map(e=>mi(e,{progress$:uo})),...me("search").map(e=>_i(e,{index$:zi,keyboard$:mo})),...me("source").map(e=>$i(e))),$s=H(()=>L(...me("announce").map(e=>_n(e)),...me("content").map(e=>oi(e,{sitemap$:qi,viewport$:ke,target$:Ht,print$:Ni})),...me("content").map(e=>V("search.highlight")?Ai(e,{index$:zi,location$:Kt}):y),...me("header").map(e=>si(e,{viewport$:ke,header$:ft,main$:qt})),...me("header-title").map(e=>ci(e,{viewport$:ke,header$:ft})),...me("sidebar").map(e=>e.getAttribute("data-md-type")==="navigation"?eo(Vi,()=>lo(e,{viewport$:ke,header$:ft,main$:qt})):eo(Lr,()=>lo(e,{viewport$:ke,header$:ft,main$:qt}))),...me("tabs").map(e=>Pi(e,{viewport$:ke,header$:ft})),...me("toc").map(e=>Ri(e,{viewport$:ke,header$:ft,main$:qt,target$:Ht})),...me("top").map(e=>Ii(e,{viewport$:ke,header$:ft,main$:qt,target$:Ht})))),Ki=ct.pipe(b(()=>$s),Ve(Hs),Z(1));Ki.subscribe();window.document$=ct;window.location$=Kt;window.target$=Ht;window.keyboard$=mo;window.viewport$=ke;window.tablet$=Lr;window.screen$=Vi;window.print$=Ni;window.alert$=fo;window.progress$=uo;window.component$=Ki;})(); +//# sourceMappingURL=bundle.e71a0d61.min.js.map + diff --git a/assets/javascripts/bundle.e71a0d61.min.js.map b/assets/javascripts/bundle.e71a0d61.min.js.map new file mode 100644 index 00000000..23451b54 --- /dev/null +++ b/assets/javascripts/bundle.e71a0d61.min.js.map @@ -0,0 +1,7 @@ +{ + "version": 3, + "sources": ["node_modules/focus-visible/dist/focus-visible.js", "node_modules/escape-html/index.js", "node_modules/clipboard/dist/clipboard.js", "src/templates/assets/javascripts/bundle.ts", "node_modules/tslib/tslib.es6.mjs", "node_modules/rxjs/src/internal/util/isFunction.ts", "node_modules/rxjs/src/internal/util/createErrorClass.ts", "node_modules/rxjs/src/internal/util/UnsubscriptionError.ts", "node_modules/rxjs/src/internal/util/arrRemove.ts", "node_modules/rxjs/src/internal/Subscription.ts", "node_modules/rxjs/src/internal/config.ts", "node_modules/rxjs/src/internal/scheduler/timeoutProvider.ts", "node_modules/rxjs/src/internal/util/reportUnhandledError.ts", "node_modules/rxjs/src/internal/util/noop.ts", "node_modules/rxjs/src/internal/NotificationFactories.ts", "node_modules/rxjs/src/internal/util/errorContext.ts", "node_modules/rxjs/src/internal/Subscriber.ts", "node_modules/rxjs/src/internal/symbol/observable.ts", "node_modules/rxjs/src/internal/util/identity.ts", "node_modules/rxjs/src/internal/util/pipe.ts", "node_modules/rxjs/src/internal/Observable.ts", 
"node_modules/rxjs/src/internal/util/lift.ts", "node_modules/rxjs/src/internal/operators/OperatorSubscriber.ts", "node_modules/rxjs/src/internal/scheduler/animationFrameProvider.ts", "node_modules/rxjs/src/internal/util/ObjectUnsubscribedError.ts", "node_modules/rxjs/src/internal/Subject.ts", "node_modules/rxjs/src/internal/BehaviorSubject.ts", "node_modules/rxjs/src/internal/scheduler/dateTimestampProvider.ts", "node_modules/rxjs/src/internal/ReplaySubject.ts", "node_modules/rxjs/src/internal/scheduler/Action.ts", "node_modules/rxjs/src/internal/scheduler/intervalProvider.ts", "node_modules/rxjs/src/internal/scheduler/AsyncAction.ts", "node_modules/rxjs/src/internal/Scheduler.ts", "node_modules/rxjs/src/internal/scheduler/AsyncScheduler.ts", "node_modules/rxjs/src/internal/scheduler/async.ts", "node_modules/rxjs/src/internal/scheduler/QueueAction.ts", "node_modules/rxjs/src/internal/scheduler/QueueScheduler.ts", "node_modules/rxjs/src/internal/scheduler/queue.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameAction.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameScheduler.ts", "node_modules/rxjs/src/internal/scheduler/animationFrame.ts", "node_modules/rxjs/src/internal/observable/empty.ts", "node_modules/rxjs/src/internal/util/isScheduler.ts", "node_modules/rxjs/src/internal/util/args.ts", "node_modules/rxjs/src/internal/util/isArrayLike.ts", "node_modules/rxjs/src/internal/util/isPromise.ts", "node_modules/rxjs/src/internal/util/isInteropObservable.ts", "node_modules/rxjs/src/internal/util/isAsyncIterable.ts", "node_modules/rxjs/src/internal/util/throwUnobservableError.ts", "node_modules/rxjs/src/internal/symbol/iterator.ts", "node_modules/rxjs/src/internal/util/isIterable.ts", "node_modules/rxjs/src/internal/util/isReadableStreamLike.ts", "node_modules/rxjs/src/internal/observable/innerFrom.ts", "node_modules/rxjs/src/internal/util/executeSchedule.ts", "node_modules/rxjs/src/internal/operators/observeOn.ts", "node_modules/rxjs/src/internal/operators/subscribeOn.ts", "node_modules/rxjs/src/internal/scheduled/scheduleObservable.ts", "node_modules/rxjs/src/internal/scheduled/schedulePromise.ts", "node_modules/rxjs/src/internal/scheduled/scheduleArray.ts", "node_modules/rxjs/src/internal/scheduled/scheduleIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleAsyncIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleReadableStreamLike.ts", "node_modules/rxjs/src/internal/scheduled/scheduled.ts", "node_modules/rxjs/src/internal/observable/from.ts", "node_modules/rxjs/src/internal/observable/of.ts", "node_modules/rxjs/src/internal/observable/throwError.ts", "node_modules/rxjs/src/internal/util/EmptyError.ts", "node_modules/rxjs/src/internal/util/isDate.ts", "node_modules/rxjs/src/internal/operators/map.ts", "node_modules/rxjs/src/internal/util/mapOneOrManyArgs.ts", "node_modules/rxjs/src/internal/util/argsArgArrayOrObject.ts", "node_modules/rxjs/src/internal/util/createObject.ts", "node_modules/rxjs/src/internal/observable/combineLatest.ts", "node_modules/rxjs/src/internal/operators/mergeInternals.ts", "node_modules/rxjs/src/internal/operators/mergeMap.ts", "node_modules/rxjs/src/internal/operators/mergeAll.ts", "node_modules/rxjs/src/internal/operators/concatAll.ts", "node_modules/rxjs/src/internal/observable/concat.ts", "node_modules/rxjs/src/internal/observable/defer.ts", "node_modules/rxjs/src/internal/observable/fromEvent.ts", "node_modules/rxjs/src/internal/observable/fromEventPattern.ts", "node_modules/rxjs/src/internal/observable/timer.ts", 
"node_modules/rxjs/src/internal/observable/merge.ts", "node_modules/rxjs/src/internal/observable/never.ts", "node_modules/rxjs/src/internal/util/argsOrArgArray.ts", "node_modules/rxjs/src/internal/operators/filter.ts", "node_modules/rxjs/src/internal/observable/zip.ts", "node_modules/rxjs/src/internal/operators/audit.ts", "node_modules/rxjs/src/internal/operators/auditTime.ts", "node_modules/rxjs/src/internal/operators/bufferCount.ts", "node_modules/rxjs/src/internal/operators/catchError.ts", "node_modules/rxjs/src/internal/operators/scanInternals.ts", "node_modules/rxjs/src/internal/operators/combineLatest.ts", "node_modules/rxjs/src/internal/operators/combineLatestWith.ts", "node_modules/rxjs/src/internal/operators/debounce.ts", "node_modules/rxjs/src/internal/operators/debounceTime.ts", "node_modules/rxjs/src/internal/operators/defaultIfEmpty.ts", "node_modules/rxjs/src/internal/operators/take.ts", "node_modules/rxjs/src/internal/operators/ignoreElements.ts", "node_modules/rxjs/src/internal/operators/mapTo.ts", "node_modules/rxjs/src/internal/operators/delayWhen.ts", "node_modules/rxjs/src/internal/operators/delay.ts", "node_modules/rxjs/src/internal/operators/distinct.ts", "node_modules/rxjs/src/internal/operators/distinctUntilChanged.ts", "node_modules/rxjs/src/internal/operators/distinctUntilKeyChanged.ts", "node_modules/rxjs/src/internal/operators/throwIfEmpty.ts", "node_modules/rxjs/src/internal/operators/endWith.ts", "node_modules/rxjs/src/internal/operators/exhaustMap.ts", "node_modules/rxjs/src/internal/operators/finalize.ts", "node_modules/rxjs/src/internal/operators/first.ts", "node_modules/rxjs/src/internal/operators/takeLast.ts", "node_modules/rxjs/src/internal/operators/merge.ts", "node_modules/rxjs/src/internal/operators/mergeWith.ts", "node_modules/rxjs/src/internal/operators/repeat.ts", "node_modules/rxjs/src/internal/operators/scan.ts", "node_modules/rxjs/src/internal/operators/share.ts", "node_modules/rxjs/src/internal/operators/shareReplay.ts", "node_modules/rxjs/src/internal/operators/skip.ts", "node_modules/rxjs/src/internal/operators/skipUntil.ts", "node_modules/rxjs/src/internal/operators/startWith.ts", "node_modules/rxjs/src/internal/operators/switchMap.ts", "node_modules/rxjs/src/internal/operators/takeUntil.ts", "node_modules/rxjs/src/internal/operators/takeWhile.ts", "node_modules/rxjs/src/internal/operators/tap.ts", "node_modules/rxjs/src/internal/operators/throttle.ts", "node_modules/rxjs/src/internal/operators/throttleTime.ts", "node_modules/rxjs/src/internal/operators/withLatestFrom.ts", "node_modules/rxjs/src/internal/operators/zip.ts", "node_modules/rxjs/src/internal/operators/zipWith.ts", "src/templates/assets/javascripts/browser/document/index.ts", "src/templates/assets/javascripts/browser/element/_/index.ts", "src/templates/assets/javascripts/browser/element/focus/index.ts", "src/templates/assets/javascripts/browser/element/hover/index.ts", "src/templates/assets/javascripts/utilities/h/index.ts", "src/templates/assets/javascripts/utilities/round/index.ts", "src/templates/assets/javascripts/browser/script/index.ts", "src/templates/assets/javascripts/browser/element/size/_/index.ts", "src/templates/assets/javascripts/browser/element/size/content/index.ts", "src/templates/assets/javascripts/browser/element/offset/_/index.ts", "src/templates/assets/javascripts/browser/element/offset/content/index.ts", "src/templates/assets/javascripts/browser/element/visibility/index.ts", "src/templates/assets/javascripts/browser/toggle/index.ts", 
"src/templates/assets/javascripts/browser/keyboard/index.ts", "src/templates/assets/javascripts/browser/location/_/index.ts", "src/templates/assets/javascripts/browser/location/hash/index.ts", "src/templates/assets/javascripts/browser/media/index.ts", "src/templates/assets/javascripts/browser/request/index.ts", "src/templates/assets/javascripts/browser/viewport/offset/index.ts", "src/templates/assets/javascripts/browser/viewport/size/index.ts", "src/templates/assets/javascripts/browser/viewport/_/index.ts", "src/templates/assets/javascripts/browser/viewport/at/index.ts", "src/templates/assets/javascripts/browser/worker/index.ts", "src/templates/assets/javascripts/_/index.ts", "src/templates/assets/javascripts/components/_/index.ts", "src/templates/assets/javascripts/components/announce/index.ts", "src/templates/assets/javascripts/components/consent/index.ts", "src/templates/assets/javascripts/templates/tooltip/index.tsx", "src/templates/assets/javascripts/templates/annotation/index.tsx", "src/templates/assets/javascripts/templates/clipboard/index.tsx", "src/templates/assets/javascripts/templates/search/index.tsx", "src/templates/assets/javascripts/templates/source/index.tsx", "src/templates/assets/javascripts/templates/tabbed/index.tsx", "src/templates/assets/javascripts/templates/table/index.tsx", "src/templates/assets/javascripts/templates/version/index.tsx", "src/templates/assets/javascripts/components/tooltip2/index.ts", "src/templates/assets/javascripts/components/content/annotation/_/index.ts", "src/templates/assets/javascripts/components/content/annotation/list/index.ts", "src/templates/assets/javascripts/components/content/annotation/block/index.ts", "src/templates/assets/javascripts/components/content/code/_/index.ts", "src/templates/assets/javascripts/components/content/details/index.ts", "src/templates/assets/javascripts/components/content/link/index.ts", "src/templates/assets/javascripts/components/content/mermaid/index.css", "src/templates/assets/javascripts/components/content/mermaid/index.ts", "src/templates/assets/javascripts/components/content/table/index.ts", "src/templates/assets/javascripts/components/content/tabs/index.ts", "src/templates/assets/javascripts/components/content/_/index.ts", "src/templates/assets/javascripts/components/dialog/index.ts", "src/templates/assets/javascripts/components/tooltip/index.ts", "src/templates/assets/javascripts/components/header/_/index.ts", "src/templates/assets/javascripts/components/header/title/index.ts", "src/templates/assets/javascripts/components/main/index.ts", "src/templates/assets/javascripts/components/palette/index.ts", "src/templates/assets/javascripts/components/progress/index.ts", "src/templates/assets/javascripts/integrations/sitemap/index.ts", "src/templates/assets/javascripts/integrations/alternate/index.ts", "src/templates/assets/javascripts/integrations/clipboard/index.ts", "src/templates/assets/javascripts/integrations/instant/index.ts", "src/templates/assets/javascripts/integrations/search/highlighter/index.ts", "src/templates/assets/javascripts/integrations/search/worker/message/index.ts", "src/templates/assets/javascripts/integrations/search/worker/_/index.ts", "src/templates/assets/javascripts/integrations/version/findurl/index.ts", "src/templates/assets/javascripts/integrations/version/index.ts", "src/templates/assets/javascripts/components/search/query/index.ts", "src/templates/assets/javascripts/components/search/result/index.ts", "src/templates/assets/javascripts/components/search/share/index.ts", 
"src/templates/assets/javascripts/components/search/suggest/index.ts", "src/templates/assets/javascripts/components/search/_/index.ts", "src/templates/assets/javascripts/components/search/highlight/index.ts", "src/templates/assets/javascripts/components/sidebar/index.ts", "src/templates/assets/javascripts/components/source/facts/github/index.ts", "src/templates/assets/javascripts/components/source/facts/gitlab/index.ts", "src/templates/assets/javascripts/components/source/facts/_/index.ts", "src/templates/assets/javascripts/components/source/_/index.ts", "src/templates/assets/javascripts/components/tabs/index.ts", "src/templates/assets/javascripts/components/toc/index.ts", "src/templates/assets/javascripts/components/top/index.ts", "src/templates/assets/javascripts/patches/ellipsis/index.ts", "src/templates/assets/javascripts/patches/indeterminate/index.ts", "src/templates/assets/javascripts/patches/scrollfix/index.ts", "src/templates/assets/javascripts/patches/scrolllock/index.ts", "src/templates/assets/javascripts/polyfills/index.ts"], + "sourcesContent": ["(function (global, factory) {\n typeof exports === 'object' && typeof module !== 'undefined' ? factory() :\n typeof define === 'function' && define.amd ? define(factory) :\n (factory());\n}(this, (function () { 'use strict';\n\n /**\n * Applies the :focus-visible polyfill at the given scope.\n * A scope in this case is either the top-level Document or a Shadow Root.\n *\n * @param {(Document|ShadowRoot)} scope\n * @see https://github.com/WICG/focus-visible\n */\n function applyFocusVisiblePolyfill(scope) {\n var hadKeyboardEvent = true;\n var hadFocusVisibleRecently = false;\n var hadFocusVisibleRecentlyTimeout = null;\n\n var inputTypesAllowlist = {\n text: true,\n search: true,\n url: true,\n tel: true,\n email: true,\n password: true,\n number: true,\n date: true,\n month: true,\n week: true,\n time: true,\n datetime: true,\n 'datetime-local': true\n };\n\n /**\n * Helper function for legacy browsers and iframes which sometimes focus\n * elements like document, body, and non-interactive SVG.\n * @param {Element} el\n */\n function isValidFocusTarget(el) {\n if (\n el &&\n el !== document &&\n el.nodeName !== 'HTML' &&\n el.nodeName !== 'BODY' &&\n 'classList' in el &&\n 'contains' in el.classList\n ) {\n return true;\n }\n return false;\n }\n\n /**\n * Computes whether the given element should automatically trigger the\n * `focus-visible` class being added, i.e. 
whether it should always match\n * `:focus-visible` when focused.\n * @param {Element} el\n * @return {boolean}\n */\n function focusTriggersKeyboardModality(el) {\n var type = el.type;\n var tagName = el.tagName;\n\n if (tagName === 'INPUT' && inputTypesAllowlist[type] && !el.readOnly) {\n return true;\n }\n\n if (tagName === 'TEXTAREA' && !el.readOnly) {\n return true;\n }\n\n if (el.isContentEditable) {\n return true;\n }\n\n return false;\n }\n\n /**\n * Add the `focus-visible` class to the given element if it was not added by\n * the author.\n * @param {Element} el\n */\n function addFocusVisibleClass(el) {\n if (el.classList.contains('focus-visible')) {\n return;\n }\n el.classList.add('focus-visible');\n el.setAttribute('data-focus-visible-added', '');\n }\n\n /**\n * Remove the `focus-visible` class from the given element if it was not\n * originally added by the author.\n * @param {Element} el\n */\n function removeFocusVisibleClass(el) {\n if (!el.hasAttribute('data-focus-visible-added')) {\n return;\n }\n el.classList.remove('focus-visible');\n el.removeAttribute('data-focus-visible-added');\n }\n\n /**\n * If the most recent user interaction was via the keyboard;\n * and the key press did not include a meta, alt/option, or control key;\n * then the modality is keyboard. Otherwise, the modality is not keyboard.\n * Apply `focus-visible` to any current active element and keep track\n * of our keyboard modality state with `hadKeyboardEvent`.\n * @param {KeyboardEvent} e\n */\n function onKeyDown(e) {\n if (e.metaKey || e.altKey || e.ctrlKey) {\n return;\n }\n\n if (isValidFocusTarget(scope.activeElement)) {\n addFocusVisibleClass(scope.activeElement);\n }\n\n hadKeyboardEvent = true;\n }\n\n /**\n * If at any point a user clicks with a pointing device, ensure that we change\n * the modality away from keyboard.\n * This avoids the situation where a user presses a key on an already focused\n * element, and then clicks on a different element, focusing it with a\n * pointing device, while we still think we're in keyboard modality.\n * @param {Event} e\n */\n function onPointerDown(e) {\n hadKeyboardEvent = false;\n }\n\n /**\n * On `focus`, add the `focus-visible` class to the target if:\n * - the target received focus as a result of keyboard navigation, or\n * - the event target is an element that will likely require interaction\n * via the keyboard (e.g. 
a text box)\n * @param {Event} e\n */\n function onFocus(e) {\n // Prevent IE from focusing the document or HTML element.\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (hadKeyboardEvent || focusTriggersKeyboardModality(e.target)) {\n addFocusVisibleClass(e.target);\n }\n }\n\n /**\n * On `blur`, remove the `focus-visible` class from the target.\n * @param {Event} e\n */\n function onBlur(e) {\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (\n e.target.classList.contains('focus-visible') ||\n e.target.hasAttribute('data-focus-visible-added')\n ) {\n // To detect a tab/window switch, we look for a blur event followed\n // rapidly by a visibility change.\n // If we don't see a visibility change within 100ms, it's probably a\n // regular focus change.\n hadFocusVisibleRecently = true;\n window.clearTimeout(hadFocusVisibleRecentlyTimeout);\n hadFocusVisibleRecentlyTimeout = window.setTimeout(function() {\n hadFocusVisibleRecently = false;\n }, 100);\n removeFocusVisibleClass(e.target);\n }\n }\n\n /**\n * If the user changes tabs, keep track of whether or not the previously\n * focused element had .focus-visible.\n * @param {Event} e\n */\n function onVisibilityChange(e) {\n if (document.visibilityState === 'hidden') {\n // If the tab becomes active again, the browser will handle calling focus\n // on the element (Safari actually calls it twice).\n // If this tab change caused a blur on an element with focus-visible,\n // re-apply the class when the user switches back to the tab.\n if (hadFocusVisibleRecently) {\n hadKeyboardEvent = true;\n }\n addInitialPointerMoveListeners();\n }\n }\n\n /**\n * Add a group of listeners to detect usage of any pointing devices.\n * These listeners will be added when the polyfill first loads, and anytime\n * the window is blurred, so that they are active when the window regains\n * focus.\n */\n function addInitialPointerMoveListeners() {\n document.addEventListener('mousemove', onInitialPointerMove);\n document.addEventListener('mousedown', onInitialPointerMove);\n document.addEventListener('mouseup', onInitialPointerMove);\n document.addEventListener('pointermove', onInitialPointerMove);\n document.addEventListener('pointerdown', onInitialPointerMove);\n document.addEventListener('pointerup', onInitialPointerMove);\n document.addEventListener('touchmove', onInitialPointerMove);\n document.addEventListener('touchstart', onInitialPointerMove);\n document.addEventListener('touchend', onInitialPointerMove);\n }\n\n function removeInitialPointerMoveListeners() {\n document.removeEventListener('mousemove', onInitialPointerMove);\n document.removeEventListener('mousedown', onInitialPointerMove);\n document.removeEventListener('mouseup', onInitialPointerMove);\n document.removeEventListener('pointermove', onInitialPointerMove);\n document.removeEventListener('pointerdown', onInitialPointerMove);\n document.removeEventListener('pointerup', onInitialPointerMove);\n document.removeEventListener('touchmove', onInitialPointerMove);\n document.removeEventListener('touchstart', onInitialPointerMove);\n document.removeEventListener('touchend', onInitialPointerMove);\n }\n\n /**\n * When the polfyill first loads, assume the user is in keyboard modality.\n * If any event is received from a pointing device (e.g. 
mouse, pointer,\n * touch), turn off keyboard modality.\n * This accounts for situations where focus enters the page from the URL bar.\n * @param {Event} e\n */\n function onInitialPointerMove(e) {\n // Work around a Safari quirk that fires a mousemove on whenever the\n // window blurs, even if you're tabbing out of the page. \u00AF\\_(\u30C4)_/\u00AF\n if (e.target.nodeName && e.target.nodeName.toLowerCase() === 'html') {\n return;\n }\n\n hadKeyboardEvent = false;\n removeInitialPointerMoveListeners();\n }\n\n // For some kinds of state, we are interested in changes at the global scope\n // only. For example, global pointer input, global key presses and global\n // visibility change should affect the state at every scope:\n document.addEventListener('keydown', onKeyDown, true);\n document.addEventListener('mousedown', onPointerDown, true);\n document.addEventListener('pointerdown', onPointerDown, true);\n document.addEventListener('touchstart', onPointerDown, true);\n document.addEventListener('visibilitychange', onVisibilityChange, true);\n\n addInitialPointerMoveListeners();\n\n // For focus and blur, we specifically care about state changes in the local\n // scope. This is because focus / blur events that originate from within a\n // shadow root are not re-dispatched from the host element if it was already\n // the active element in its own scope:\n scope.addEventListener('focus', onFocus, true);\n scope.addEventListener('blur', onBlur, true);\n\n // We detect that a node is a ShadowRoot by ensuring that it is a\n // DocumentFragment and also has a host property. This check covers native\n // implementation and polyfill implementation transparently. If we only cared\n // about the native implementation, we could just check if the scope was\n // an instance of a ShadowRoot.\n if (scope.nodeType === Node.DOCUMENT_FRAGMENT_NODE && scope.host) {\n // Since a ShadowRoot is a special kind of DocumentFragment, it does not\n // have a root element to add a class to. So, we add this attribute to the\n // host element instead:\n scope.host.setAttribute('data-js-focus-visible', '');\n } else if (scope.nodeType === Node.DOCUMENT_NODE) {\n document.documentElement.classList.add('js-focus-visible');\n document.documentElement.setAttribute('data-js-focus-visible', '');\n }\n }\n\n // It is important to wrap all references to global window and document in\n // these checks to support server-side rendering use cases\n // @see https://github.com/WICG/focus-visible/issues/199\n if (typeof window !== 'undefined' && typeof document !== 'undefined') {\n // Make the polyfill helper globally available. 
This can be used as a signal\n // to interested libraries that wish to coordinate with the polyfill for e.g.,\n // applying the polyfill to a shadow root:\n window.applyFocusVisiblePolyfill = applyFocusVisiblePolyfill;\n\n // Notify interested libraries of the polyfill's presence, in case the\n // polyfill was loaded lazily:\n var event;\n\n try {\n event = new CustomEvent('focus-visible-polyfill-ready');\n } catch (error) {\n // IE11 does not support using CustomEvent as a constructor directly:\n event = document.createEvent('CustomEvent');\n event.initCustomEvent('focus-visible-polyfill-ready', false, false, {});\n }\n\n window.dispatchEvent(event);\n }\n\n if (typeof document !== 'undefined') {\n // Apply the polyfill to the global document, so that no JavaScript\n // coordination is required to use the polyfill in the top-level document:\n applyFocusVisiblePolyfill(document);\n }\n\n})));\n", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? 
html + str.substring(lastIndex, index)\n : html;\n}\n", "/*!\n * clipboard.js v2.0.11\n * https://clipboardjs.com/\n *\n * Licensed MIT \u00A9 Zeno Rocha\n */\n(function webpackUniversalModuleDefinition(root, factory) {\n\tif(typeof exports === 'object' && typeof module === 'object')\n\t\tmodule.exports = factory();\n\telse if(typeof define === 'function' && define.amd)\n\t\tdefine([], factory);\n\telse if(typeof exports === 'object')\n\t\texports[\"ClipboardJS\"] = factory();\n\telse\n\t\troot[\"ClipboardJS\"] = factory();\n})(this, function() {\nreturn /******/ (function() { // webpackBootstrap\n/******/ \tvar __webpack_modules__ = ({\n\n/***/ 686:\n/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n\n// EXPORTS\n__webpack_require__.d(__webpack_exports__, {\n \"default\": function() { return /* binding */ clipboard; }\n});\n\n// EXTERNAL MODULE: ./node_modules/tiny-emitter/index.js\nvar tiny_emitter = __webpack_require__(279);\nvar tiny_emitter_default = /*#__PURE__*/__webpack_require__.n(tiny_emitter);\n// EXTERNAL MODULE: ./node_modules/good-listener/src/listen.js\nvar listen = __webpack_require__(370);\nvar listen_default = /*#__PURE__*/__webpack_require__.n(listen);\n// EXTERNAL MODULE: ./node_modules/select/src/select.js\nvar src_select = __webpack_require__(817);\nvar select_default = /*#__PURE__*/__webpack_require__.n(src_select);\n;// CONCATENATED MODULE: ./src/common/command.js\n/**\n * Executes a given operation type.\n * @param {String} type\n * @return {Boolean}\n */\nfunction command(type) {\n try {\n return document.execCommand(type);\n } catch (err) {\n return false;\n }\n}\n;// CONCATENATED MODULE: ./src/actions/cut.js\n\n\n/**\n * Cut action wrapper.\n * @param {String|HTMLElement} target\n * @return {String}\n */\n\nvar ClipboardActionCut = function ClipboardActionCut(target) {\n var selectedText = select_default()(target);\n command('cut');\n return selectedText;\n};\n\n/* harmony default export */ var actions_cut = (ClipboardActionCut);\n;// CONCATENATED MODULE: ./src/common/create-fake-element.js\n/**\n * Creates a fake textarea element with a value.\n * @param {String} value\n * @return {HTMLElement}\n */\nfunction createFakeElement(value) {\n var isRTL = document.documentElement.getAttribute('dir') === 'rtl';\n var fakeElement = document.createElement('textarea'); // Prevent zooming on iOS\n\n fakeElement.style.fontSize = '12pt'; // Reset box model\n\n fakeElement.style.border = '0';\n fakeElement.style.padding = '0';\n fakeElement.style.margin = '0'; // Move element out of screen horizontally\n\n fakeElement.style.position = 'absolute';\n fakeElement.style[isRTL ? 
'right' : 'left'] = '-9999px'; // Move element to the same position vertically\n\n var yPosition = window.pageYOffset || document.documentElement.scrollTop;\n fakeElement.style.top = \"\".concat(yPosition, \"px\");\n fakeElement.setAttribute('readonly', '');\n fakeElement.value = value;\n return fakeElement;\n}\n;// CONCATENATED MODULE: ./src/actions/copy.js\n\n\n\n/**\n * Create fake copy action wrapper using a fake element.\n * @param {String} target\n * @param {Object} options\n * @return {String}\n */\n\nvar fakeCopyAction = function fakeCopyAction(value, options) {\n var fakeElement = createFakeElement(value);\n options.container.appendChild(fakeElement);\n var selectedText = select_default()(fakeElement);\n command('copy');\n fakeElement.remove();\n return selectedText;\n};\n/**\n * Copy action wrapper.\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @return {String}\n */\n\n\nvar ClipboardActionCopy = function ClipboardActionCopy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n var selectedText = '';\n\n if (typeof target === 'string') {\n selectedText = fakeCopyAction(target, options);\n } else if (target instanceof HTMLInputElement && !['text', 'search', 'url', 'tel', 'password'].includes(target === null || target === void 0 ? void 0 : target.type)) {\n // If input type doesn't support `setSelectionRange`. Simulate it. https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputElement/setSelectionRange\n selectedText = fakeCopyAction(target.value, options);\n } else {\n selectedText = select_default()(target);\n command('copy');\n }\n\n return selectedText;\n};\n\n/* harmony default export */ var actions_copy = (ClipboardActionCopy);\n;// CONCATENATED MODULE: ./src/actions/default.js\nfunction _typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { _typeof = function _typeof(obj) { return typeof obj; }; } else { _typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return _typeof(obj); }\n\n\n\n/**\n * Inner function which performs selection from either `text` or `target`\n * properties and then executes copy or cut operations.\n * @param {Object} options\n */\n\nvar ClipboardActionDefault = function ClipboardActionDefault() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n // Defines base properties passed from constructor.\n var _options$action = options.action,\n action = _options$action === void 0 ? 'copy' : _options$action,\n container = options.container,\n target = options.target,\n text = options.text; // Sets the `action` to be performed which can be either 'copy' or 'cut'.\n\n if (action !== 'copy' && action !== 'cut') {\n throw new Error('Invalid \"action\" value, use either \"copy\" or \"cut\"');\n } // Sets the `target` property using an element that will be have its content copied.\n\n\n if (target !== undefined) {\n if (target && _typeof(target) === 'object' && target.nodeType === 1) {\n if (action === 'copy' && target.hasAttribute('disabled')) {\n throw new Error('Invalid \"target\" attribute. Please use \"readonly\" instead of \"disabled\" attribute');\n }\n\n if (action === 'cut' && (target.hasAttribute('readonly') || target.hasAttribute('disabled'))) {\n throw new Error('Invalid \"target\" attribute. 
You can\\'t cut text from elements with \"readonly\" or \"disabled\" attributes');\n }\n } else {\n throw new Error('Invalid \"target\" value, use a valid Element');\n }\n } // Define selection strategy based on `text` property.\n\n\n if (text) {\n return actions_copy(text, {\n container: container\n });\n } // Defines which selection strategy based on `target` property.\n\n\n if (target) {\n return action === 'cut' ? actions_cut(target) : actions_copy(target, {\n container: container\n });\n }\n};\n\n/* harmony default export */ var actions_default = (ClipboardActionDefault);\n;// CONCATENATED MODULE: ./src/clipboard.js\nfunction clipboard_typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { clipboard_typeof = function _typeof(obj) { return typeof obj; }; } else { clipboard_typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return clipboard_typeof(obj); }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }\n\nfunction _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function\"); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, writable: true, configurable: true } }); if (superClass) _setPrototypeOf(subClass, superClass); }\n\nfunction _setPrototypeOf(o, p) { _setPrototypeOf = Object.setPrototypeOf || function _setPrototypeOf(o, p) { o.__proto__ = p; return o; }; return _setPrototypeOf(o, p); }\n\nfunction _createSuper(Derived) { var hasNativeReflectConstruct = _isNativeReflectConstruct(); return function _createSuperInternal() { var Super = _getPrototypeOf(Derived), result; if (hasNativeReflectConstruct) { var NewTarget = _getPrototypeOf(this).constructor; result = Reflect.construct(Super, arguments, NewTarget); } else { result = Super.apply(this, arguments); } return _possibleConstructorReturn(this, result); }; }\n\nfunction _possibleConstructorReturn(self, call) { if (call && (clipboard_typeof(call) === \"object\" || typeof call === \"function\")) { return call; } return _assertThisInitialized(self); }\n\nfunction _assertThisInitialized(self) { if (self === void 0) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return self; }\n\nfunction _isNativeReflectConstruct() { if (typeof Reflect === \"undefined\" || !Reflect.construct) return false; if (Reflect.construct.sham) return false; if (typeof Proxy === \"function\") return true; try { Date.prototype.toString.call(Reflect.construct(Date, [], function () {})); return true; } catch (e) { return false; } }\n\nfunction _getPrototypeOf(o) { _getPrototypeOf = Object.setPrototypeOf ? 
Object.getPrototypeOf : function _getPrototypeOf(o) { return o.__proto__ || Object.getPrototypeOf(o); }; return _getPrototypeOf(o); }\n\n\n\n\n\n\n/**\n * Helper function to retrieve attribute value.\n * @param {String} suffix\n * @param {Element} element\n */\n\nfunction getAttributeValue(suffix, element) {\n var attribute = \"data-clipboard-\".concat(suffix);\n\n if (!element.hasAttribute(attribute)) {\n return;\n }\n\n return element.getAttribute(attribute);\n}\n/**\n * Base class which takes one or more elements, adds event listeners to them,\n * and instantiates a new `ClipboardAction` on each click.\n */\n\n\nvar Clipboard = /*#__PURE__*/function (_Emitter) {\n _inherits(Clipboard, _Emitter);\n\n var _super = _createSuper(Clipboard);\n\n /**\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n * @param {Object} options\n */\n function Clipboard(trigger, options) {\n var _this;\n\n _classCallCheck(this, Clipboard);\n\n _this = _super.call(this);\n\n _this.resolveOptions(options);\n\n _this.listenClick(trigger);\n\n return _this;\n }\n /**\n * Defines if attributes would be resolved using internal setter functions\n * or custom functions that were passed in the constructor.\n * @param {Object} options\n */\n\n\n _createClass(Clipboard, [{\n key: \"resolveOptions\",\n value: function resolveOptions() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n this.action = typeof options.action === 'function' ? options.action : this.defaultAction;\n this.target = typeof options.target === 'function' ? options.target : this.defaultTarget;\n this.text = typeof options.text === 'function' ? options.text : this.defaultText;\n this.container = clipboard_typeof(options.container) === 'object' ? options.container : document.body;\n }\n /**\n * Adds a click event listener to the passed trigger.\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n */\n\n }, {\n key: \"listenClick\",\n value: function listenClick(trigger) {\n var _this2 = this;\n\n this.listener = listen_default()(trigger, 'click', function (e) {\n return _this2.onClick(e);\n });\n }\n /**\n * Defines a new `ClipboardAction` on each click event.\n * @param {Event} e\n */\n\n }, {\n key: \"onClick\",\n value: function onClick(e) {\n var trigger = e.delegateTarget || e.currentTarget;\n var action = this.action(trigger) || 'copy';\n var text = actions_default({\n action: action,\n container: this.container,\n target: this.target(trigger),\n text: this.text(trigger)\n }); // Fires an event based on the copy operation result.\n\n this.emit(text ? 
'success' : 'error', {\n action: action,\n text: text,\n trigger: trigger,\n clearSelection: function clearSelection() {\n if (trigger) {\n trigger.focus();\n }\n\n window.getSelection().removeAllRanges();\n }\n });\n }\n /**\n * Default `action` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultAction\",\n value: function defaultAction(trigger) {\n return getAttributeValue('action', trigger);\n }\n /**\n * Default `target` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultTarget\",\n value: function defaultTarget(trigger) {\n var selector = getAttributeValue('target', trigger);\n\n if (selector) {\n return document.querySelector(selector);\n }\n }\n /**\n * Allow fire programmatically a copy action\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @returns Text copied.\n */\n\n }, {\n key: \"defaultText\",\n\n /**\n * Default `text` lookup function.\n * @param {Element} trigger\n */\n value: function defaultText(trigger) {\n return getAttributeValue('text', trigger);\n }\n /**\n * Destroy lifecycle.\n */\n\n }, {\n key: \"destroy\",\n value: function destroy() {\n this.listener.destroy();\n }\n }], [{\n key: \"copy\",\n value: function copy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n return actions_copy(target, options);\n }\n /**\n * Allow fire programmatically a cut action\n * @param {String|HTMLElement} target\n * @returns Text cutted.\n */\n\n }, {\n key: \"cut\",\n value: function cut(target) {\n return actions_cut(target);\n }\n /**\n * Returns the support of the given action, or all actions if no action is\n * given.\n * @param {String} [action]\n */\n\n }, {\n key: \"isSupported\",\n value: function isSupported() {\n var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : ['copy', 'cut'];\n var actions = typeof action === 'string' ? 
[action] : action;\n var support = !!document.queryCommandSupported;\n actions.forEach(function (action) {\n support = support && !!document.queryCommandSupported(action);\n });\n return support;\n }\n }]);\n\n return Clipboard;\n}((tiny_emitter_default()));\n\n/* harmony default export */ var clipboard = (Clipboard);\n\n/***/ }),\n\n/***/ 828:\n/***/ (function(module) {\n\nvar DOCUMENT_NODE_TYPE = 9;\n\n/**\n * A polyfill for Element.matches()\n */\nif (typeof Element !== 'undefined' && !Element.prototype.matches) {\n var proto = Element.prototype;\n\n proto.matches = proto.matchesSelector ||\n proto.mozMatchesSelector ||\n proto.msMatchesSelector ||\n proto.oMatchesSelector ||\n proto.webkitMatchesSelector;\n}\n\n/**\n * Finds the closest parent that matches a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @return {Function}\n */\nfunction closest (element, selector) {\n while (element && element.nodeType !== DOCUMENT_NODE_TYPE) {\n if (typeof element.matches === 'function' &&\n element.matches(selector)) {\n return element;\n }\n element = element.parentNode;\n }\n}\n\nmodule.exports = closest;\n\n\n/***/ }),\n\n/***/ 438:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar closest = __webpack_require__(828);\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction _delegate(element, selector, type, callback, useCapture) {\n var listenerFn = listener.apply(this, arguments);\n\n element.addEventListener(type, listenerFn, useCapture);\n\n return {\n destroy: function() {\n element.removeEventListener(type, listenerFn, useCapture);\n }\n }\n}\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element|String|Array} [elements]\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction delegate(elements, selector, type, callback, useCapture) {\n // Handle the regular Element usage\n if (typeof elements.addEventListener === 'function') {\n return _delegate.apply(null, arguments);\n }\n\n // Handle Element-less usage, it defaults to global delegation\n if (typeof type === 'function') {\n // Use `document` as the first parameter, then apply arguments\n // This is a short way to .unshift `arguments` without running into deoptimizations\n return _delegate.bind(null, document).apply(null, arguments);\n }\n\n // Handle Selector-based usage\n if (typeof elements === 'string') {\n elements = document.querySelectorAll(elements);\n }\n\n // Handle Array-like based usage\n return Array.prototype.map.call(elements, function (element) {\n return _delegate(element, selector, type, callback, useCapture);\n });\n}\n\n/**\n * Finds closest match and invokes callback.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Function}\n */\nfunction listener(element, selector, type, callback) {\n return function(e) {\n e.delegateTarget = closest(e.target, selector);\n\n if (e.delegateTarget) {\n callback.call(element, e);\n }\n }\n}\n\nmodule.exports = delegate;\n\n\n/***/ }),\n\n/***/ 879:\n/***/ (function(__unused_webpack_module, exports) {\n\n/**\n * Check if argument is a HTML element.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.node = function(value) {\n return value !== undefined\n && 
value instanceof HTMLElement\n && value.nodeType === 1;\n};\n\n/**\n * Check if argument is a list of HTML elements.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.nodeList = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return value !== undefined\n && (type === '[object NodeList]' || type === '[object HTMLCollection]')\n && ('length' in value)\n && (value.length === 0 || exports.node(value[0]));\n};\n\n/**\n * Check if argument is a string.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.string = function(value) {\n return typeof value === 'string'\n || value instanceof String;\n};\n\n/**\n * Check if argument is a function.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.fn = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return type === '[object Function]';\n};\n\n\n/***/ }),\n\n/***/ 370:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar is = __webpack_require__(879);\nvar delegate = __webpack_require__(438);\n\n/**\n * Validates all params and calls the right\n * listener function based on its target type.\n *\n * @param {String|HTMLElement|HTMLCollection|NodeList} target\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listen(target, type, callback) {\n if (!target && !type && !callback) {\n throw new Error('Missing required arguments');\n }\n\n if (!is.string(type)) {\n throw new TypeError('Second argument must be a String');\n }\n\n if (!is.fn(callback)) {\n throw new TypeError('Third argument must be a Function');\n }\n\n if (is.node(target)) {\n return listenNode(target, type, callback);\n }\n else if (is.nodeList(target)) {\n return listenNodeList(target, type, callback);\n }\n else if (is.string(target)) {\n return listenSelector(target, type, callback);\n }\n else {\n throw new TypeError('First argument must be a String, HTMLElement, HTMLCollection, or NodeList');\n }\n}\n\n/**\n * Adds an event listener to a HTML element\n * and returns a remove listener function.\n *\n * @param {HTMLElement} node\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNode(node, type, callback) {\n node.addEventListener(type, callback);\n\n return {\n destroy: function() {\n node.removeEventListener(type, callback);\n }\n }\n}\n\n/**\n * Add an event listener to a list of HTML elements\n * and returns a remove listener function.\n *\n * @param {NodeList|HTMLCollection} nodeList\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNodeList(nodeList, type, callback) {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.addEventListener(type, callback);\n });\n\n return {\n destroy: function() {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.removeEventListener(type, callback);\n });\n }\n }\n}\n\n/**\n * Add an event listener to a selector\n * and returns a remove listener function.\n *\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenSelector(selector, type, callback) {\n return delegate(document.body, selector, type, callback);\n}\n\nmodule.exports = listen;\n\n\n/***/ }),\n\n/***/ 817:\n/***/ (function(module) {\n\nfunction select(element) {\n var selectedText;\n\n if (element.nodeName === 'SELECT') {\n element.focus();\n\n selectedText = element.value;\n }\n else if (element.nodeName === 'INPUT' || element.nodeName 
=== 'TEXTAREA') {\n var isReadOnly = element.hasAttribute('readonly');\n\n if (!isReadOnly) {\n element.setAttribute('readonly', '');\n }\n\n element.select();\n element.setSelectionRange(0, element.value.length);\n\n if (!isReadOnly) {\n element.removeAttribute('readonly');\n }\n\n selectedText = element.value;\n }\n else {\n if (element.hasAttribute('contenteditable')) {\n element.focus();\n }\n\n var selection = window.getSelection();\n var range = document.createRange();\n\n range.selectNodeContents(element);\n selection.removeAllRanges();\n selection.addRange(range);\n\n selectedText = selection.toString();\n }\n\n return selectedText;\n}\n\nmodule.exports = select;\n\n\n/***/ }),\n\n/***/ 279:\n/***/ (function(module) {\n\nfunction E () {\n // Keep this empty so it's easier to inherit from\n // (via https://github.com/lipsmack from https://github.com/scottcorgan/tiny-emitter/issues/3)\n}\n\nE.prototype = {\n on: function (name, callback, ctx) {\n var e = this.e || (this.e = {});\n\n (e[name] || (e[name] = [])).push({\n fn: callback,\n ctx: ctx\n });\n\n return this;\n },\n\n once: function (name, callback, ctx) {\n var self = this;\n function listener () {\n self.off(name, listener);\n callback.apply(ctx, arguments);\n };\n\n listener._ = callback\n return this.on(name, listener, ctx);\n },\n\n emit: function (name) {\n var data = [].slice.call(arguments, 1);\n var evtArr = ((this.e || (this.e = {}))[name] || []).slice();\n var i = 0;\n var len = evtArr.length;\n\n for (i; i < len; i++) {\n evtArr[i].fn.apply(evtArr[i].ctx, data);\n }\n\n return this;\n },\n\n off: function (name, callback) {\n var e = this.e || (this.e = {});\n var evts = e[name];\n var liveEvents = [];\n\n if (evts && callback) {\n for (var i = 0, len = evts.length; i < len; i++) {\n if (evts[i].fn !== callback && evts[i].fn._ !== callback)\n liveEvents.push(evts[i]);\n }\n }\n\n // Remove event from queue to prevent memory leak\n // Suggested by https://github.com/lazd\n // Ref: https://github.com/scottcorgan/tiny-emitter/commit/c6ebfaa9bc973b33d110a84a307742b7cf94c953#commitcomment-5024910\n\n (liveEvents.length)\n ? 
e[name] = liveEvents\n : delete e[name];\n\n return this;\n }\n};\n\nmodule.exports = E;\nmodule.exports.TinyEmitter = E;\n\n\n/***/ })\n\n/******/ \t});\n/************************************************************************/\n/******/ \t// The module cache\n/******/ \tvar __webpack_module_cache__ = {};\n/******/ \t\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(__webpack_module_cache__[moduleId]) {\n/******/ \t\t\treturn __webpack_module_cache__[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = __webpack_module_cache__[moduleId] = {\n/******/ \t\t\t// no module.id needed\n/******/ \t\t\t// no module.loaded needed\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/ \t\n/******/ \t\t// Execute the module function\n/******/ \t\t__webpack_modules__[moduleId](module, module.exports, __webpack_require__);\n/******/ \t\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/ \t\n/************************************************************************/\n/******/ \t/* webpack/runtime/compat get default export */\n/******/ \t!function() {\n/******/ \t\t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t\t__webpack_require__.n = function(module) {\n/******/ \t\t\tvar getter = module && module.__esModule ?\n/******/ \t\t\t\tfunction() { return module['default']; } :\n/******/ \t\t\t\tfunction() { return module; };\n/******/ \t\t\t__webpack_require__.d(getter, { a: getter });\n/******/ \t\t\treturn getter;\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/define property getters */\n/******/ \t!function() {\n/******/ \t\t// define getter functions for harmony exports\n/******/ \t\t__webpack_require__.d = function(exports, definition) {\n/******/ \t\t\tfor(var key in definition) {\n/******/ \t\t\t\tif(__webpack_require__.o(definition, key) && !__webpack_require__.o(exports, key)) {\n/******/ \t\t\t\t\tObject.defineProperty(exports, key, { enumerable: true, get: definition[key] });\n/******/ \t\t\t\t}\n/******/ \t\t\t}\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/hasOwnProperty shorthand */\n/******/ \t!function() {\n/******/ \t\t__webpack_require__.o = function(obj, prop) { return Object.prototype.hasOwnProperty.call(obj, prop); }\n/******/ \t}();\n/******/ \t\n/************************************************************************/\n/******/ \t// module exports must be returned from runtime so entry inlining is disabled\n/******/ \t// startup\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(686);\n/******/ })()\n.default;\n});", "/*\n * Copyright (c) 2016-2025 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF 
ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport \"focus-visible\"\n\nimport {\n EMPTY,\n NEVER,\n Observable,\n Subject,\n defer,\n delay,\n filter,\n map,\n merge,\n mergeWith,\n shareReplay,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"./_\"\nimport {\n at,\n getActiveElement,\n getOptionalElement,\n requestJSON,\n setLocation,\n setToggle,\n watchDocument,\n watchKeyboard,\n watchLocation,\n watchLocationTarget,\n watchMedia,\n watchPrint,\n watchScript,\n watchViewport\n} from \"./browser\"\nimport {\n getComponentElement,\n getComponentElements,\n mountAnnounce,\n mountBackToTop,\n mountConsent,\n mountContent,\n mountDialog,\n mountHeader,\n mountHeaderTitle,\n mountPalette,\n mountProgress,\n mountSearch,\n mountSearchHiglight,\n mountSidebar,\n mountSource,\n mountTableOfContents,\n mountTabs,\n watchHeader,\n watchMain\n} from \"./components\"\nimport {\n SearchIndex,\n fetchSitemap,\n setupAlternate,\n setupClipboardJS,\n setupInstantNavigation,\n setupVersionSelector\n} from \"./integrations\"\nimport {\n patchEllipsis,\n patchIndeterminate,\n patchScrollfix,\n patchScrolllock\n} from \"./patches\"\nimport \"./polyfills\"\n\n/* ----------------------------------------------------------------------------\n * Functions - @todo refactor\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch search index\n *\n * @returns Search index observable\n */\nfunction fetchSearchIndex(): Observable {\n if (location.protocol === \"file:\") {\n return watchScript(\n `${new URL(\"search/search_index.js\", config.base)}`\n )\n .pipe(\n // @ts-ignore - @todo fix typings\n map(() => __index),\n shareReplay(1)\n )\n } else {\n return requestJSON(\n new URL(\"search/search_index.json\", config.base)\n )\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Application\n * ------------------------------------------------------------------------- */\n\n/* Yay, JavaScript is available */\ndocument.documentElement.classList.remove(\"no-js\")\ndocument.documentElement.classList.add(\"js\")\n\n/* Set up navigation observables and subjects */\nconst document$ = watchDocument()\nconst location$ = watchLocation()\nconst target$ = watchLocationTarget(location$)\nconst keyboard$ = watchKeyboard()\n\n/* Set up media observables */\nconst viewport$ = watchViewport()\nconst tablet$ = watchMedia(\"(min-width: 60em)\")\nconst screen$ = watchMedia(\"(min-width: 76.25em)\")\nconst print$ = watchPrint()\n\n/* Retrieve search index, if search is enabled */\nconst config = configuration()\nconst index$ = document.forms.namedItem(\"search\")\n ? 
fetchSearchIndex()\n : NEVER\n\n/* Set up Clipboard.js integration */\nconst alert$ = new Subject()\nsetupClipboardJS({ alert$ })\n\n/* Set up language selector */\nsetupAlternate({ document$ })\n\n/* Set up progress indicator */\nconst progress$ = new Subject()\n\n/* Set up sitemap for instant navigation and previews */\nconst sitemap$ = fetchSitemap(config.base)\n\n/* Set up instant navigation, if enabled */\nif (feature(\"navigation.instant\"))\n setupInstantNavigation({ sitemap$, location$, viewport$, progress$ })\n .subscribe(document$)\n\n/* Set up version selector */\nif (config.version?.provider === \"mike\")\n setupVersionSelector({ document$ })\n\n/* Always close drawer and search on navigation */\nmerge(location$, target$)\n .pipe(\n delay(125)\n )\n .subscribe(() => {\n setToggle(\"drawer\", false)\n setToggle(\"search\", false)\n })\n\n/* Set up global keyboard handlers */\nkeyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Go to previous page */\n case \"p\":\n case \",\":\n const prev = getOptionalElement(\"link[rel=prev]\")\n if (typeof prev !== \"undefined\")\n setLocation(prev)\n break\n\n /* Go to next page */\n case \"n\":\n case \".\":\n const next = getOptionalElement(\"link[rel=next]\")\n if (typeof next !== \"undefined\")\n setLocation(next)\n break\n\n /* Expand navigation, see https://bit.ly/3ZjG5io */\n case \"Enter\":\n const active = getActiveElement()\n if (active instanceof HTMLLabelElement)\n active.click()\n }\n })\n\n/* Set up patches */\npatchEllipsis({ viewport$, document$ })\npatchIndeterminate({ document$, tablet$ })\npatchScrollfix({ document$ })\npatchScrolllock({ viewport$, tablet$ })\n\n/* Set up header and main area observable */\nconst header$ = watchHeader(getComponentElement(\"header\"), { viewport$ })\nconst main$ = document$\n .pipe(\n map(() => getComponentElement(\"main\")),\n switchMap(el => watchMain(el, { viewport$, header$ })),\n shareReplay(1)\n )\n\n/* Set up control component observables */\nconst control$ = merge(\n\n /* Consent */\n ...getComponentElements(\"consent\")\n .map(el => mountConsent(el, { target$ })),\n\n /* Dialog */\n ...getComponentElements(\"dialog\")\n .map(el => mountDialog(el, { alert$ })),\n\n /* Color palette */\n ...getComponentElements(\"palette\")\n .map(el => mountPalette(el)),\n\n /* Progress bar */\n ...getComponentElements(\"progress\")\n .map(el => mountProgress(el, { progress$ })),\n\n /* Search */\n ...getComponentElements(\"search\")\n .map(el => mountSearch(el, { index$, keyboard$ })),\n\n /* Repository information */\n ...getComponentElements(\"source\")\n .map(el => mountSource(el))\n)\n\n/* Set up content component observables */\nconst content$ = defer(() => merge(\n\n /* Announcement bar */\n ...getComponentElements(\"announce\")\n .map(el => mountAnnounce(el)),\n\n /* Content */\n ...getComponentElements(\"content\")\n .map(el => mountContent(el, { sitemap$, viewport$, target$, print$ })),\n\n /* Search highlighting */\n ...getComponentElements(\"content\")\n .map(el => feature(\"search.highlight\")\n ? mountSearchHiglight(el, { index$, location$ })\n : EMPTY\n ),\n\n /* Header */\n ...getComponentElements(\"header\")\n .map(el => mountHeader(el, { viewport$, header$, main$ })),\n\n /* Header title */\n ...getComponentElements(\"header-title\")\n .map(el => mountHeaderTitle(el, { viewport$, header$ })),\n\n /* Sidebar */\n ...getComponentElements(\"sidebar\")\n .map(el => el.getAttribute(\"data-md-type\") === \"navigation\"\n ? 
at(screen$, () => mountSidebar(el, { viewport$, header$, main$ }))\n : at(tablet$, () => mountSidebar(el, { viewport$, header$, main$ }))\n ),\n\n /* Navigation tabs */\n ...getComponentElements(\"tabs\")\n .map(el => mountTabs(el, { viewport$, header$ })),\n\n /* Table of contents */\n ...getComponentElements(\"toc\")\n .map(el => mountTableOfContents(el, {\n viewport$, header$, main$, target$\n })),\n\n /* Back-to-top button */\n ...getComponentElements(\"top\")\n .map(el => mountBackToTop(el, { viewport$, header$, main$, target$ }))\n))\n\n/* Set up component observables */\nconst component$ = document$\n .pipe(\n switchMap(() => content$),\n mergeWith(control$),\n shareReplay(1)\n )\n\n/* Subscribe to all components */\ncomponent$.subscribe()\n\n/* ----------------------------------------------------------------------------\n * Exports\n * ------------------------------------------------------------------------- */\n\nwindow.document$ = document$ /* Document observable */\nwindow.location$ = location$ /* Location subject */\nwindow.target$ = target$ /* Location target observable */\nwindow.keyboard$ = keyboard$ /* Keyboard observable */\nwindow.viewport$ = viewport$ /* Viewport observable */\nwindow.tablet$ = tablet$ /* Media tablet observable */\nwindow.screen$ = screen$ /* Media screen observable */\nwindow.print$ = print$ /* Media print observable */\nwindow.alert$ = alert$ /* Alert subject */\nwindow.progress$ = progress$ /* Progress indicator subject */\nwindow.component$ = component$ /* Component observable */\n", "/******************************************************************************\nCopyright (c) Microsoft Corporation.\n\nPermission to use, copy, modify, and/or distribute this software for any\npurpose with or without fee is hereby granted.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH\nREGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY\nAND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,\nINDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM\nLOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR\nOTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR\nPERFORMANCE OF THIS SOFTWARE.\n***************************************************************************** */\n/* global Reflect, Promise, SuppressedError, Symbol, Iterator */\n\nvar extendStatics = function(d, b) {\n extendStatics = Object.setPrototypeOf ||\n ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||\n function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; };\n return extendStatics(d, b);\n};\n\nexport function __extends(d, b) {\n if (typeof b !== \"function\" && b !== null)\n throw new TypeError(\"Class extends value \" + String(b) + \" is not a constructor or null\");\n extendStatics(d, b);\n function __() { this.constructor = d; }\n d.prototype = b === null ? 
Object.create(b) : (__.prototype = b.prototype, new __());\n}\n\nexport var __assign = function() {\n __assign = Object.assign || function __assign(t) {\n for (var s, i = 1, n = arguments.length; i < n; i++) {\n s = arguments[i];\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];\n }\n return t;\n }\n return __assign.apply(this, arguments);\n}\n\nexport function __rest(s, e) {\n var t = {};\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)\n t[p] = s[p];\n if (s != null && typeof Object.getOwnPropertySymbols === \"function\")\n for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {\n if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))\n t[p[i]] = s[p[i]];\n }\n return t;\n}\n\nexport function __decorate(decorators, target, key, desc) {\n var c = arguments.length, r = c < 3 ? target : desc === null ? desc = Object.getOwnPropertyDescriptor(target, key) : desc, d;\n if (typeof Reflect === \"object\" && typeof Reflect.decorate === \"function\") r = Reflect.decorate(decorators, target, key, desc);\n else for (var i = decorators.length - 1; i >= 0; i--) if (d = decorators[i]) r = (c < 3 ? d(r) : c > 3 ? d(target, key, r) : d(target, key)) || r;\n return c > 3 && r && Object.defineProperty(target, key, r), r;\n}\n\nexport function __param(paramIndex, decorator) {\n return function (target, key) { decorator(target, key, paramIndex); }\n}\n\nexport function __esDecorate(ctor, descriptorIn, decorators, contextIn, initializers, extraInitializers) {\n function accept(f) { if (f !== void 0 && typeof f !== \"function\") throw new TypeError(\"Function expected\"); return f; }\n var kind = contextIn.kind, key = kind === \"getter\" ? \"get\" : kind === \"setter\" ? \"set\" : \"value\";\n var target = !descriptorIn && ctor ? contextIn[\"static\"] ? ctor : ctor.prototype : null;\n var descriptor = descriptorIn || (target ? Object.getOwnPropertyDescriptor(target, contextIn.name) : {});\n var _, done = false;\n for (var i = decorators.length - 1; i >= 0; i--) {\n var context = {};\n for (var p in contextIn) context[p] = p === \"access\" ? {} : contextIn[p];\n for (var p in contextIn.access) context.access[p] = contextIn.access[p];\n context.addInitializer = function (f) { if (done) throw new TypeError(\"Cannot add initializers after decoration has completed\"); extraInitializers.push(accept(f || null)); };\n var result = (0, decorators[i])(kind === \"accessor\" ? { get: descriptor.get, set: descriptor.set } : descriptor[key], context);\n if (kind === \"accessor\") {\n if (result === void 0) continue;\n if (result === null || typeof result !== \"object\") throw new TypeError(\"Object expected\");\n if (_ = accept(result.get)) descriptor.get = _;\n if (_ = accept(result.set)) descriptor.set = _;\n if (_ = accept(result.init)) initializers.unshift(_);\n }\n else if (_ = accept(result)) {\n if (kind === \"field\") initializers.unshift(_);\n else descriptor[key] = _;\n }\n }\n if (target) Object.defineProperty(target, contextIn.name, descriptor);\n done = true;\n};\n\nexport function __runInitializers(thisArg, initializers, value) {\n var useValue = arguments.length > 2;\n for (var i = 0; i < initializers.length; i++) {\n value = useValue ? initializers[i].call(thisArg, value) : initializers[i].call(thisArg);\n }\n return useValue ? value : void 0;\n};\n\nexport function __propKey(x) {\n return typeof x === \"symbol\" ? 
x : \"\".concat(x);\n};\n\nexport function __setFunctionName(f, name, prefix) {\n if (typeof name === \"symbol\") name = name.description ? \"[\".concat(name.description, \"]\") : \"\";\n return Object.defineProperty(f, \"name\", { configurable: true, value: prefix ? \"\".concat(prefix, \" \", name) : name });\n};\n\nexport function __metadata(metadataKey, metadataValue) {\n if (typeof Reflect === \"object\" && typeof Reflect.metadata === \"function\") return Reflect.metadata(metadataKey, metadataValue);\n}\n\nexport function __awaiter(thisArg, _arguments, P, generator) {\n function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }\n return new (P || (P = Promise))(function (resolve, reject) {\n function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }\n function rejected(value) { try { step(generator[\"throw\"](value)); } catch (e) { reject(e); } }\n function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }\n step((generator = generator.apply(thisArg, _arguments || [])).next());\n });\n}\n\nexport function __generator(thisArg, body) {\n var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g = Object.create((typeof Iterator === \"function\" ? Iterator : Object).prototype);\n return g.next = verb(0), g[\"throw\"] = verb(1), g[\"return\"] = verb(2), typeof Symbol === \"function\" && (g[Symbol.iterator] = function() { return this; }), g;\n function verb(n) { return function (v) { return step([n, v]); }; }\n function step(op) {\n if (f) throw new TypeError(\"Generator is already executing.\");\n while (g && (g = 0, op[0] && (_ = 0)), _) try {\n if (f = 1, y && (t = op[0] & 2 ? y[\"return\"] : op[0] ? y[\"throw\"] || ((t = y[\"return\"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;\n if (y = 0, t) op = [op[0] & 2, t.value];\n switch (op[0]) {\n case 0: case 1: t = op; break;\n case 4: _.label++; return { value: op[1], done: false };\n case 5: _.label++; y = op[1]; op = [0]; continue;\n case 7: op = _.ops.pop(); _.trys.pop(); continue;\n default:\n if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }\n if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }\n if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }\n if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }\n if (t[2]) _.ops.pop();\n _.trys.pop(); continue;\n }\n op = body.call(thisArg, _);\n } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }\n if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };\n }\n}\n\nexport var __createBinding = Object.create ? (function(o, m, k, k2) {\n if (k2 === undefined) k2 = k;\n var desc = Object.getOwnPropertyDescriptor(m, k);\n if (!desc || (\"get\" in desc ? 
!m.__esModule : desc.writable || desc.configurable)) {\n desc = { enumerable: true, get: function() { return m[k]; } };\n }\n Object.defineProperty(o, k2, desc);\n}) : (function(o, m, k, k2) {\n if (k2 === undefined) k2 = k;\n o[k2] = m[k];\n});\n\nexport function __exportStar(m, o) {\n for (var p in m) if (p !== \"default\" && !Object.prototype.hasOwnProperty.call(o, p)) __createBinding(o, m, p);\n}\n\nexport function __values(o) {\n var s = typeof Symbol === \"function\" && Symbol.iterator, m = s && o[s], i = 0;\n if (m) return m.call(o);\n if (o && typeof o.length === \"number\") return {\n next: function () {\n if (o && i >= o.length) o = void 0;\n return { value: o && o[i++], done: !o };\n }\n };\n throw new TypeError(s ? \"Object is not iterable.\" : \"Symbol.iterator is not defined.\");\n}\n\nexport function __read(o, n) {\n var m = typeof Symbol === \"function\" && o[Symbol.iterator];\n if (!m) return o;\n var i = m.call(o), r, ar = [], e;\n try {\n while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);\n }\n catch (error) { e = { error: error }; }\n finally {\n try {\n if (r && !r.done && (m = i[\"return\"])) m.call(i);\n }\n finally { if (e) throw e.error; }\n }\n return ar;\n}\n\n/** @deprecated */\nexport function __spread() {\n for (var ar = [], i = 0; i < arguments.length; i++)\n ar = ar.concat(__read(arguments[i]));\n return ar;\n}\n\n/** @deprecated */\nexport function __spreadArrays() {\n for (var s = 0, i = 0, il = arguments.length; i < il; i++) s += arguments[i].length;\n for (var r = Array(s), k = 0, i = 0; i < il; i++)\n for (var a = arguments[i], j = 0, jl = a.length; j < jl; j++, k++)\n r[k] = a[j];\n return r;\n}\n\nexport function __spreadArray(to, from, pack) {\n if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) {\n if (ar || !(i in from)) {\n if (!ar) ar = Array.prototype.slice.call(from, 0, i);\n ar[i] = from[i];\n }\n }\n return to.concat(ar || Array.prototype.slice.call(from));\n}\n\nexport function __await(v) {\n return this instanceof __await ? (this.v = v, this) : new __await(v);\n}\n\nexport function __asyncGenerator(thisArg, _arguments, generator) {\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\n var g = generator.apply(thisArg, _arguments || []), i, q = [];\n return i = Object.create((typeof AsyncIterator === \"function\" ? AsyncIterator : Object).prototype), verb(\"next\"), verb(\"throw\"), verb(\"return\", awaitReturn), i[Symbol.asyncIterator] = function () { return this; }, i;\n function awaitReturn(f) { return function (v) { return Promise.resolve(v).then(f, reject); }; }\n function verb(n, f) { if (g[n]) { i[n] = function (v) { return new Promise(function (a, b) { q.push([n, v, a, b]) > 1 || resume(n, v); }); }; if (f) i[n] = f(i[n]); } }\n function resume(n, v) { try { step(g[n](v)); } catch (e) { settle(q[0][3], e); } }\n function step(r) { r.value instanceof __await ? Promise.resolve(r.value.v).then(fulfill, reject) : settle(q[0][2], r); }\n function fulfill(value) { resume(\"next\", value); }\n function reject(value) { resume(\"throw\", value); }\n function settle(f, v) { if (f(v), q.shift(), q.length) resume(q[0][0], q[0][1]); }\n}\n\nexport function __asyncDelegator(o) {\n var i, p;\n return i = {}, verb(\"next\"), verb(\"throw\", function (e) { throw e; }), verb(\"return\"), i[Symbol.iterator] = function () { return this; }, i;\n function verb(n, f) { i[n] = o[n] ? function (v) { return (p = !p) ? 
{ value: __await(o[n](v)), done: false } : f ? f(v) : v; } : f; }\n}\n\nexport function __asyncValues(o) {\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\n var m = o[Symbol.asyncIterator], i;\n return m ? m.call(o) : (o = typeof __values === \"function\" ? __values(o) : o[Symbol.iterator](), i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i);\n function verb(n) { i[n] = o[n] && function (v) { return new Promise(function (resolve, reject) { v = o[n](v), settle(resolve, reject, v.done, v.value); }); }; }\n function settle(resolve, reject, d, v) { Promise.resolve(v).then(function(v) { resolve({ value: v, done: d }); }, reject); }\n}\n\nexport function __makeTemplateObject(cooked, raw) {\n if (Object.defineProperty) { Object.defineProperty(cooked, \"raw\", { value: raw }); } else { cooked.raw = raw; }\n return cooked;\n};\n\nvar __setModuleDefault = Object.create ? (function(o, v) {\n Object.defineProperty(o, \"default\", { enumerable: true, value: v });\n}) : function(o, v) {\n o[\"default\"] = v;\n};\n\nexport function __importStar(mod) {\n if (mod && mod.__esModule) return mod;\n var result = {};\n if (mod != null) for (var k in mod) if (k !== \"default\" && Object.prototype.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);\n __setModuleDefault(result, mod);\n return result;\n}\n\nexport function __importDefault(mod) {\n return (mod && mod.__esModule) ? mod : { default: mod };\n}\n\nexport function __classPrivateFieldGet(receiver, state, kind, f) {\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a getter\");\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot read private member from an object whose class did not declare it\");\n return kind === \"m\" ? f : kind === \"a\" ? f.call(receiver) : f ? f.value : state.get(receiver);\n}\n\nexport function __classPrivateFieldSet(receiver, state, value, kind, f) {\n if (kind === \"m\") throw new TypeError(\"Private method is not writable\");\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a setter\");\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot write private member to an object whose class did not declare it\");\n return (kind === \"a\" ? f.call(receiver, value) : f ? f.value = value : state.set(receiver, value)), value;\n}\n\nexport function __classPrivateFieldIn(state, receiver) {\n if (receiver === null || (typeof receiver !== \"object\" && typeof receiver !== \"function\")) throw new TypeError(\"Cannot use 'in' operator on non-object\");\n return typeof state === \"function\" ? 
receiver === state : state.has(receiver);\n}\n\nexport function __addDisposableResource(env, value, async) {\n if (value !== null && value !== void 0) {\n if (typeof value !== \"object\" && typeof value !== \"function\") throw new TypeError(\"Object expected.\");\n var dispose, inner;\n if (async) {\n if (!Symbol.asyncDispose) throw new TypeError(\"Symbol.asyncDispose is not defined.\");\n dispose = value[Symbol.asyncDispose];\n }\n if (dispose === void 0) {\n if (!Symbol.dispose) throw new TypeError(\"Symbol.dispose is not defined.\");\n dispose = value[Symbol.dispose];\n if (async) inner = dispose;\n }\n if (typeof dispose !== \"function\") throw new TypeError(\"Object not disposable.\");\n if (inner) dispose = function() { try { inner.call(this); } catch (e) { return Promise.reject(e); } };\n env.stack.push({ value: value, dispose: dispose, async: async });\n }\n else if (async) {\n env.stack.push({ async: true });\n }\n return value;\n}\n\nvar _SuppressedError = typeof SuppressedError === \"function\" ? SuppressedError : function (error, suppressed, message) {\n var e = new Error(message);\n return e.name = \"SuppressedError\", e.error = error, e.suppressed = suppressed, e;\n};\n\nexport function __disposeResources(env) {\n function fail(e) {\n env.error = env.hasError ? new _SuppressedError(e, env.error, \"An error was suppressed during disposal.\") : e;\n env.hasError = true;\n }\n var r, s = 0;\n function next() {\n while (r = env.stack.pop()) {\n try {\n if (!r.async && s === 1) return s = 0, env.stack.push(r), Promise.resolve().then(next);\n if (r.dispose) {\n var result = r.dispose.call(r.value);\n if (r.async) return s |= 2, Promise.resolve(result).then(next, function(e) { fail(e); return next(); });\n }\n else s |= 1;\n }\n catch (e) {\n fail(e);\n }\n }\n if (s === 1) return env.hasError ? Promise.reject(env.error) : Promise.resolve();\n if (env.hasError) throw env.error;\n }\n return next();\n}\n\nexport default {\n __extends,\n __assign,\n __rest,\n __decorate,\n __param,\n __metadata,\n __awaiter,\n __generator,\n __createBinding,\n __exportStar,\n __values,\n __read,\n __spread,\n __spreadArrays,\n __spreadArray,\n __await,\n __asyncGenerator,\n __asyncDelegator,\n __asyncValues,\n __makeTemplateObject,\n __importStar,\n __importDefault,\n __classPrivateFieldGet,\n __classPrivateFieldSet,\n __classPrivateFieldIn,\n __addDisposableResource,\n __disposeResources,\n};\n", "/**\n * Returns true if the object is a function.\n * @param value The value to check\n */\nexport function isFunction(value: any): value is (...args: any[]) => any {\n return typeof value === 'function';\n}\n", "/**\n * Used to create Error subclasses until the community moves away from ES5.\n *\n * This is because compiling from TypeScript down to ES5 has issues with subclassing Errors\n * as well as other built-in types: https://github.com/Microsoft/TypeScript/issues/12123\n *\n * @param createImpl A factory function to create the actual constructor implementation. 
The returned\n * function should be a named function that calls `_super` internally.\n */\nexport function createErrorClass(createImpl: (_super: any) => any): T {\n const _super = (instance: any) => {\n Error.call(instance);\n instance.stack = new Error().stack;\n };\n\n const ctorFunc = createImpl(_super);\n ctorFunc.prototype = Object.create(Error.prototype);\n ctorFunc.prototype.constructor = ctorFunc;\n return ctorFunc;\n}\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface UnsubscriptionError extends Error {\n readonly errors: any[];\n}\n\nexport interface UnsubscriptionErrorCtor {\n /**\n * @deprecated Internal implementation detail. Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (errors: any[]): UnsubscriptionError;\n}\n\n/**\n * An error thrown when one or more errors have occurred during the\n * `unsubscribe` of a {@link Subscription}.\n */\nexport const UnsubscriptionError: UnsubscriptionErrorCtor = createErrorClass(\n (_super) =>\n function UnsubscriptionErrorImpl(this: any, errors: (Error | string)[]) {\n _super(this);\n this.message = errors\n ? `${errors.length} errors occurred during unsubscription:\n${errors.map((err, i) => `${i + 1}) ${err.toString()}`).join('\\n ')}`\n : '';\n this.name = 'UnsubscriptionError';\n this.errors = errors;\n }\n);\n", "/**\n * Removes an item from an array, mutating it.\n * @param arr The array to remove the item from\n * @param item The item to remove\n */\nexport function arrRemove(arr: T[] | undefined | null, item: T) {\n if (arr) {\n const index = arr.indexOf(item);\n 0 <= index && arr.splice(index, 1);\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { UnsubscriptionError } from './util/UnsubscriptionError';\nimport { SubscriptionLike, TeardownLogic, Unsubscribable } from './types';\nimport { arrRemove } from './util/arrRemove';\n\n/**\n * Represents a disposable resource, such as the execution of an Observable. A\n * Subscription has one important method, `unsubscribe`, that takes no argument\n * and just disposes the resource held by the subscription.\n *\n * Additionally, subscriptions may be grouped together through the `add()`\n * method, which will attach a child Subscription to the current Subscription.\n * When a Subscription is unsubscribed, all its children (and its grandchildren)\n * will be unsubscribed as well.\n */\nexport class Subscription implements SubscriptionLike {\n public static EMPTY = (() => {\n const empty = new Subscription();\n empty.closed = true;\n return empty;\n })();\n\n /**\n * A flag to indicate whether this Subscription has already been unsubscribed.\n */\n public closed = false;\n\n private _parentage: Subscription[] | Subscription | null = null;\n\n /**\n * The list of registered finalizers to execute upon unsubscription. Adding and removing from this\n * list occurs in the {@link #add} and {@link #remove} methods.\n */\n private _finalizers: Exclude[] | null = null;\n\n /**\n * @param initialTeardown A function executed first as part of the finalization\n * process that is kicked off when {@link #unsubscribe} is called.\n */\n constructor(private initialTeardown?: () => void) {}\n\n /**\n * Disposes the resources held by the subscription. 
May, for instance, cancel\n * an ongoing Observable execution or cancel any other type of work that\n * started when the Subscription was created.\n */\n unsubscribe(): void {\n let errors: any[] | undefined;\n\n if (!this.closed) {\n this.closed = true;\n\n // Remove this from it's parents.\n const { _parentage } = this;\n if (_parentage) {\n this._parentage = null;\n if (Array.isArray(_parentage)) {\n for (const parent of _parentage) {\n parent.remove(this);\n }\n } else {\n _parentage.remove(this);\n }\n }\n\n const { initialTeardown: initialFinalizer } = this;\n if (isFunction(initialFinalizer)) {\n try {\n initialFinalizer();\n } catch (e) {\n errors = e instanceof UnsubscriptionError ? e.errors : [e];\n }\n }\n\n const { _finalizers } = this;\n if (_finalizers) {\n this._finalizers = null;\n for (const finalizer of _finalizers) {\n try {\n execFinalizer(finalizer);\n } catch (err) {\n errors = errors ?? [];\n if (err instanceof UnsubscriptionError) {\n errors = [...errors, ...err.errors];\n } else {\n errors.push(err);\n }\n }\n }\n }\n\n if (errors) {\n throw new UnsubscriptionError(errors);\n }\n }\n }\n\n /**\n * Adds a finalizer to this subscription, so that finalization will be unsubscribed/called\n * when this subscription is unsubscribed. If this subscription is already {@link #closed},\n * because it has already been unsubscribed, then whatever finalizer is passed to it\n * will automatically be executed (unless the finalizer itself is also a closed subscription).\n *\n * Closed Subscriptions cannot be added as finalizers to any subscription. Adding a closed\n * subscription to a any subscription will result in no operation. (A noop).\n *\n * Adding a subscription to itself, or adding `null` or `undefined` will not perform any\n * operation at all. (A noop).\n *\n * `Subscription` instances that are added to this instance will automatically remove themselves\n * if they are unsubscribed. Functions and {@link Unsubscribable} objects that you wish to remove\n * will need to be removed manually with {@link #remove}\n *\n * @param teardown The finalization logic to add to this subscription.\n */\n add(teardown: TeardownLogic): void {\n // Only add the finalizer if it's not undefined\n // and don't add a subscription to itself.\n if (teardown && teardown !== this) {\n if (this.closed) {\n // If this subscription is already closed,\n // execute whatever finalizer is handed to it automatically.\n execFinalizer(teardown);\n } else {\n if (teardown instanceof Subscription) {\n // We don't add closed subscriptions, and we don't add the same subscription\n // twice. Subscription unsubscribe is idempotent.\n if (teardown.closed || teardown._hasParent(this)) {\n return;\n }\n teardown._addParent(this);\n }\n (this._finalizers = this._finalizers ?? 
[]).push(teardown);\n }\n }\n }\n\n /**\n * Checks to see if a this subscription already has a particular parent.\n * This will signal that this subscription has already been added to the parent in question.\n * @param parent the parent to check for\n */\n private _hasParent(parent: Subscription) {\n const { _parentage } = this;\n return _parentage === parent || (Array.isArray(_parentage) && _parentage.includes(parent));\n }\n\n /**\n * Adds a parent to this subscription so it can be removed from the parent if it\n * unsubscribes on it's own.\n *\n * NOTE: THIS ASSUMES THAT {@link _hasParent} HAS ALREADY BEEN CHECKED.\n * @param parent The parent subscription to add\n */\n private _addParent(parent: Subscription) {\n const { _parentage } = this;\n this._parentage = Array.isArray(_parentage) ? (_parentage.push(parent), _parentage) : _parentage ? [_parentage, parent] : parent;\n }\n\n /**\n * Called on a child when it is removed via {@link #remove}.\n * @param parent The parent to remove\n */\n private _removeParent(parent: Subscription) {\n const { _parentage } = this;\n if (_parentage === parent) {\n this._parentage = null;\n } else if (Array.isArray(_parentage)) {\n arrRemove(_parentage, parent);\n }\n }\n\n /**\n * Removes a finalizer from this subscription that was previously added with the {@link #add} method.\n *\n * Note that `Subscription` instances, when unsubscribed, will automatically remove themselves\n * from every other `Subscription` they have been added to. This means that using the `remove` method\n * is not a common thing and should be used thoughtfully.\n *\n * If you add the same finalizer instance of a function or an unsubscribable object to a `Subscription` instance\n * more than once, you will need to call `remove` the same number of times to remove all instances.\n *\n * All finalizer instances are removed to free up memory upon unsubscription.\n *\n * @param teardown The finalizer to remove from this subscription\n */\n remove(teardown: Exclude): void {\n const { _finalizers } = this;\n _finalizers && arrRemove(_finalizers, teardown);\n\n if (teardown instanceof Subscription) {\n teardown._removeParent(this);\n }\n }\n}\n\nexport const EMPTY_SUBSCRIPTION = Subscription.EMPTY;\n\nexport function isSubscription(value: any): value is Subscription {\n return (\n value instanceof Subscription ||\n (value && 'closed' in value && isFunction(value.remove) && isFunction(value.add) && isFunction(value.unsubscribe))\n );\n}\n\nfunction execFinalizer(finalizer: Unsubscribable | (() => void)) {\n if (isFunction(finalizer)) {\n finalizer();\n } else {\n finalizer.unsubscribe();\n }\n}\n", "import { Subscriber } from './Subscriber';\nimport { ObservableNotification } from './types';\n\n/**\n * The {@link GlobalConfig} object for RxJS. It is used to configure things\n * like how to react on unhandled errors.\n */\nexport const config: GlobalConfig = {\n onUnhandledError: null,\n onStoppedNotification: null,\n Promise: undefined,\n useDeprecatedSynchronousErrorHandling: false,\n useDeprecatedNextContext: false,\n};\n\n/**\n * The global configuration object for RxJS, used to configure things\n * like how to react on unhandled errors. Accessible via {@link config}\n * object.\n */\nexport interface GlobalConfig {\n /**\n * A registration point for unhandled errors from RxJS. These are errors that\n * cannot were not handled by consuming code in the usual subscription path. 
For\n * example, if you have this configured, and you subscribe to an observable without\n * providing an error handler, errors from that subscription will end up here. This\n * will _always_ be called asynchronously on another job in the runtime. This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onUnhandledError: ((err: any) => void) | null;\n\n /**\n * A registration point for notifications that cannot be sent to subscribers because they\n * have completed, errored or have been explicitly unsubscribed. By default, next, complete\n * and error notifications sent to stopped subscribers are noops. However, sometimes callers\n * might want a different behavior. For example, with sources that attempt to report errors\n * to stopped subscribers, a caller can configure RxJS to throw an unhandled error instead.\n * This will _always_ be called asynchronously on another job in the runtime. This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onStoppedNotification: ((notification: ObservableNotification, subscriber: Subscriber) => void) | null;\n\n /**\n * The promise constructor used by default for {@link Observable#toPromise toPromise} and {@link Observable#forEach forEach}\n * methods.\n *\n * @deprecated As of version 8, RxJS will no longer support this sort of injection of a\n * Promise constructor. If you need a Promise implementation other than native promises,\n * please polyfill/patch Promise as you see appropriate. Will be removed in v8.\n */\n Promise?: PromiseConstructorLike;\n\n /**\n * If true, turns on synchronous error rethrowing, which is a deprecated behavior\n * in v6 and higher. This behavior enables bad patterns like wrapping a subscribe\n * call in a try/catch block. It also enables producer interference, a nasty bug\n * where a multicast can be broken for all observers by a downstream consumer with\n * an unhandled error. DO NOT USE THIS FLAG UNLESS IT'S NEEDED TO BUY TIME\n * FOR MIGRATION REASONS.\n *\n * @deprecated As of version 8, RxJS will no longer support synchronous throwing\n * of unhandled errors. All errors will be thrown on a separate call stack to prevent bad\n * behaviors described above. Will be removed in v8.\n */\n useDeprecatedSynchronousErrorHandling: boolean;\n\n /**\n * If true, enables an as-of-yet undocumented feature from v5: The ability to access\n * `unsubscribe()` via `this` context in `next` functions created in observers passed\n * to `subscribe`.\n *\n * This is being removed because the performance was severely problematic, and it could also cause\n * issues when types other than POJOs are passed to subscribe as subscribers, as they will likely have\n * their `this` context overwritten.\n *\n * @deprecated As of version 8, RxJS will no longer support altering the\n * context of next functions provided as part of an observer to Subscribe. Instead,\n * you will have access to a subscription or a signal or token that will allow you to do things like\n * unsubscribe and test closed status. 
Will be removed in v8.\n */\n useDeprecatedNextContext: boolean;\n}\n", "import type { TimerHandle } from './timerHandle';\ntype SetTimeoutFunction = (handler: () => void, timeout?: number, ...args: any[]) => TimerHandle;\ntype ClearTimeoutFunction = (handle: TimerHandle) => void;\n\ninterface TimeoutProvider {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n delegate:\n | {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n }\n | undefined;\n}\n\nexport const timeoutProvider: TimeoutProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n setTimeout(handler: () => void, timeout?: number, ...args) {\n const { delegate } = timeoutProvider;\n if (delegate?.setTimeout) {\n return delegate.setTimeout(handler, timeout, ...args);\n }\n return setTimeout(handler, timeout, ...args);\n },\n clearTimeout(handle) {\n const { delegate } = timeoutProvider;\n return (delegate?.clearTimeout || clearTimeout)(handle as any);\n },\n delegate: undefined,\n};\n", "import { config } from '../config';\nimport { timeoutProvider } from '../scheduler/timeoutProvider';\n\n/**\n * Handles an error on another job either with the user-configured {@link onUnhandledError},\n * or by throwing it on that new job so it can be picked up by `window.onerror`, `process.on('error')`, etc.\n *\n * This should be called whenever there is an error that is out-of-band with the subscription\n * or when an error hits a terminal boundary of the subscription and no error handler was provided.\n *\n * @param err the error to report\n */\nexport function reportUnhandledError(err: any) {\n timeoutProvider.setTimeout(() => {\n const { onUnhandledError } = config;\n if (onUnhandledError) {\n // Execute the user-configured error handler.\n onUnhandledError(err);\n } else {\n // Throw so it is picked up by the runtime's uncaught error mechanism.\n throw err;\n }\n });\n}\n", "/* tslint:disable:no-empty */\nexport function noop() { }\n", "import { CompleteNotification, NextNotification, ErrorNotification } from './types';\n\n/**\n * A completion object optimized for memory use and created to be the\n * same \"shape\" as other notifications in v8.\n * @internal\n */\nexport const COMPLETE_NOTIFICATION = (() => createNotification('C', undefined, undefined) as CompleteNotification)();\n\n/**\n * Internal use only. Creates an optimized error notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function errorNotification(error: any): ErrorNotification {\n return createNotification('E', undefined, error) as any;\n}\n\n/**\n * Internal use only. Creates an optimized next notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function nextNotification(value: T) {\n return createNotification('N', value, undefined) as NextNotification;\n}\n\n/**\n * Ensures that all notifications created internally have the same \"shape\" in v8.\n *\n * TODO: This is only exported to support a crazy legacy test in `groupBy`.\n * @internal\n */\nexport function createNotification(kind: 'N' | 'E' | 'C', value: any, error: any) {\n return {\n kind,\n value,\n error,\n };\n}\n", "import { config } from '../config';\n\nlet context: { errorThrown: boolean; error: any } | null = null;\n\n/**\n * Handles dealing with errors for super-gross mode. 
Creates a context, in which\n * any synchronously thrown errors will be passed to {@link captureError}. Which\n * will record the error such that it will be rethrown after the call back is complete.\n * TODO: Remove in v8\n * @param cb An immediately executed function.\n */\nexport function errorContext(cb: () => void) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n const isRoot = !context;\n if (isRoot) {\n context = { errorThrown: false, error: null };\n }\n cb();\n if (isRoot) {\n const { errorThrown, error } = context!;\n context = null;\n if (errorThrown) {\n throw error;\n }\n }\n } else {\n // This is the general non-deprecated path for everyone that\n // isn't crazy enough to use super-gross mode (useDeprecatedSynchronousErrorHandling)\n cb();\n }\n}\n\n/**\n * Captures errors only in super-gross mode.\n * @param err the error to capture\n */\nexport function captureError(err: any) {\n if (config.useDeprecatedSynchronousErrorHandling && context) {\n context.errorThrown = true;\n context.error = err;\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { Observer, ObservableNotification } from './types';\nimport { isSubscription, Subscription } from './Subscription';\nimport { config } from './config';\nimport { reportUnhandledError } from './util/reportUnhandledError';\nimport { noop } from './util/noop';\nimport { nextNotification, errorNotification, COMPLETE_NOTIFICATION } from './NotificationFactories';\nimport { timeoutProvider } from './scheduler/timeoutProvider';\nimport { captureError } from './util/errorContext';\n\n/**\n * Implements the {@link Observer} interface and extends the\n * {@link Subscription} class. While the {@link Observer} is the public API for\n * consuming the values of an {@link Observable}, all Observers get converted to\n * a Subscriber, in order to provide Subscription-like capabilities such as\n * `unsubscribe`. Subscriber is a common type in RxJS, and crucial for\n * implementing operators, but it is rarely used as a public API.\n */\nexport class Subscriber extends Subscription implements Observer {\n /**\n * A static factory for a Subscriber, given a (potentially partial) definition\n * of an Observer.\n * @param next The `next` callback of an Observer.\n * @param error The `error` callback of an\n * Observer.\n * @param complete The `complete` callback of an\n * Observer.\n * @return A Subscriber wrapping the (partially defined)\n * Observer represented by the given arguments.\n * @deprecated Do not use. Will be removed in v8. There is no replacement for this\n * method, and there is no reason to be creating instances of `Subscriber` directly.\n * If you have a specific use case, please file an issue.\n */\n static create(next?: (x?: T) => void, error?: (e?: any) => void, complete?: () => void): Subscriber {\n return new SafeSubscriber(next, error, complete);\n }\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n protected isStopped: boolean = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n protected destination: Subscriber | Observer; // this `any` is the escape hatch to erase extra type param (e.g. R)\n\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n * There is no reason to directly create an instance of Subscriber. 
This type is exported for typings reasons.\n */\n constructor(destination?: Subscriber | Observer) {\n super();\n if (destination) {\n this.destination = destination;\n // Automatically chain subscriptions together here.\n // if destination is a Subscription, then it is a Subscriber.\n if (isSubscription(destination)) {\n destination.add(this);\n }\n } else {\n this.destination = EMPTY_OBSERVER;\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `next` from\n * the Observable, with a value. The Observable may call this method 0 or more\n * times.\n * @param value The `next` value.\n */\n next(value: T): void {\n if (this.isStopped) {\n handleStoppedNotification(nextNotification(value), this);\n } else {\n this._next(value!);\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `error` from\n * the Observable, with an attached `Error`. Notifies the Observer that\n * the Observable has experienced an error condition.\n * @param err The `error` exception.\n */\n error(err?: any): void {\n if (this.isStopped) {\n handleStoppedNotification(errorNotification(err), this);\n } else {\n this.isStopped = true;\n this._error(err);\n }\n }\n\n /**\n * The {@link Observer} callback to receive a valueless notification of type\n * `complete` from the Observable. Notifies the Observer that the Observable\n * has finished sending push-based notifications.\n */\n complete(): void {\n if (this.isStopped) {\n handleStoppedNotification(COMPLETE_NOTIFICATION, this);\n } else {\n this.isStopped = true;\n this._complete();\n }\n }\n\n unsubscribe(): void {\n if (!this.closed) {\n this.isStopped = true;\n super.unsubscribe();\n this.destination = null!;\n }\n }\n\n protected _next(value: T): void {\n this.destination.next(value);\n }\n\n protected _error(err: any): void {\n try {\n this.destination.error(err);\n } finally {\n this.unsubscribe();\n }\n }\n\n protected _complete(): void {\n try {\n this.destination.complete();\n } finally {\n this.unsubscribe();\n }\n }\n}\n\n/**\n * This bind is captured here because we want to be able to have\n * compatibility with monoid libraries that tend to use a method named\n * `bind`. In particular, a library called Monio requires this.\n */\nconst _bind = Function.prototype.bind;\n\nfunction bind any>(fn: Fn, thisArg: any): Fn {\n return _bind.call(fn, thisArg);\n}\n\n/**\n * Internal optimization only, DO NOT EXPOSE.\n * @internal\n */\nclass ConsumerObserver implements Observer {\n constructor(private partialObserver: Partial>) {}\n\n next(value: T): void {\n const { partialObserver } = this;\n if (partialObserver.next) {\n try {\n partialObserver.next(value);\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n\n error(err: any): void {\n const { partialObserver } = this;\n if (partialObserver.error) {\n try {\n partialObserver.error(err);\n } catch (error) {\n handleUnhandledError(error);\n }\n } else {\n handleUnhandledError(err);\n }\n }\n\n complete(): void {\n const { partialObserver } = this;\n if (partialObserver.complete) {\n try {\n partialObserver.complete();\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n}\n\nexport class SafeSubscriber extends Subscriber {\n constructor(\n observerOrNext?: Partial> | ((value: T) => void) | null,\n error?: ((e?: any) => void) | null,\n complete?: (() => void) | null\n ) {\n super();\n\n let partialObserver: Partial>;\n if (isFunction(observerOrNext) || !observerOrNext) {\n // The first argument is a function, not an observer. 
The next\n // two arguments *could* be observers, or they could be empty.\n partialObserver = {\n next: (observerOrNext ?? undefined) as ((value: T) => void) | undefined,\n error: error ?? undefined,\n complete: complete ?? undefined,\n };\n } else {\n // The first argument is a partial observer.\n let context: any;\n if (this && config.useDeprecatedNextContext) {\n // This is a deprecated path that made `this.unsubscribe()` available in\n // next handler functions passed to subscribe. This only exists behind a flag\n // now, as it is *very* slow.\n context = Object.create(observerOrNext);\n context.unsubscribe = () => this.unsubscribe();\n partialObserver = {\n next: observerOrNext.next && bind(observerOrNext.next, context),\n error: observerOrNext.error && bind(observerOrNext.error, context),\n complete: observerOrNext.complete && bind(observerOrNext.complete, context),\n };\n } else {\n // The \"normal\" path. Just use the partial observer directly.\n partialObserver = observerOrNext;\n }\n }\n\n // Wrap the partial observer to ensure it's a full observer, and\n // make sure proper error handling is accounted for.\n this.destination = new ConsumerObserver(partialObserver);\n }\n}\n\nfunction handleUnhandledError(error: any) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n captureError(error);\n } else {\n // Ideal path, we report this as an unhandled error,\n // which is thrown on a new call stack.\n reportUnhandledError(error);\n }\n}\n\n/**\n * An error handler used when no error handler was supplied\n * to the SafeSubscriber -- meaning no error handler was supplied\n * do the `subscribe` call on our observable.\n * @param err The error to handle\n */\nfunction defaultErrorHandler(err: any) {\n throw err;\n}\n\n/**\n * A handler for notifications that cannot be sent to a stopped subscriber.\n * @param notification The notification being sent.\n * @param subscriber The stopped subscriber.\n */\nfunction handleStoppedNotification(notification: ObservableNotification, subscriber: Subscriber) {\n const { onStoppedNotification } = config;\n onStoppedNotification && timeoutProvider.setTimeout(() => onStoppedNotification(notification, subscriber));\n}\n\n/**\n * The observer used as a stub for subscriptions where the user did not\n * pass any arguments to `subscribe`. Comes with the default error handling\n * behavior.\n */\nexport const EMPTY_OBSERVER: Readonly> & { closed: true } = {\n closed: true,\n next: noop,\n error: defaultErrorHandler,\n complete: noop,\n};\n", "/**\n * Symbol.observable or a string \"@@observable\". Used for interop\n *\n * @deprecated We will no longer be exporting this symbol in upcoming versions of RxJS.\n * Instead polyfill and use Symbol.observable directly *or* use https://www.npmjs.com/package/symbol-observable\n */\nexport const observable: string | symbol = (() => (typeof Symbol === 'function' && Symbol.observable) || '@@observable')();\n", "/**\n * This function takes one parameter and just returns it. 
Simply put,\n * this is like `(x: T): T => x`.\n *\n * ## Examples\n *\n * This is useful in some cases when using things like `mergeMap`\n *\n * ```ts\n * import { interval, take, map, range, mergeMap, identity } from 'rxjs';\n *\n * const source$ = interval(1000).pipe(take(5));\n *\n * const result$ = source$.pipe(\n * map(i => range(i)),\n * mergeMap(identity) // same as mergeMap(x => x)\n * );\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * Or when you want to selectively apply an operator\n *\n * ```ts\n * import { interval, take, identity } from 'rxjs';\n *\n * const shouldLimit = () => Math.random() < 0.5;\n *\n * const source$ = interval(1000);\n *\n * const result$ = source$.pipe(shouldLimit() ? take(5) : identity);\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * @param x Any value that is returned by this function\n * @returns The value passed as the first parameter to this function\n */\nexport function identity(x: T): T {\n return x;\n}\n", "import { identity } from './identity';\nimport { UnaryFunction } from '../types';\n\nexport function pipe(): typeof identity;\nexport function pipe(fn1: UnaryFunction): UnaryFunction;\nexport function pipe(fn1: UnaryFunction, fn2: UnaryFunction): UnaryFunction;\nexport function pipe(fn1: UnaryFunction, fn2: UnaryFunction, fn3: UnaryFunction): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction,\n fn9: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction,\n fn9: UnaryFunction,\n ...fns: UnaryFunction[]\n): UnaryFunction;\n\n/**\n * pipe() can be called on one or more functions, each of which can take one argument (\"UnaryFunction\")\n * and uses it to return a value.\n * It returns a function that takes one argument, passes it to the first UnaryFunction, and then\n * passes the result to the next one, passes that result to the next one, and so on. 
\n */\nexport function pipe(...fns: Array>): UnaryFunction {\n return pipeFromArray(fns);\n}\n\n/** @internal */\nexport function pipeFromArray(fns: Array>): UnaryFunction {\n if (fns.length === 0) {\n return identity as UnaryFunction;\n }\n\n if (fns.length === 1) {\n return fns[0];\n }\n\n return function piped(input: T): R {\n return fns.reduce((prev: any, fn: UnaryFunction) => fn(prev), input as any);\n };\n}\n", "import { Operator } from './Operator';\nimport { SafeSubscriber, Subscriber } from './Subscriber';\nimport { isSubscription, Subscription } from './Subscription';\nimport { TeardownLogic, OperatorFunction, Subscribable, Observer } from './types';\nimport { observable as Symbol_observable } from './symbol/observable';\nimport { pipeFromArray } from './util/pipe';\nimport { config } from './config';\nimport { isFunction } from './util/isFunction';\nimport { errorContext } from './util/errorContext';\n\n/**\n * A representation of any set of values over any amount of time. This is the most basic building block\n * of RxJS.\n */\nexport class Observable implements Subscribable {\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n */\n source: Observable | undefined;\n\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n */\n operator: Operator | undefined;\n\n /**\n * @param subscribe The function that is called when the Observable is\n * initially subscribed to. This function is given a Subscriber, to which new values\n * can be `next`ed, or an `error` method can be called to raise an error, or\n * `complete` can be called to notify of a successful completion.\n */\n constructor(subscribe?: (this: Observable, subscriber: Subscriber) => TeardownLogic) {\n if (subscribe) {\n this._subscribe = subscribe;\n }\n }\n\n // HACK: Since TypeScript inherits static properties too, we have to\n // fight against TypeScript here so Subject can have a different static create signature\n /**\n * Creates a new Observable by calling the Observable constructor\n * @param subscribe the subscriber function to be passed to the Observable constructor\n * @return A new observable.\n * @deprecated Use `new Observable()` instead. Will be removed in v8.\n */\n static create: (...args: any[]) => any = (subscribe?: (subscriber: Subscriber) => TeardownLogic) => {\n return new Observable(subscribe);\n };\n\n /**\n * Creates a new Observable, with this Observable instance as the source, and the passed\n * operator defined as the new observable's operator.\n * @param operator the operator defining the operation to take on the observable\n * @return A new observable with the Operator applied.\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n * If you have implemented an operator using `lift`, it is recommended that you create an\n * operator by simply returning `new Observable()` directly. See \"Creating new operators from\n * scratch\" section here: https://rxjs.dev/guide/operators\n */\n lift(operator?: Operator): Observable {\n const observable = new Observable();\n observable.source = this;\n observable.operator = operator;\n return observable;\n }\n\n subscribe(observerOrNext?: Partial> | ((value: T) => void)): Subscription;\n /** @deprecated Instead of passing separate callback arguments, use an observer argument. Signatures taking separate callback arguments will be removed in v8. 
Details: https://rxjs.dev/deprecations/subscribe-arguments */\n subscribe(next?: ((value: T) => void) | null, error?: ((error: any) => void) | null, complete?: (() => void) | null): Subscription;\n /**\n * Invokes an execution of an Observable and registers Observer handlers for notifications it will emit.\n *\n * Use it when you have all these Observables, but still nothing is happening.\n *\n * `subscribe` is not a regular operator, but a method that calls Observable's internal `subscribe` function. It\n * might be for example a function that you passed to Observable's constructor, but most of the time it is\n * a library implementation, which defines what will be emitted by an Observable, and when it be will emitted. This means\n * that calling `subscribe` is actually the moment when Observable starts its work, not when it is created, as it is often\n * the thought.\n *\n * Apart from starting the execution of an Observable, this method allows you to listen for values\n * that an Observable emits, as well as for when it completes or errors. You can achieve this in two\n * of the following ways.\n *\n * The first way is creating an object that implements {@link Observer} interface. It should have methods\n * defined by that interface, but note that it should be just a regular JavaScript object, which you can create\n * yourself in any way you want (ES6 class, classic function constructor, object literal etc.). In particular, do\n * not attempt to use any RxJS implementation details to create Observers - you don't need them. Remember also\n * that your object does not have to implement all methods. If you find yourself creating a method that doesn't\n * do anything, you can simply omit it. Note however, if the `error` method is not provided and an error happens,\n * it will be thrown asynchronously. Errors thrown asynchronously cannot be caught using `try`/`catch`. Instead,\n * use the {@link onUnhandledError} configuration option or use a runtime handler (like `window.onerror` or\n * `process.on('error)`) to be notified of unhandled errors. Because of this, it's recommended that you provide\n * an `error` method to avoid missing thrown errors.\n *\n * The second way is to give up on Observer object altogether and simply provide callback functions in place of its methods.\n * This means you can provide three functions as arguments to `subscribe`, where the first function is equivalent\n * of a `next` method, the second of an `error` method and the third of a `complete` method. Just as in case of an Observer,\n * if you do not need to listen for something, you can omit a function by passing `undefined` or `null`,\n * since `subscribe` recognizes these functions by where they were placed in function call. When it comes\n * to the `error` function, as with an Observer, if not provided, errors emitted by an Observable will be thrown asynchronously.\n *\n * You can, however, subscribe with no parameters at all. This may be the case where you're not interested in terminal events\n * and you also handled emissions internally by using operators (e.g. using `tap`).\n *\n * Whichever style of calling `subscribe` you use, in both cases it returns a Subscription object.\n * This object allows you to call `unsubscribe` on it, which in turn will stop the work that an Observable does and will clean\n * up all resources that an Observable used. 
Note that cancelling a subscription will not call `complete` callback\n * provided to `subscribe` function, which is reserved for a regular completion signal that comes from an Observable.\n *\n * Remember that callbacks provided to `subscribe` are not guaranteed to be called asynchronously.\n * It is an Observable itself that decides when these functions will be called. For example {@link of}\n * by default emits all its values synchronously. Always check documentation for how given Observable\n * will behave when subscribed and if its default behavior can be modified with a `scheduler`.\n *\n * #### Examples\n *\n * Subscribe with an {@link guide/observer Observer}\n *\n * ```ts\n * import { of } from 'rxjs';\n *\n * const sumObserver = {\n * sum: 0,\n * next(value) {\n * console.log('Adding: ' + value);\n * this.sum = this.sum + value;\n * },\n * error() {\n * // We actually could just remove this method,\n * // since we do not really care about errors right now.\n * },\n * complete() {\n * console.log('Sum equals: ' + this.sum);\n * }\n * };\n *\n * of(1, 2, 3) // Synchronously emits 1, 2, 3 and then completes.\n * .subscribe(sumObserver);\n *\n * // Logs:\n * // 'Adding: 1'\n * // 'Adding: 2'\n * // 'Adding: 3'\n * // 'Sum equals: 6'\n * ```\n *\n * Subscribe with functions ({@link deprecations/subscribe-arguments deprecated})\n *\n * ```ts\n * import { of } from 'rxjs'\n *\n * let sum = 0;\n *\n * of(1, 2, 3).subscribe(\n * value => {\n * console.log('Adding: ' + value);\n * sum = sum + value;\n * },\n * undefined,\n * () => console.log('Sum equals: ' + sum)\n * );\n *\n * // Logs:\n * // 'Adding: 1'\n * // 'Adding: 2'\n * // 'Adding: 3'\n * // 'Sum equals: 6'\n * ```\n *\n * Cancel a subscription\n *\n * ```ts\n * import { interval } from 'rxjs';\n *\n * const subscription = interval(1000).subscribe({\n * next(num) {\n * console.log(num)\n * },\n * complete() {\n * // Will not be called, even when cancelling subscription.\n * console.log('completed!');\n * }\n * });\n *\n * setTimeout(() => {\n * subscription.unsubscribe();\n * console.log('unsubscribed!');\n * }, 2500);\n *\n * // Logs:\n * // 0 after 1s\n * // 1 after 2s\n * // 'unsubscribed!' after 2.5s\n * ```\n *\n * @param observerOrNext Either an {@link Observer} with some or all callback methods,\n * or the `next` handler that is called for each value emitted from the subscribed Observable.\n * @param error A handler for a terminal event resulting from an error. If no error handler is provided,\n * the error will be thrown asynchronously as unhandled.\n * @param complete A handler for a terminal event resulting from successful completion.\n * @return A subscription reference to the registered handlers.\n */\n subscribe(\n observerOrNext?: Partial> | ((value: T) => void) | null,\n error?: ((error: any) => void) | null,\n complete?: (() => void) | null\n ): Subscription {\n const subscriber = isSubscriber(observerOrNext) ? observerOrNext : new SafeSubscriber(observerOrNext, error, complete);\n\n errorContext(() => {\n const { operator, source } = this;\n subscriber.add(\n operator\n ? // We're dealing with a subscription in the\n // operator chain to one of our lifted operators.\n operator.call(subscriber, source)\n : source\n ? // If `source` has a value, but `operator` does not, something that\n // had intimate knowledge of our API, like our `Subject`, must have\n // set it. 
We're going to just call `_subscribe` directly.\n this._subscribe(subscriber)\n : // In all other cases, we're likely wrapping a user-provided initializer\n // function, so we need to catch errors and handle them appropriately.\n this._trySubscribe(subscriber)\n );\n });\n\n return subscriber;\n }\n\n /** @internal */\n protected _trySubscribe(sink: Subscriber): TeardownLogic {\n try {\n return this._subscribe(sink);\n } catch (err) {\n // We don't need to return anything in this case,\n // because it's just going to try to `add()` to a subscription\n // above.\n sink.error(err);\n }\n }\n\n /**\n * Used as a NON-CANCELLABLE means of subscribing to an observable, for use with\n * APIs that expect promises, like `async/await`. You cannot unsubscribe from this.\n *\n * **WARNING**: Only use this with observables you *know* will complete. If the source\n * observable does not complete, you will end up with a promise that is hung up, and\n * potentially all of the state of an async function hanging out in memory. To avoid\n * this situation, look into adding something like {@link timeout}, {@link take},\n * {@link takeWhile}, or {@link takeUntil} amongst others.\n *\n * #### Example\n *\n * ```ts\n * import { interval, take } from 'rxjs';\n *\n * const source$ = interval(1000).pipe(take(4));\n *\n * async function getTotal() {\n * let total = 0;\n *\n * await source$.forEach(value => {\n * total += value;\n * console.log('observable -> ' + value);\n * });\n *\n * return total;\n * }\n *\n * getTotal().then(\n * total => console.log('Total: ' + total)\n * );\n *\n * // Expected:\n * // 'observable -> 0'\n * // 'observable -> 1'\n * // 'observable -> 2'\n * // 'observable -> 3'\n * // 'Total: 6'\n * ```\n *\n * @param next A handler for each value emitted by the observable.\n * @return A promise that either resolves on observable completion or\n * rejects with the handled error.\n */\n forEach(next: (value: T) => void): Promise;\n\n /**\n * @param next a handler for each value emitted by the observable\n * @param promiseCtor a constructor function used to instantiate the Promise\n * @return a promise that either resolves on observable completion or\n * rejects with the handled error\n * @deprecated Passing a Promise constructor will no longer be available\n * in upcoming versions of RxJS. This is because it adds weight to the library, for very\n * little benefit. If you need this functionality, it is recommended that you either\n * polyfill Promise, or you create an adapter to convert the returned native promise\n * to whatever promise implementation you wanted. 
Will be removed in v8.\n */\n forEach(next: (value: T) => void, promiseCtor: PromiseConstructorLike): Promise;\n\n forEach(next: (value: T) => void, promiseCtor?: PromiseConstructorLike): Promise {\n promiseCtor = getPromiseCtor(promiseCtor);\n\n return new promiseCtor((resolve, reject) => {\n const subscriber = new SafeSubscriber({\n next: (value) => {\n try {\n next(value);\n } catch (err) {\n reject(err);\n subscriber.unsubscribe();\n }\n },\n error: reject,\n complete: resolve,\n });\n this.subscribe(subscriber);\n }) as Promise;\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): TeardownLogic {\n return this.source?.subscribe(subscriber);\n }\n\n /**\n * An interop point defined by the es7-observable spec https://github.com/zenparsing/es-observable\n * @return This instance of the observable.\n */\n [Symbol_observable]() {\n return this;\n }\n\n /* tslint:disable:max-line-length */\n pipe(): Observable;\n pipe(op1: OperatorFunction): Observable;\n pipe(op1: OperatorFunction, op2: OperatorFunction): Observable;\n pipe(op1: OperatorFunction, op2: OperatorFunction, op3: OperatorFunction): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction,\n op9: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction,\n op9: OperatorFunction,\n ...operations: OperatorFunction[]\n ): Observable;\n /* tslint:enable:max-line-length */\n\n /**\n * Used to stitch together functional operators into a chain.\n *\n * ## Example\n *\n * ```ts\n * import { interval, filter, map, scan } from 'rxjs';\n *\n * interval(1000)\n * .pipe(\n * filter(x => x % 2 === 0),\n * map(x => x + x),\n * scan((acc, x) => acc + x)\n * )\n * .subscribe(x => console.log(x));\n * ```\n *\n * @return The Observable result of all the operators having been called\n * in the order they were passed in.\n */\n pipe(...operations: OperatorFunction[]): Observable {\n return pipeFromArray(operations)(this);\n }\n\n /* tslint:disable:max-line-length */\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(): Promise;\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. 
Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(PromiseCtor: typeof Promise): Promise;\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(PromiseCtor: PromiseConstructorLike): Promise;\n /* tslint:enable:max-line-length */\n\n /**\n * Subscribe to this Observable and get a Promise resolving on\n * `complete` with the last emission (if any).\n *\n * **WARNING**: Only use this with observables you *know* will complete. If the source\n * observable does not complete, you will end up with a promise that is hung up, and\n * potentially all of the state of an async function hanging out in memory. To avoid\n * this situation, look into adding something like {@link timeout}, {@link take},\n * {@link takeWhile}, or {@link takeUntil} amongst others.\n *\n * @param [promiseCtor] a constructor function used to instantiate\n * the Promise\n * @return A Promise that resolves with the last value emit, or\n * rejects on an error. If there were no emissions, Promise\n * resolves with undefined.\n * @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise\n */\n toPromise(promiseCtor?: PromiseConstructorLike): Promise {\n promiseCtor = getPromiseCtor(promiseCtor);\n\n return new promiseCtor((resolve, reject) => {\n let value: T | undefined;\n this.subscribe(\n (x: T) => (value = x),\n (err: any) => reject(err),\n () => resolve(value)\n );\n }) as Promise;\n }\n}\n\n/**\n * Decides between a passed promise constructor from consuming code,\n * A default configured promise constructor, and the native promise\n * constructor and returns it. If nothing can be found, it will throw\n * an error.\n * @param promiseCtor The optional promise constructor to passed by consuming code\n */\nfunction getPromiseCtor(promiseCtor: PromiseConstructorLike | undefined) {\n return promiseCtor ?? config.Promise ?? Promise;\n}\n\nfunction isObserver(value: any): value is Observer {\n return value && isFunction(value.next) && isFunction(value.error) && isFunction(value.complete);\n}\n\nfunction isSubscriber(value: any): value is Subscriber {\n return (value && value instanceof Subscriber) || (isObserver(value) && isSubscription(value));\n}\n", "import { Observable } from '../Observable';\nimport { Subscriber } from '../Subscriber';\nimport { OperatorFunction } from '../types';\nimport { isFunction } from './isFunction';\n\n/**\n * Used to determine if an object is an Observable with a lift function.\n */\nexport function hasLift(source: any): source is { lift: InstanceType['lift'] } {\n return isFunction(source?.lift);\n}\n\n/**\n * Creates an `OperatorFunction`. 
Used to define operators throughout the library in a concise way.\n * @param init The logic to connect the liftedSource to the subscriber at the moment of subscription.\n */\nexport function operate(\n init: (liftedSource: Observable, subscriber: Subscriber) => (() => void) | void\n): OperatorFunction {\n return (source: Observable) => {\n if (hasLift(source)) {\n return source.lift(function (this: Subscriber, liftedSource: Observable) {\n try {\n return init(liftedSource, this);\n } catch (err) {\n this.error(err);\n }\n });\n }\n throw new TypeError('Unable to lift unknown Observable type');\n };\n}\n", "import { Subscriber } from '../Subscriber';\n\n/**\n * Creates an instance of an `OperatorSubscriber`.\n * @param destination The downstream subscriber.\n * @param onNext Handles next values, only called if this subscriber is not stopped or closed. Any\n * error that occurs in this function is caught and sent to the `error` method of this subscriber.\n * @param onError Handles errors from the subscription, any errors that occur in this handler are caught\n * and send to the `destination` error handler.\n * @param onComplete Handles completion notification from the subscription. Any errors that occur in\n * this handler are sent to the `destination` error handler.\n * @param onFinalize Additional teardown logic here. This will only be called on teardown if the\n * subscriber itself is not already closed. This is called after all other teardown logic is executed.\n */\nexport function createOperatorSubscriber(\n destination: Subscriber,\n onNext?: (value: T) => void,\n onComplete?: () => void,\n onError?: (err: any) => void,\n onFinalize?: () => void\n): Subscriber {\n return new OperatorSubscriber(destination, onNext, onComplete, onError, onFinalize);\n}\n\n/**\n * A generic helper for allowing operators to be created with a Subscriber and\n * use closures to capture necessary state from the operator function itself.\n */\nexport class OperatorSubscriber extends Subscriber {\n /**\n * Creates an instance of an `OperatorSubscriber`.\n * @param destination The downstream subscriber.\n * @param onNext Handles next values, only called if this subscriber is not stopped or closed. Any\n * error that occurs in this function is caught and sent to the `error` method of this subscriber.\n * @param onError Handles errors from the subscription, any errors that occur in this handler are caught\n * and send to the `destination` error handler.\n * @param onComplete Handles completion notification from the subscription. Any errors that occur in\n * this handler are sent to the `destination` error handler.\n * @param onFinalize Additional finalization logic here. This will only be called on finalization if the\n * subscriber itself is not already closed. This is called after all other finalization logic is executed.\n * @param shouldUnsubscribe An optional check to see if an unsubscribe call should truly unsubscribe.\n * NOTE: This currently **ONLY** exists to support the strange behavior of {@link groupBy}, where unsubscription\n * to the resulting observable does not actually disconnect from the source if there are active subscriptions\n * to any grouped observable. 
(DO NOT EXPOSE OR USE EXTERNALLY!!!)\n */\n constructor(\n destination: Subscriber,\n onNext?: (value: T) => void,\n onComplete?: () => void,\n onError?: (err: any) => void,\n private onFinalize?: () => void,\n private shouldUnsubscribe?: () => boolean\n ) {\n // It's important - for performance reasons - that all of this class's\n // members are initialized and that they are always initialized in the same\n // order. This will ensure that all OperatorSubscriber instances have the\n // same hidden class in V8. This, in turn, will help keep the number of\n // hidden classes involved in property accesses within the base class as\n // low as possible. If the number of hidden classes involved exceeds four,\n // the property accesses will become megamorphic and performance penalties\n // will be incurred - i.e. inline caches won't be used.\n //\n // The reasons for ensuring all instances have the same hidden class are\n // further discussed in this blog post from Benedikt Meurer:\n // https://benediktmeurer.de/2018/03/23/impact-of-polymorphism-on-component-based-frameworks-like-react/\n super(destination);\n this._next = onNext\n ? function (this: OperatorSubscriber, value: T) {\n try {\n onNext(value);\n } catch (err) {\n destination.error(err);\n }\n }\n : super._next;\n this._error = onError\n ? function (this: OperatorSubscriber, err: any) {\n try {\n onError(err);\n } catch (err) {\n // Send any errors that occur down stream.\n destination.error(err);\n } finally {\n // Ensure finalization.\n this.unsubscribe();\n }\n }\n : super._error;\n this._complete = onComplete\n ? function (this: OperatorSubscriber) {\n try {\n onComplete();\n } catch (err) {\n // Send any errors that occur down stream.\n destination.error(err);\n } finally {\n // Ensure finalization.\n this.unsubscribe();\n }\n }\n : super._complete;\n }\n\n unsubscribe() {\n if (!this.shouldUnsubscribe || this.shouldUnsubscribe()) {\n const { closed } = this;\n super.unsubscribe();\n // Execute additional teardown if we have any and we didn't already do so.\n !closed && this.onFinalize?.();\n }\n }\n}\n", "import { Subscription } from '../Subscription';\n\ninterface AnimationFrameProvider {\n schedule(callback: FrameRequestCallback): Subscription;\n requestAnimationFrame: typeof requestAnimationFrame;\n cancelAnimationFrame: typeof cancelAnimationFrame;\n delegate:\n | {\n requestAnimationFrame: typeof requestAnimationFrame;\n cancelAnimationFrame: typeof cancelAnimationFrame;\n }\n | undefined;\n}\n\nexport const animationFrameProvider: AnimationFrameProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n schedule(callback) {\n let request = requestAnimationFrame;\n let cancel: typeof cancelAnimationFrame | undefined = cancelAnimationFrame;\n const { delegate } = animationFrameProvider;\n if (delegate) {\n request = delegate.requestAnimationFrame;\n cancel = delegate.cancelAnimationFrame;\n }\n const handle = request((timestamp) => {\n // Clear the cancel function. 
The request has been fulfilled, so\n // attempting to cancel the request upon unsubscription would be\n // pointless.\n cancel = undefined;\n callback(timestamp);\n });\n return new Subscription(() => cancel?.(handle));\n },\n requestAnimationFrame(...args) {\n const { delegate } = animationFrameProvider;\n return (delegate?.requestAnimationFrame || requestAnimationFrame)(...args);\n },\n cancelAnimationFrame(...args) {\n const { delegate } = animationFrameProvider;\n return (delegate?.cancelAnimationFrame || cancelAnimationFrame)(...args);\n },\n delegate: undefined,\n};\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface ObjectUnsubscribedError extends Error {}\n\nexport interface ObjectUnsubscribedErrorCtor {\n /**\n * @deprecated Internal implementation detail. Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (): ObjectUnsubscribedError;\n}\n\n/**\n * An error thrown when an action is invalid because the object has been\n * unsubscribed.\n *\n * @see {@link Subject}\n * @see {@link BehaviorSubject}\n *\n * @class ObjectUnsubscribedError\n */\nexport const ObjectUnsubscribedError: ObjectUnsubscribedErrorCtor = createErrorClass(\n (_super) =>\n function ObjectUnsubscribedErrorImpl(this: any) {\n _super(this);\n this.name = 'ObjectUnsubscribedError';\n this.message = 'object unsubscribed';\n }\n);\n", "import { Operator } from './Operator';\nimport { Observable } from './Observable';\nimport { Subscriber } from './Subscriber';\nimport { Subscription, EMPTY_SUBSCRIPTION } from './Subscription';\nimport { Observer, SubscriptionLike, TeardownLogic } from './types';\nimport { ObjectUnsubscribedError } from './util/ObjectUnsubscribedError';\nimport { arrRemove } from './util/arrRemove';\nimport { errorContext } from './util/errorContext';\n\n/**\n * A Subject is a special type of Observable that allows values to be\n * multicasted to many Observers. Subjects are like EventEmitters.\n *\n * Every Subject is an Observable and an Observer. You can subscribe to a\n * Subject, and you can call next to feed values as well as error and complete.\n */\nexport class Subject extends Observable implements SubscriptionLike {\n closed = false;\n\n private currentObservers: Observer[] | null = null;\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n observers: Observer[] = [];\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n isStopped = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n hasError = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n thrownError: any = null;\n\n /**\n * Creates a \"subject\" by basically gluing an observer to an observable.\n *\n * @deprecated Recommended you do not use. Will be removed at some point in the future. Plans for replacement still under discussion.\n */\n static create: (...args: any[]) => any = (destination: Observer, source: Observable): AnonymousSubject => {\n return new AnonymousSubject(destination, source);\n };\n\n constructor() {\n // NOTE: This must be here to obscure Observable's constructor.\n super();\n }\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. 
*/\n lift(operator: Operator): Observable {\n const subject = new AnonymousSubject(this, this);\n subject.operator = operator as any;\n return subject as any;\n }\n\n /** @internal */\n protected _throwIfClosed() {\n if (this.closed) {\n throw new ObjectUnsubscribedError();\n }\n }\n\n next(value: T) {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n if (!this.currentObservers) {\n this.currentObservers = Array.from(this.observers);\n }\n for (const observer of this.currentObservers) {\n observer.next(value);\n }\n }\n });\n }\n\n error(err: any) {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n this.hasError = this.isStopped = true;\n this.thrownError = err;\n const { observers } = this;\n while (observers.length) {\n observers.shift()!.error(err);\n }\n }\n });\n }\n\n complete() {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n this.isStopped = true;\n const { observers } = this;\n while (observers.length) {\n observers.shift()!.complete();\n }\n }\n });\n }\n\n unsubscribe() {\n this.isStopped = this.closed = true;\n this.observers = this.currentObservers = null!;\n }\n\n get observed() {\n return this.observers?.length > 0;\n }\n\n /** @internal */\n protected _trySubscribe(subscriber: Subscriber): TeardownLogic {\n this._throwIfClosed();\n return super._trySubscribe(subscriber);\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n this._throwIfClosed();\n this._checkFinalizedStatuses(subscriber);\n return this._innerSubscribe(subscriber);\n }\n\n /** @internal */\n protected _innerSubscribe(subscriber: Subscriber) {\n const { hasError, isStopped, observers } = this;\n if (hasError || isStopped) {\n return EMPTY_SUBSCRIPTION;\n }\n this.currentObservers = null;\n observers.push(subscriber);\n return new Subscription(() => {\n this.currentObservers = null;\n arrRemove(observers, subscriber);\n });\n }\n\n /** @internal */\n protected _checkFinalizedStatuses(subscriber: Subscriber) {\n const { hasError, thrownError, isStopped } = this;\n if (hasError) {\n subscriber.error(thrownError);\n } else if (isStopped) {\n subscriber.complete();\n }\n }\n\n /**\n * Creates a new Observable with this Subject as the source. You can do this\n * to create custom Observer-side logic of the Subject and conceal it from\n * code that uses the Observable.\n * @return Observable that this Subject casts to.\n */\n asObservable(): Observable {\n const observable: any = new Observable();\n observable.source = this;\n return observable;\n }\n}\n\nexport class AnonymousSubject extends Subject {\n constructor(\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n public destination?: Observer,\n source?: Observable\n ) {\n super();\n this.source = source;\n }\n\n next(value: T) {\n this.destination?.next?.(value);\n }\n\n error(err: any) {\n this.destination?.error?.(err);\n }\n\n complete() {\n this.destination?.complete?.();\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n return this.source?.subscribe(subscriber) ?? 
EMPTY_SUBSCRIPTION;\n }\n}\n", "import { Subject } from './Subject';\nimport { Subscriber } from './Subscriber';\nimport { Subscription } from './Subscription';\n\n/**\n * A variant of Subject that requires an initial value and emits its current\n * value whenever it is subscribed to.\n */\nexport class BehaviorSubject extends Subject {\n constructor(private _value: T) {\n super();\n }\n\n get value(): T {\n return this.getValue();\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n const subscription = super._subscribe(subscriber);\n !subscription.closed && subscriber.next(this._value);\n return subscription;\n }\n\n getValue(): T {\n const { hasError, thrownError, _value } = this;\n if (hasError) {\n throw thrownError;\n }\n this._throwIfClosed();\n return _value;\n }\n\n next(value: T): void {\n super.next((this._value = value));\n }\n}\n", "import { TimestampProvider } from '../types';\n\ninterface DateTimestampProvider extends TimestampProvider {\n delegate: TimestampProvider | undefined;\n}\n\nexport const dateTimestampProvider: DateTimestampProvider = {\n now() {\n // Use the variable rather than `this` so that the function can be called\n // without being bound to the provider.\n return (dateTimestampProvider.delegate || Date).now();\n },\n delegate: undefined,\n};\n", "import { Subject } from './Subject';\nimport { TimestampProvider } from './types';\nimport { Subscriber } from './Subscriber';\nimport { Subscription } from './Subscription';\nimport { dateTimestampProvider } from './scheduler/dateTimestampProvider';\n\n/**\n * A variant of {@link Subject} that \"replays\" old values to new subscribers by emitting them when they first subscribe.\n *\n * `ReplaySubject` has an internal buffer that will store a specified number of values that it has observed. Like `Subject`,\n * `ReplaySubject` \"observes\" values by having them passed to its `next` method. When it observes a value, it will store that\n * value for a time determined by the configuration of the `ReplaySubject`, as passed to its constructor.\n *\n * When a new subscriber subscribes to the `ReplaySubject` instance, it will synchronously emit all values in its buffer in\n * a First-In-First-Out (FIFO) manner. The `ReplaySubject` will also complete, if it has observed completion; and it will\n * error if it has observed an error.\n *\n * There are two main configuration items to be concerned with:\n *\n * 1. `bufferSize` - This will determine how many items are stored in the buffer, defaults to infinite.\n * 2. `windowTime` - The amount of time to hold a value in the buffer before removing it from the buffer.\n *\n * Both configurations may exist simultaneously. So if you would like to buffer a maximum of 3 values, as long as the values\n * are less than 2 seconds old, you could do so with a `new ReplaySubject(3, 2000)`.\n *\n * ### Differences with BehaviorSubject\n *\n * `BehaviorSubject` is similar to `new ReplaySubject(1)`, with a couple of exceptions:\n *\n * 1. `BehaviorSubject` comes \"primed\" with a single value upon construction.\n * 2. 
`ReplaySubject` will replay values, even after observing an error, where `BehaviorSubject` will not.\n *\n * @see {@link Subject}\n * @see {@link BehaviorSubject}\n * @see {@link shareReplay}\n */\nexport class ReplaySubject extends Subject {\n private _buffer: (T | number)[] = [];\n private _infiniteTimeWindow = true;\n\n /**\n * @param _bufferSize The size of the buffer to replay on subscription\n * @param _windowTime The amount of time the buffered items will stay buffered\n * @param _timestampProvider An object with a `now()` method that provides the current timestamp. This is used to\n * calculate the amount of time something has been buffered.\n */\n constructor(\n private _bufferSize = Infinity,\n private _windowTime = Infinity,\n private _timestampProvider: TimestampProvider = dateTimestampProvider\n ) {\n super();\n this._infiniteTimeWindow = _windowTime === Infinity;\n this._bufferSize = Math.max(1, _bufferSize);\n this._windowTime = Math.max(1, _windowTime);\n }\n\n next(value: T): void {\n const { isStopped, _buffer, _infiniteTimeWindow, _timestampProvider, _windowTime } = this;\n if (!isStopped) {\n _buffer.push(value);\n !_infiniteTimeWindow && _buffer.push(_timestampProvider.now() + _windowTime);\n }\n this._trimBuffer();\n super.next(value);\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n this._throwIfClosed();\n this._trimBuffer();\n\n const subscription = this._innerSubscribe(subscriber);\n\n const { _infiniteTimeWindow, _buffer } = this;\n // We use a copy here, so reentrant code does not mutate our array while we're\n // emitting it to a new subscriber.\n const copy = _buffer.slice();\n for (let i = 0; i < copy.length && !subscriber.closed; i += _infiniteTimeWindow ? 1 : 2) {\n subscriber.next(copy[i] as T);\n }\n\n this._checkFinalizedStatuses(subscriber);\n\n return subscription;\n }\n\n private _trimBuffer() {\n const { _bufferSize, _timestampProvider, _buffer, _infiniteTimeWindow } = this;\n // If we don't have an infinite buffer size, and we're over the length,\n // use splice to truncate the old buffer values off. Note that we have to\n // double the size for instances where we're not using an infinite time window\n // because we're storing the values and the timestamps in the same array.\n const adjustedBufferSize = (_infiniteTimeWindow ? 1 : 2) * _bufferSize;\n _bufferSize < Infinity && adjustedBufferSize < _buffer.length && _buffer.splice(0, _buffer.length - adjustedBufferSize);\n\n // Now, if we're not in an infinite time window, remove all values where the time is\n // older than what is allowed.\n if (!_infiniteTimeWindow) {\n const now = _timestampProvider.now();\n let last = 0;\n // Search the array for the first timestamp that isn't expired and\n // truncate the buffer up to that point.\n for (let i = 1; i < _buffer.length && (_buffer[i] as number) <= now; i += 2) {\n last = i;\n }\n last && _buffer.splice(0, last + 1);\n }\n }\n}\n", "import { Scheduler } from '../Scheduler';\nimport { Subscription } from '../Subscription';\nimport { SchedulerAction } from '../types';\n\n/**\n * A unit of work to be executed in a `scheduler`. 
An action is typically\n * created from within a {@link SchedulerLike} and an RxJS user does not need to concern\n * themselves about creating and manipulating an Action.\n *\n * ```ts\n * class Action extends Subscription {\n * new (scheduler: Scheduler, work: (state?: T) => void);\n * schedule(state?: T, delay: number = 0): Subscription;\n * }\n * ```\n */\nexport class Action extends Subscription {\n constructor(scheduler: Scheduler, work: (this: SchedulerAction, state?: T) => void) {\n super();\n }\n /**\n * Schedules this action on its parent {@link SchedulerLike} for execution. May be passed\n * some context object, `state`. May happen at some point in the future,\n * according to the `delay` parameter, if specified.\n * @param state Some contextual data that the `work` function uses when called by the\n * Scheduler.\n * @param delay Time to wait before executing the work, where the time unit is implicit\n * and defined by the Scheduler.\n * @return A subscription in order to be able to unsubscribe the scheduled work.\n */\n public schedule(state?: T, delay: number = 0): Subscription {\n return this;\n }\n}\n", "import type { TimerHandle } from './timerHandle';\ntype SetIntervalFunction = (handler: () => void, timeout?: number, ...args: any[]) => TimerHandle;\ntype ClearIntervalFunction = (handle: TimerHandle) => void;\n\ninterface IntervalProvider {\n setInterval: SetIntervalFunction;\n clearInterval: ClearIntervalFunction;\n delegate:\n | {\n setInterval: SetIntervalFunction;\n clearInterval: ClearIntervalFunction;\n }\n | undefined;\n}\n\nexport const intervalProvider: IntervalProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n setInterval(handler: () => void, timeout?: number, ...args) {\n const { delegate } = intervalProvider;\n if (delegate?.setInterval) {\n return delegate.setInterval(handler, timeout, ...args);\n }\n return setInterval(handler, timeout, ...args);\n },\n clearInterval(handle) {\n const { delegate } = intervalProvider;\n return (delegate?.clearInterval || clearInterval)(handle as any);\n },\n delegate: undefined,\n};\n", "import { Action } from './Action';\nimport { SchedulerAction } from '../types';\nimport { Subscription } from '../Subscription';\nimport { AsyncScheduler } from './AsyncScheduler';\nimport { intervalProvider } from './intervalProvider';\nimport { arrRemove } from '../util/arrRemove';\nimport { TimerHandle } from './timerHandle';\n\nexport class AsyncAction extends Action {\n public id: TimerHandle | undefined;\n public state?: T;\n // @ts-ignore: Property has no initializer and is not definitely assigned\n public delay: number;\n protected pending: boolean = false;\n\n constructor(protected scheduler: AsyncScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n public schedule(state?: T, delay: number = 0): Subscription {\n if (this.closed) {\n return this;\n }\n\n // Always replace the current state with the new state.\n this.state = state;\n\n const id = this.id;\n const scheduler = this.scheduler;\n\n //\n // Important implementation note:\n //\n // Actions only execute once by default, unless rescheduled from within the\n // scheduled callback. 
This allows us to implement single and repeat\n // actions via the same code path, without adding API surface area, as well\n // as mimic traditional recursion but across asynchronous boundaries.\n //\n // However, JS runtimes and timers distinguish between intervals achieved by\n // serial `setTimeout` calls vs. a single `setInterval` call. An interval of\n // serial `setTimeout` calls can be individually delayed, which delays\n // scheduling the next `setTimeout`, and so on. `setInterval` attempts to\n // guarantee the interval callback will be invoked more precisely to the\n // interval period, regardless of load.\n //\n // Therefore, we use `setInterval` to schedule single and repeat actions.\n // If the action reschedules itself with the same delay, the interval is not\n // canceled. If the action doesn't reschedule, or reschedules with a\n // different delay, the interval will be canceled after scheduled callback\n // execution.\n //\n if (id != null) {\n this.id = this.recycleAsyncId(scheduler, id, delay);\n }\n\n // Set the pending flag indicating that this action has been scheduled, or\n // has recursively rescheduled itself.\n this.pending = true;\n\n this.delay = delay;\n // If this action has already an async Id, don't request a new one.\n this.id = this.id ?? this.requestAsyncId(scheduler, this.id, delay);\n\n return this;\n }\n\n protected requestAsyncId(scheduler: AsyncScheduler, _id?: TimerHandle, delay: number = 0): TimerHandle {\n return intervalProvider.setInterval(scheduler.flush.bind(scheduler, this), delay);\n }\n\n protected recycleAsyncId(_scheduler: AsyncScheduler, id?: TimerHandle, delay: number | null = 0): TimerHandle | undefined {\n // If this action is rescheduled with the same delay time, don't clear the interval id.\n if (delay != null && this.delay === delay && this.pending === false) {\n return id;\n }\n // Otherwise, if the action's delay time is different from the current delay,\n // or the action has been rescheduled before it's executed, clear the interval id\n if (id != null) {\n intervalProvider.clearInterval(id);\n }\n\n return undefined;\n }\n\n /**\n * Immediately executes this action and the `work` it contains.\n */\n public execute(state: T, delay: number): any {\n if (this.closed) {\n return new Error('executing a cancelled action');\n }\n\n this.pending = false;\n const error = this._execute(state, delay);\n if (error) {\n return error;\n } else if (this.pending === false && this.id != null) {\n // Dequeue if the action didn't reschedule itself. Don't call\n // unsubscribe(), because the action could reschedule later.\n // For example:\n // ```\n // scheduler.schedule(function doWork(counter) {\n // /* ... I'm a busy worker bee ... */\n // var originalAction = this;\n // /* wait 100ms before rescheduling the action */\n // setTimeout(function () {\n // originalAction.schedule(counter + 1);\n // }, 100);\n // }, 1000);\n // ```\n this.id = this.recycleAsyncId(this.scheduler, this.id, null);\n }\n }\n\n protected _execute(state: T, _delay: number): any {\n let errored: boolean = false;\n let errorValue: any;\n try {\n this.work(state);\n } catch (e) {\n errored = true;\n // HACK: Since code elsewhere is relying on the \"truthiness\" of the\n // return here, we can't have it return \"\" or 0 or false.\n // TODO: Clean this up when we refactor schedulers mid-version-8 or so.\n errorValue = e ? 
e : new Error('Scheduled action threw falsy error');\n }\n if (errored) {\n this.unsubscribe();\n return errorValue;\n }\n }\n\n unsubscribe() {\n if (!this.closed) {\n const { id, scheduler } = this;\n const { actions } = scheduler;\n\n this.work = this.state = this.scheduler = null!;\n this.pending = false;\n\n arrRemove(actions, this);\n if (id != null) {\n this.id = this.recycleAsyncId(scheduler, id, null);\n }\n\n this.delay = null!;\n super.unsubscribe();\n }\n }\n}\n", "import { Action } from './scheduler/Action';\nimport { Subscription } from './Subscription';\nimport { SchedulerLike, SchedulerAction } from './types';\nimport { dateTimestampProvider } from './scheduler/dateTimestampProvider';\n\n/**\n * An execution context and a data structure to order tasks and schedule their\n * execution. Provides a notion of (potentially virtual) time, through the\n * `now()` getter method.\n *\n * Each unit of work in a Scheduler is called an `Action`.\n *\n * ```ts\n * class Scheduler {\n * now(): number;\n * schedule(work, delay?, state?): Subscription;\n * }\n * ```\n *\n * @deprecated Scheduler is an internal implementation detail of RxJS, and\n * should not be used directly. Rather, create your own class and implement\n * {@link SchedulerLike}. Will be made internal in v8.\n */\nexport class Scheduler implements SchedulerLike {\n public static now: () => number = dateTimestampProvider.now;\n\n constructor(private schedulerActionCtor: typeof Action, now: () => number = Scheduler.now) {\n this.now = now;\n }\n\n /**\n * A getter method that returns a number representing the current time\n * (at the time this function was called) according to the scheduler's own\n * internal clock.\n * @return A number that represents the current time. May or may not\n * have a relation to wall-clock time. May or may not refer to a time unit\n * (e.g. milliseconds).\n */\n public now: () => number;\n\n /**\n * Schedules a function, `work`, for execution. May happen at some point in\n * the future, according to the `delay` parameter, if specified. 
May be passed\n * some context object, `state`, which will be passed to the `work` function.\n *\n * The given arguments will be processed an stored as an Action object in a\n * queue of actions.\n *\n * @param work A function representing a task, or some unit of work to be\n * executed by the Scheduler.\n * @param delay Time to wait before executing the work, where the time unit is\n * implicit and defined by the Scheduler itself.\n * @param state Some contextual data that the `work` function uses when called\n * by the Scheduler.\n * @return A subscription in order to be able to unsubscribe the scheduled work.\n */\n public schedule(work: (this: SchedulerAction, state?: T) => void, delay: number = 0, state?: T): Subscription {\n return new this.schedulerActionCtor(this, work).schedule(state, delay);\n }\n}\n", "import { Scheduler } from '../Scheduler';\nimport { Action } from './Action';\nimport { AsyncAction } from './AsyncAction';\nimport { TimerHandle } from './timerHandle';\n\nexport class AsyncScheduler extends Scheduler {\n public actions: Array> = [];\n /**\n * A flag to indicate whether the Scheduler is currently executing a batch of\n * queued actions.\n * @internal\n */\n public _active: boolean = false;\n /**\n * An internal ID used to track the latest asynchronous task such as those\n * coming from `setTimeout`, `setInterval`, `requestAnimationFrame`, and\n * others.\n * @internal\n */\n public _scheduled: TimerHandle | undefined;\n\n constructor(SchedulerAction: typeof Action, now: () => number = Scheduler.now) {\n super(SchedulerAction, now);\n }\n\n public flush(action: AsyncAction): void {\n const { actions } = this;\n\n if (this._active) {\n actions.push(action);\n return;\n }\n\n let error: any;\n this._active = true;\n\n do {\n if ((error = action.execute(action.state, action.delay))) {\n break;\n }\n } while ((action = actions.shift()!)); // exhaust the scheduler queue\n\n this._active = false;\n\n if (error) {\n while ((action = actions.shift()!)) {\n action.unsubscribe();\n }\n throw error;\n }\n }\n}\n", "import { AsyncAction } from './AsyncAction';\nimport { AsyncScheduler } from './AsyncScheduler';\n\n/**\n *\n * Async Scheduler\n *\n * Schedule task as if you used setTimeout(task, duration)\n *\n * `async` scheduler schedules tasks asynchronously, by putting them on the JavaScript\n * event loop queue. It is best used to delay tasks in time or to schedule tasks repeating\n * in intervals.\n *\n * If you just want to \"defer\" task, that is to perform it right after currently\n * executing synchronous code ends (commonly achieved by `setTimeout(deferredTask, 0)`),\n * better choice will be the {@link asapScheduler} scheduler.\n *\n * ## Examples\n * Use async scheduler to delay task\n * ```ts\n * import { asyncScheduler } from 'rxjs';\n *\n * const task = () => console.log('it works!');\n *\n * asyncScheduler.schedule(task, 2000);\n *\n * // After 2 seconds logs:\n * // \"it works!\"\n * ```\n *\n * Use async scheduler to repeat task in intervals\n * ```ts\n * import { asyncScheduler } from 'rxjs';\n *\n * function task(state) {\n * console.log(state);\n * this.schedule(state + 1, 1000); // `this` references currently executing Action,\n * // which we reschedule with new state and delay\n * }\n *\n * asyncScheduler.schedule(task, 3000, 0);\n *\n * // Logs:\n * // 0 after 3s\n * // 1 after 4s\n * // 2 after 5s\n * // 3 after 6s\n * ```\n */\n\nexport const asyncScheduler = new AsyncScheduler(AsyncAction);\n\n/**\n * @deprecated Renamed to {@link asyncScheduler}. 
Will be removed in v8.\n */\nexport const async = asyncScheduler;\n", "import { AsyncAction } from './AsyncAction';\nimport { Subscription } from '../Subscription';\nimport { QueueScheduler } from './QueueScheduler';\nimport { SchedulerAction } from '../types';\nimport { TimerHandle } from './timerHandle';\n\nexport class QueueAction extends AsyncAction {\n constructor(protected scheduler: QueueScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n public schedule(state?: T, delay: number = 0): Subscription {\n if (delay > 0) {\n return super.schedule(state, delay);\n }\n this.delay = delay;\n this.state = state;\n this.scheduler.flush(this);\n return this;\n }\n\n public execute(state: T, delay: number): any {\n return delay > 0 || this.closed ? super.execute(state, delay) : this._execute(state, delay);\n }\n\n protected requestAsyncId(scheduler: QueueScheduler, id?: TimerHandle, delay: number = 0): TimerHandle {\n // If delay exists and is greater than 0, or if the delay is null (the\n // action wasn't rescheduled) but was originally scheduled as an async\n // action, then recycle as an async action.\n\n if ((delay != null && delay > 0) || (delay == null && this.delay > 0)) {\n return super.requestAsyncId(scheduler, id, delay);\n }\n\n // Otherwise flush the scheduler starting with this action.\n scheduler.flush(this);\n\n // HACK: In the past, this was returning `void`. However, `void` isn't a valid\n // `TimerHandle`, and generally the return value here isn't really used. So the\n // compromise is to return `0` which is both \"falsy\" and a valid `TimerHandle`,\n // as opposed to refactoring every other instanceo of `requestAsyncId`.\n return 0;\n }\n}\n", "import { AsyncScheduler } from './AsyncScheduler';\n\nexport class QueueScheduler extends AsyncScheduler {\n}\n", "import { QueueAction } from './QueueAction';\nimport { QueueScheduler } from './QueueScheduler';\n\n/**\n *\n * Queue Scheduler\n *\n * Put every next task on a queue, instead of executing it immediately\n *\n * `queue` scheduler, when used with delay, behaves the same as {@link asyncScheduler} scheduler.\n *\n * When used without delay, it schedules given task synchronously - executes it right when\n * it is scheduled. 
However when called recursively, that is when inside the scheduled task,\n * another task is scheduled with queue scheduler, instead of executing immediately as well,\n * that task will be put on a queue and wait for current one to finish.\n *\n * This means that when you execute task with `queue` scheduler, you are sure it will end\n * before any other task scheduled with that scheduler will start.\n *\n * ## Examples\n * Schedule recursively first, then do something\n * ```ts\n * import { queueScheduler } from 'rxjs';\n *\n * queueScheduler.schedule(() => {\n * queueScheduler.schedule(() => console.log('second')); // will not happen now, but will be put on a queue\n *\n * console.log('first');\n * });\n *\n * // Logs:\n * // \"first\"\n * // \"second\"\n * ```\n *\n * Reschedule itself recursively\n * ```ts\n * import { queueScheduler } from 'rxjs';\n *\n * queueScheduler.schedule(function(state) {\n * if (state !== 0) {\n * console.log('before', state);\n * this.schedule(state - 1); // `this` references currently executing Action,\n * // which we reschedule with new state\n * console.log('after', state);\n * }\n * }, 0, 3);\n *\n * // In scheduler that runs recursively, you would expect:\n * // \"before\", 3\n * // \"before\", 2\n * // \"before\", 1\n * // \"after\", 1\n * // \"after\", 2\n * // \"after\", 3\n *\n * // But with queue it logs:\n * // \"before\", 3\n * // \"after\", 3\n * // \"before\", 2\n * // \"after\", 2\n * // \"before\", 1\n * // \"after\", 1\n * ```\n */\n\nexport const queueScheduler = new QueueScheduler(QueueAction);\n\n/**\n * @deprecated Renamed to {@link queueScheduler}. Will be removed in v8.\n */\nexport const queue = queueScheduler;\n", "import { AsyncAction } from './AsyncAction';\nimport { AnimationFrameScheduler } from './AnimationFrameScheduler';\nimport { SchedulerAction } from '../types';\nimport { animationFrameProvider } from './animationFrameProvider';\nimport { TimerHandle } from './timerHandle';\n\nexport class AnimationFrameAction extends AsyncAction {\n constructor(protected scheduler: AnimationFrameScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n protected requestAsyncId(scheduler: AnimationFrameScheduler, id?: TimerHandle, delay: number = 0): TimerHandle {\n // If delay is greater than 0, request as an async action.\n if (delay !== null && delay > 0) {\n return super.requestAsyncId(scheduler, id, delay);\n }\n // Push the action to the end of the scheduler queue.\n scheduler.actions.push(this);\n // If an animation frame has already been requested, don't request another\n // one. If an animation frame hasn't been requested yet, request one. Return\n // the current animation frame request id.\n return scheduler._scheduled || (scheduler._scheduled = animationFrameProvider.requestAnimationFrame(() => scheduler.flush(undefined)));\n }\n\n protected recycleAsyncId(scheduler: AnimationFrameScheduler, id?: TimerHandle, delay: number = 0): TimerHandle | undefined {\n // If delay exists and is greater than 0, or if the delay is null (the\n // action wasn't rescheduled) but was originally scheduled as an async\n // action, then recycle as an async action.\n if (delay != null ? 
delay > 0 : this.delay > 0) {\n return super.recycleAsyncId(scheduler, id, delay);\n }\n // If the scheduler queue has no remaining actions with the same async id,\n // cancel the requested animation frame and set the scheduled flag to\n // undefined so the next AnimationFrameAction will request its own.\n const { actions } = scheduler;\n if (id != null && id === scheduler._scheduled && actions[actions.length - 1]?.id !== id) {\n animationFrameProvider.cancelAnimationFrame(id as number);\n scheduler._scheduled = undefined;\n }\n // Return undefined so the action knows to request a new async id if it's rescheduled.\n return undefined;\n }\n}\n", "import { AsyncAction } from './AsyncAction';\nimport { AsyncScheduler } from './AsyncScheduler';\n\nexport class AnimationFrameScheduler extends AsyncScheduler {\n public flush(action?: AsyncAction): void {\n this._active = true;\n // The async id that effects a call to flush is stored in _scheduled.\n // Before executing an action, it's necessary to check the action's async\n // id to determine whether it's supposed to be executed in the current\n // flush.\n // Previous implementations of this method used a count to determine this,\n // but that was unsound, as actions that are unsubscribed - i.e. cancelled -\n // are removed from the actions array and that can shift actions that are\n // scheduled to be executed in a subsequent flush into positions at which\n // they are executed within the current flush.\n let flushId;\n if (action) {\n flushId = action.id;\n } else {\n flushId = this._scheduled;\n this._scheduled = undefined;\n }\n\n const { actions } = this;\n let error: any;\n action = action || actions.shift()!;\n\n do {\n if ((error = action.execute(action.state, action.delay))) {\n break;\n }\n } while ((action = actions[0]) && action.id === flushId && actions.shift());\n\n this._active = false;\n\n if (error) {\n while ((action = actions[0]) && action.id === flushId && actions.shift()) {\n action.unsubscribe();\n }\n throw error;\n }\n }\n}\n", "import { AnimationFrameAction } from './AnimationFrameAction';\nimport { AnimationFrameScheduler } from './AnimationFrameScheduler';\n\n/**\n *\n * Animation Frame Scheduler\n *\n * Perform task when `window.requestAnimationFrame` would fire\n *\n * When `animationFrame` scheduler is used with delay, it will fall back to {@link asyncScheduler} scheduler\n * behaviour.\n *\n * Without delay, `animationFrame` scheduler can be used to create smooth browser animations.\n * It makes sure scheduled task will happen just before next browser content repaint,\n * thus performing animations as efficiently as possible.\n *\n * ## Example\n * Schedule div height animation\n * ```ts\n * // html:
\n * import { animationFrameScheduler } from 'rxjs';\n *\n * const div = document.querySelector('div');\n *\n * animationFrameScheduler.schedule(function(height) {\n * div.style.height = height + \"px\";\n *\n * this.schedule(height + 1); // `this` references currently executing Action,\n * // which we reschedule with new state\n * }, 0, 0);\n *\n * // You will see a div element growing in height\n * ```\n */\n\nexport const animationFrameScheduler = new AnimationFrameScheduler(AnimationFrameAction);\n\n/**\n * @deprecated Renamed to {@link animationFrameScheduler}. Will be removed in v8.\n */\nexport const animationFrame = animationFrameScheduler;\n", "import { Observable } from '../Observable';\nimport { SchedulerLike } from '../types';\n\n/**\n * A simple Observable that emits no items to the Observer and immediately\n * emits a complete notification.\n *\n * Just emits 'complete', and nothing else.\n *\n * ![](empty.png)\n *\n * A simple Observable that only emits the complete notification. It can be used\n * for composing with other Observables, such as in a {@link mergeMap}.\n *\n * ## Examples\n *\n * Log complete notification\n *\n * ```ts\n * import { EMPTY } from 'rxjs';\n *\n * EMPTY.subscribe({\n * next: () => console.log('Next'),\n * complete: () => console.log('Complete!')\n * });\n *\n * // Outputs\n * // Complete!\n * ```\n *\n * Emit the number 7, then complete\n *\n * ```ts\n * import { EMPTY, startWith } from 'rxjs';\n *\n * const result = EMPTY.pipe(startWith(7));\n * result.subscribe(x => console.log(x));\n *\n * // Outputs\n * // 7\n * ```\n *\n * Map and flatten only odd numbers to the sequence `'a'`, `'b'`, `'c'`\n *\n * ```ts\n * import { interval, mergeMap, of, EMPTY } from 'rxjs';\n *\n * const interval$ = interval(1000);\n * const result = interval$.pipe(\n * mergeMap(x => x % 2 === 1 ? of('a', 'b', 'c') : EMPTY),\n * );\n * result.subscribe(x => console.log(x));\n *\n * // Results in the following to the console:\n * // x is equal to the count on the interval, e.g. (0, 1, 2, 3, ...)\n * // x will occur every 1000ms\n * // if x % 2 is equal to 1, print a, b, c (each on its own)\n * // if x % 2 is not equal to 1, nothing will be output\n * ```\n *\n * @see {@link Observable}\n * @see {@link NEVER}\n * @see {@link of}\n * @see {@link throwError}\n */\nexport const EMPTY = new Observable((subscriber) => subscriber.complete());\n\n/**\n * @param scheduler A {@link SchedulerLike} to use for scheduling\n * the emission of the complete notification.\n * @deprecated Replaced with the {@link EMPTY} constant or {@link scheduled} (e.g. `scheduled([], scheduler)`). Will be removed in v8.\n */\nexport function empty(scheduler?: SchedulerLike) {\n return scheduler ? emptyScheduled(scheduler) : EMPTY;\n}\n\nfunction emptyScheduled(scheduler: SchedulerLike) {\n return new Observable((subscriber) => scheduler.schedule(() => subscriber.complete()));\n}\n", "import { SchedulerLike } from '../types';\nimport { isFunction } from './isFunction';\n\nexport function isScheduler(value: any): value is SchedulerLike {\n return value && isFunction(value.schedule);\n}\n", "import { SchedulerLike } from '../types';\nimport { isFunction } from './isFunction';\nimport { isScheduler } from './isScheduler';\n\nfunction last(arr: T[]): T | undefined {\n return arr[arr.length - 1];\n}\n\nexport function popResultSelector(args: any[]): ((...args: unknown[]) => unknown) | undefined {\n return isFunction(last(args)) ? 
args.pop() : undefined;\n}\n\nexport function popScheduler(args: any[]): SchedulerLike | undefined {\n return isScheduler(last(args)) ? args.pop() : undefined;\n}\n\nexport function popNumber(args: any[], defaultValue: number): number {\n return typeof last(args) === 'number' ? args.pop()! : defaultValue;\n}\n", "export const isArrayLike = ((x: any): x is ArrayLike => x && typeof x.length === 'number' && typeof x !== 'function');", "import { isFunction } from \"./isFunction\";\n\n/**\n * Tests to see if the object is \"thennable\".\n * @param value the object to test\n */\nexport function isPromise(value: any): value is PromiseLike {\n return isFunction(value?.then);\n}\n", "import { InteropObservable } from '../types';\nimport { observable as Symbol_observable } from '../symbol/observable';\nimport { isFunction } from './isFunction';\n\n/** Identifies an input as being Observable (but not necessary an Rx Observable) */\nexport function isInteropObservable(input: any): input is InteropObservable {\n return isFunction(input[Symbol_observable]);\n}\n", "import { isFunction } from './isFunction';\n\nexport function isAsyncIterable(obj: any): obj is AsyncIterable {\n return Symbol.asyncIterator && isFunction(obj?.[Symbol.asyncIterator]);\n}\n", "/**\n * Creates the TypeError to throw if an invalid object is passed to `from` or `scheduled`.\n * @param input The object that was passed.\n */\nexport function createInvalidObservableTypeError(input: any) {\n // TODO: We should create error codes that can be looked up, so this can be less verbose.\n return new TypeError(\n `You provided ${\n input !== null && typeof input === 'object' ? 'an invalid object' : `'${input}'`\n } where a stream was expected. You can provide an Observable, Promise, ReadableStream, Array, AsyncIterable, or Iterable.`\n );\n}\n", "export function getSymbolIterator(): symbol {\n if (typeof Symbol !== 'function' || !Symbol.iterator) {\n return '@@iterator' as any;\n }\n\n return Symbol.iterator;\n}\n\nexport const iterator = getSymbolIterator();\n", "import { iterator as Symbol_iterator } from '../symbol/iterator';\nimport { isFunction } from './isFunction';\n\n/** Identifies an input as being an Iterable */\nexport function isIterable(input: any): input is Iterable {\n return isFunction(input?.[Symbol_iterator]);\n}\n", "import { ReadableStreamLike } from '../types';\nimport { isFunction } from './isFunction';\n\nexport async function* readableStreamLikeToAsyncGenerator(readableStream: ReadableStreamLike): AsyncGenerator {\n const reader = readableStream.getReader();\n try {\n while (true) {\n const { value, done } = await reader.read();\n if (done) {\n return;\n }\n yield value!;\n }\n } finally {\n reader.releaseLock();\n }\n}\n\nexport function isReadableStreamLike(obj: any): obj is ReadableStreamLike {\n // We don't want to use instanceof checks because they would return\n // false for instances from another Realm, like an + + + + + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Excessive Capabilities

+

Issue Description

+

The container has been granted Linux capabilities beyond the required NET_ADMIN, NET_BIND_SERVICE, and NET_RAW. This usually indicates an overly permissive container configuration.

+

Security Ramifications

+

While the detected capabilities might not directly harm operation, running with more privileges than necessary increases the attack surface. If the container is compromised, additional capabilities could allow broader system access or privilege escalation.

+

Why You're Seeing This Issue

+

This occurs when your Docker configuration grants more capabilities than network monitoring requires. The application only needs a small, specific set of network-related capabilities to function properly.

+

How to Correct the Issue

+

Limit capabilities to only those required:

+
  • In docker-compose.yml, specify only the needed capabilities:

    cap_add:
      - NET_RAW
      - NET_ADMIN
      - NET_BIND_SERVICE

  • Remove any unnecessary --cap-add or --privileged flags from docker run commands
+
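As a point of reference, a hedged docker-compose fragment that starts from an empty capability set and adds back only the three required ones could look like the following. The cap_drop: ALL line is an extra hardening step assumed here, not something detected by NetAlertX; verify it against the default docker-compose.yml before relying on it.

    cap_drop:
      - ALL                  # start with no capabilities at all
    cap_add:
      - NET_RAW              # raw sockets for ARP/port scanning
      - NET_ADMIN            # interface and ARP table management
      - NET_BIND_SERVICE     # binding to privileged ports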

Additional Resources

+

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

+

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/docker-troubleshooting/file-permissions/index.html b/docker-troubleshooting/file-permissions/index.html new file mode 100644 index 00000000..a8e586ab --- /dev/null +++ b/docker-troubleshooting/file-permissions/index.html @@ -0,0 +1,4019 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + File Permission Issues - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

File Permission Issues

+

Issue Description

+

NetAlertX cannot read from or write to critical configuration and database files. This prevents the application from saving data, logs, or configuration changes.

+

Security Ramifications

+

Incorrect file permissions can expose sensitive configuration data or database contents to unauthorized access. Network monitoring tools handle sensitive information about devices on your network, and improper permissions could lead to information disclosure.

+

Why You're Seeing This Issue

+

This occurs when the mounted volumes for configuration and database files don't have proper ownership or permissions set for the netalertx user (UID 20211). The container expects these files to be accessible by the service account, not root or other users.

+

How to Correct the Issue

+

Fix permissions on the host system for the mounted directories:

+
    +
  • Ensure the config and database directories are owned by the netalertx user: chown -R 20211:20211 /path/to/config /path/to/db
  • +
  • Set appropriate permissions: chmod 755 on directories and chmod 644 on files (see the consolidated sketch after this list)
  • +
  • Alternatively, restart the container with root privileges temporarily to allow automatic permission fixing, then switch back to the default user
  • +
+
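Taken together, a minimal shell sketch of the fix might look like this, assuming your mounted directories are /path/to/config and /path/to/db; adjust the paths to match your setup.

    # Give the netalertx service account (UID/GID 20211) ownership of the mounts
    sudo chown -R 20211:20211 /path/to/config /path/to/db

    # Directories need to be traversable, files only need owner read/write
    sudo find /path/to/config /path/to/db -type d -exec chmod 755 {} +
    sudo find /path/to/config /path/to/db -type f -exec chmod 644 {} +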

Additional Resources

+

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

+

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/docker-troubleshooting/incorrect-user/index.html b/docker-troubleshooting/incorrect-user/index.html new file mode 100644 index 00000000..9489f7f3 --- /dev/null +++ b/docker-troubleshooting/incorrect-user/index.html @@ -0,0 +1,4020 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Incorrect Container User - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Incorrect Container User

+

Issue Description

+

NetAlertX is running with a UID:GID other than the expected 20211:20211. This bypasses hardened permissions, file ownership, and runtime isolation safeguards.

+

Security Ramifications

+

The application is designed with security hardening that depends on running under a dedicated, non-privileged service account. Using a different user account can cause future upgrades to fail silently and removes crucial isolation between the container and the host system.

+

Why You're Seeing This Issue

+

This occurs when you override the container's default user with custom user: directives in docker-compose.yml or --user flags in docker run commands. The container expects to run as the netalertx user for proper security isolation.

+

How to Correct the Issue

+

Restore the container to the default user:

+
    +
  • Remove any user: overrides from docker-compose.yml
  • +
  • Avoid --user flags in docker run commands
  • +
  • Allow the container to run with its default UID:GID 20211:20211
  • +
  • Recreate the container so volume ownership is reset automatically
  • +
+

Additional Resources

+

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

+

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/docker-troubleshooting/missing-capabilities/index.html b/docker-troubleshooting/missing-capabilities/index.html new file mode 100644 index 00000000..8b2cecbf --- /dev/null +++ b/docker-troubleshooting/missing-capabilities/index.html @@ -0,0 +1,4026 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Missing Network Capabilities - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Missing Network Capabilities

+

Issue Description

+

Raw network capabilities (NET_RAW, NET_ADMIN, NET_BIND_SERVICE) are missing. Tools that rely on these capabilities (e.g., nmap -sS, arp-scan, nbtscan) will not function.

+

Security Ramifications

+

Network scanning and monitoring require low-level network access that these capabilities provide. Without them, the application cannot perform essential functions such as ARP scanning, port scanning, or passive network discovery, which severely limits its effectiveness.

+

Why You're Seeing This Issue

+

This occurs when the container doesn't have the necessary Linux capabilities granted. Docker containers run with limited capabilities by default, and network monitoring tools need elevated network privileges.

+

How to Correct the Issue

+

Add the required capabilities to your container:

+
  • In docker-compose.yml:

    cap_add:
      - NET_RAW
      - NET_ADMIN
      - NET_BIND_SERVICE

  • For docker run: --cap-add=NET_RAW --cap-add=NET_ADMIN --cap-add=NET_BIND_SERVICE
+
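To confirm the capabilities were actually applied after recreating the container, you can inspect it; the container name netalertx below is an assumption, so substitute whatever yours is called.

    # Prints the extra capabilities granted to the container, e.g. [NET_ADMIN NET_BIND_SERVICE NET_RAW]
    docker inspect --format '{{.HostConfig.CapAdd}}' netalertx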

Additional Resources

+

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

+

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/docker-troubleshooting/mount-configuration-issues/index.html b/docker-troubleshooting/mount-configuration-issues/index.html new file mode 100644 index 00000000..63d66a24 --- /dev/null +++ b/docker-troubleshooting/mount-configuration-issues/index.html @@ -0,0 +1,4026 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Mount Configuration Issues - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Mount Configuration Issues

+

Issue Description

+

NetAlertX has detected configuration issues with your Docker volume mounts. These may include write permission problems, data loss risks, or performance concerns marked with ❌ in the table.

+

Security Ramifications

+

Improper mount configurations can lead to data loss, performance degradation, or security vulnerabilities. For persistent data (database and configuration), using non-persistent storage like tmpfs can result in complete data loss on container restart. For temporary data, using persistent storage may unnecessarily expose sensitive logs or cache data.

+

Why You're Seeing This Issue

+

This occurs when your Docker Compose or run configuration doesn't properly map host directories to container paths, or when the mounted volumes have incorrect permissions. The application requires specific paths to be writable for operation, and some paths should use persistent storage while others should be temporary.

+

How to Correct the Issue

+

Review and correct your volume mounts in docker-compose.yml:

+
    +
  • Ensure ${NETALERTX_DB} and ${NETALERTX_CONFIG} use persistent host directories
  • +
  • Ensure ${NETALERTX_API} and ${NETALERTX_LOG} have appropriate permissions
  • +
  • Avoid mounting sensitive paths to non-persistent filesystems like tmpfs for critical data
  • +
  • Use bind mounts with proper ownership (netalertx user: 20211:20211)
  • +
+

Example volume configuration:

+
volumes:
+  - ./data/db:/data/db
+  - ./data/config:/data/config
+  - ./data/log:/tmp/log
+
+
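If you also want the temporary data kept out of persistent storage, a tmpfs mount is one option. The fragment below is a sketch that reuses the container paths from the example above; the exact paths expected by your version may differ, so check the default docker-compose.yml first.

    volumes:
      - ./data/db:/data/db              # persistent: database
      - ./data/config:/data/config      # persistent: configuration
    tmpfs:
      - /tmp/log                        # temporary: logs are discarded on restart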

Additional Resources

+

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

+

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/docker-troubleshooting/network-mode/index.html b/docker-troubleshooting/network-mode/index.html new file mode 100644 index 00000000..0de0a495 --- /dev/null +++ b/docker-troubleshooting/network-mode/index.html @@ -0,0 +1,4019 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Network Mode Configuration - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Network Mode Configuration

+

Issue Description

+

NetAlertX is not running with --network=host. Bridge networking prevents passive discovery (ARP, NBNS, mDNS) and reduces the accuracy of active scanning.

+

Security Ramifications

+

Host networking is required for comprehensive network monitoring. Bridge mode isolates the container from raw network access needed for ARP scanning, passive discovery protocols, and accurate device detection. Without host networking, the application cannot fully monitor your network.

+

Why You're Seeing This Issue

+

This occurs when your Docker configuration uses bridge networking instead of host networking. Network monitoring requires direct access to the host's network interfaces to perform passive discovery and active scanning.

+

How to Correct the Issue

+

Enable host networking mode:

+
    +
  • In docker-compose.yml, add: network_mode: host
  • +
  • For docker run, use: --network=host
  • +
  • Ensure the container has required capabilities: --cap-add=NET_RAW --cap-add=NET_ADMIN --cap-add=NET_BIND_SERVICE
  • +
+
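Putting the two requirements together, a minimal docker-compose fragment might look like the one below; the service name is illustrative only.

    services:
      netalertx:
        network_mode: host       # direct access to the host's interfaces
        cap_add:
          - NET_RAW
          - NET_ADMIN
          - NET_BIND_SERVICE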

Additional Resources

+

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

+

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/docker-troubleshooting/nginx-configuration-mount/index.html b/docker-troubleshooting/nginx-configuration-mount/index.html new file mode 100644 index 00000000..3c92b703 --- /dev/null +++ b/docker-troubleshooting/nginx-configuration-mount/index.html @@ -0,0 +1,4029 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Nginx Configuration Mount Issues - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Nginx Configuration Mount Issues

+

Issue Description

+

You've configured a custom port for NetAlertX, but the required nginx configuration mount is missing or not writable. Without this mount, the container cannot apply your port changes and will fall back to the default port 20211.

+

Security Ramifications

+

Running in read-only mode (as recommended) prevents the container from modifying its own nginx configuration. Without a writable mount, custom port configurations cannot be applied, potentially exposing the service on unintended ports or requiring fallback to defaults.

+

Why You're Seeing This Issue

+

This occurs when you set a custom PORT environment variable (other than 20211) but haven't provided a writable mount for nginx configuration. The container needs to write custom nginx config files when running in read-only mode.

+

How to Correct the Issue

+

If you want to use a custom port, create a bind mount for the nginx configuration:

+
  • Create a directory on your host: mkdir -p /path/to/nginx-config

  • Add to your docker-compose.yml:

    volumes:
      - /path/to/nginx-config:/tmp/nginx/active-config
    environment:
      - PORT=your_custom_port

  • Ensure it's owned by the netalertx user: chown -R 20211:20211 /path/to/nginx-config

  • Set permissions: chmod -R 700 /path/to/nginx-config
+

If you don't need a custom port, simply omit the PORT environment variable and the container will use 20211 by default.

+

Additional Resources

+

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

+

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/docker-troubleshooting/port-conflicts/index.html b/docker-troubleshooting/port-conflicts/index.html new file mode 100644 index 00000000..bd6a7263 --- /dev/null +++ b/docker-troubleshooting/port-conflicts/index.html @@ -0,0 +1,4110 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Port Conflicts - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Port Conflicts

+

Issue Description

+

The configured application port (default 20211) or GraphQL API port (default 20212) is already in use by another service. This commonly occurs when you already have another NetAlertX instance running.

+

Security Ramifications

+

Port conflicts prevent the application from starting properly, leaving network monitoring services unavailable. Running multiple instances on the same ports can also create configuration confusion and potential security issues if services are inadvertently exposed.

+

Why You're Seeing This Issue

+

This error typically occurs when:

+
    +
  • You already have NetAlertX running - Another Docker container or devcontainer instance is using the default ports 20211 and 20212
  • +
  • Port conflicts with other services - Other applications on your system are using these ports
  • +
  • Configuration error - Both PORT and GRAPHQL_PORT environment variables are set to the same value
  • +
+

How to Correct the Issue

+

Check for Existing NetAlertX Instances

+

First, check if you already have NetAlertX running:

+
# Check for running NetAlertX containers
+docker ps | grep netalertx
+
+# Check for devcontainer processes
+ps aux | grep netalertx
+
+# Check what services are using the ports
+netstat -tlnp | grep :20211
+netstat -tlnp | grep :20212
+
+

Stop Conflicting Instances

+

If you find another NetAlertX instance:

+
# Stop specific container
+docker stop <container_name>
+
+# Stop all NetAlertX containers
+docker stop $(docker ps -q --filter ancestor=jokob-sk/netalertx)
+
+# Stop devcontainer services
+# Use VS Code command palette: "Dev Containers: Rebuild Container"
+
+

Configure Different Ports

+

If you need multiple instances, configure unique ports:

+
environment:
+  - PORT=20211          # Main application port
+  - GRAPHQL_PORT=20212  # GraphQL API port
+
+

For a second instance, use different ports:

+
environment:
+  - PORT=20213          # Different main port
+  - GRAPHQL_PORT=20214  # Different API port
+
+

Alternative: Use Different Container Names

+

When running multiple instances, use unique container names:

+
services:
+  netalertx-primary:
+    # ... existing config
+  netalertx-secondary:
+    # ... config with different ports
+
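A slightly fuller sketch of the secondary service, with the alternate ports from the earlier example filled in, might look like this; names and structure are illustrative rather than taken from the project's reference compose file.

    services:
      netalertx-secondary:
        container_name: netalertx-secondary
        environment:
          - PORT=20213
          - GRAPHQL_PORT=20214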
+

Additional Resources

+

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

+

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/docker-troubleshooting/read-only-filesystem/index.html b/docker-troubleshooting/read-only-filesystem/index.html new file mode 100644 index 00000000..f64bde11 --- /dev/null +++ b/docker-troubleshooting/read-only-filesystem/index.html @@ -0,0 +1,4019 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Read-Only Filesystem Mode - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Read-Only Filesystem Mode

+

Issue Description

+

The container is running with a read-write root filesystem instead of in read-only mode. This reduces the security hardening of the appliance.

+

Security Ramifications

+

Read-only root filesystem is a security best practice that prevents malicious modifications to the container's filesystem. Running read-write allows potential attackers to modify system files or persist malware within the container.

+

Why You're Seeing This Issue

+

This occurs when the Docker configuration doesn't mount the root filesystem as read-only. The application is designed as a security appliance that should prevent filesystem modifications.

+

How to Correct the Issue

+

Enable read-only mode:

+
    +
  • In docker-compose.yml, add: read_only: true
  • +
  • For docker run, use: --read-only
  • +
  • Ensure necessary directories are mounted as writable volumes (tmp, logs, etc.)
  • +
+
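A hedged docker-compose fragment combining these points is shown below; the writable mounts are examples only, so check the default docker-compose.yml for the exact set your version expects.

    read_only: true
    tmpfs:
      - /tmp                            # scratch space the container may still write to
    volumes:
      - ./data/config:/data/config      # example writable config mount
      - ./data/db:/data/db              # example writable database mount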

Additional Resources

+

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

+

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/docker-troubleshooting/running-as-root/index.html b/docker-troubleshooting/running-as-root/index.html new file mode 100644 index 00000000..1de0e87b --- /dev/null +++ b/docker-troubleshooting/running-as-root/index.html @@ -0,0 +1,4020 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Running as Root User - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Running as Root User

+

Issue Description

+

NetAlertX has detected that the container is running with root privileges (UID 0). This configuration bypasses all built-in security hardening measures designed to protect your system.

+

Security Ramifications

+

Running security-critical applications like network monitoring tools as root grants unrestricted access to your host system. A successful compromise here could jeopardize your entire infrastructure, including other containers, host services, and potentially your network.

+

Why You're Seeing This Issue

+

This typically occurs when you've explicitly overridden the container's default user in your Docker configuration, such as using user: root or --user 0:0 in docker-compose.yml or docker run commands. The application is designed to run under a dedicated, non-privileged service account for security.

+

How to Correct the Issue

+

Switch to the dedicated 'netalertx' user by removing any custom user directives:

+
    +
  • Remove user: entries from your docker-compose.yml
  • +
  • Avoid --user flags in docker run commands
  • +
  • Ensure the container runs with the default UID 20211:20211
  • +
+
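For reference, these are the kinds of overrides to look for and delete; the fragments are hypothetical examples, not lines from the shipped configuration.

    # docker-compose.yml - remove a line like this from the service definition:
    #   user: "0:0"      # forces root and bypasses the built-in hardening

    # docker run - drop a flag like this:
    #   --user 0:0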

After making these changes, restart the container. The application will automatically adjust ownership of required directories.

+

Additional Resources

+

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

+

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/docker-troubleshooting/troubleshooting/index.html b/docker-troubleshooting/troubleshooting/index.html new file mode 100644 index 00000000..70086f0c --- /dev/null +++ b/docker-troubleshooting/troubleshooting/index.html @@ -0,0 +1,3935 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Troubleshooting - NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

Troubleshooting

+ + + + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/img/@eaDir/device_details.png@SynoEAStream b/img/@eaDir/device_details.png@SynoEAStream new file mode 100644 index 00000000..28532238 Binary files /dev/null and b/img/@eaDir/device_details.png@SynoEAStream differ diff --git a/img/@eaDir/devices_dark.png@SynoEAStream b/img/@eaDir/devices_dark.png@SynoEAStream new file mode 100644 index 00000000..28532238 Binary files /dev/null and b/img/@eaDir/devices_dark.png@SynoEAStream differ diff --git a/img/@eaDir/devices_light.png@SynoEAStream b/img/@eaDir/devices_light.png@SynoEAStream new file mode 100644 index 00000000..28532238 Binary files /dev/null and b/img/@eaDir/devices_light.png@SynoEAStream differ diff --git a/img/@eaDir/devices_split.png@SynoEAStream b/img/@eaDir/devices_split.png@SynoEAStream new file mode 100644 index 00000000..28532238 Binary files /dev/null and b/img/@eaDir/devices_split.png@SynoEAStream differ diff --git a/img/@eaDir/events.png@SynoEAStream b/img/@eaDir/events.png@SynoEAStream new file mode 100644 index 00000000..28532238 Binary files /dev/null and b/img/@eaDir/events.png@SynoEAStream differ diff --git a/img/@eaDir/help_faq.png@SynoEAStream b/img/@eaDir/help_faq.png@SynoEAStream new file mode 100644 index 00000000..28532238 Binary files /dev/null and b/img/@eaDir/help_faq.png@SynoEAStream differ diff --git a/img/@eaDir/maintenance.png@SynoEAStream b/img/@eaDir/maintenance.png@SynoEAStream new file mode 100644 index 00000000..28532238 Binary files /dev/null and b/img/@eaDir/maintenance.png@SynoEAStream differ diff --git a/img/@eaDir/network.png@SynoEAStream b/img/@eaDir/network.png@SynoEAStream new file mode 100644 index 00000000..28532238 Binary files /dev/null and b/img/@eaDir/network.png@SynoEAStream differ diff --git a/img/@eaDir/presence.png@SynoEAStream b/img/@eaDir/presence.png@SynoEAStream new file mode 100644 index 00000000..28532238 Binary files /dev/null and b/img/@eaDir/presence.png@SynoEAStream differ diff --git a/img/@eaDir/settings.png@SynoEAStream b/img/@eaDir/settings.png@SynoEAStream new file mode 100644 index 00000000..28532238 Binary files /dev/null and b/img/@eaDir/settings.png@SynoEAStream differ diff --git a/img/BACKUPS/Maintenance_Backup_Restore.png b/img/BACKUPS/Maintenance_Backup_Restore.png new file mode 100644 index 00000000..fbde4e47 Binary files /dev/null and b/img/BACKUPS/Maintenance_Backup_Restore.png differ diff --git a/img/BUILDS/build_images_options_tradeoffs.png b/img/BUILDS/build_images_options_tradeoffs.png new file mode 100644 index 00000000..43403c07 Binary files /dev/null and b/img/BUILDS/build_images_options_tradeoffs.png differ diff --git a/img/CUSTOM_PROPERTIES/Device_Custom_Properties.png b/img/CUSTOM_PROPERTIES/Device_Custom_Properties.png new file mode 100644 index 00000000..2d1b95c0 Binary files /dev/null and b/img/CUSTOM_PROPERTIES/Device_Custom_Properties.png differ diff --git a/img/CUSTOM_PROPERTIES/Device_Custom_Properties_vid.gif b/img/CUSTOM_PROPERTIES/Device_Custom_Properties_vid.gif new file mode 100644 index 00000000..b21eeafa Binary files /dev/null and b/img/CUSTOM_PROPERTIES/Device_Custom_Properties_vid.gif differ diff --git a/img/DATABASE/CurrentScan.png b/img/DATABASE/CurrentScan.png new file mode 100644 index 00000000..18411ac6 Binary files /dev/null and b/img/DATABASE/CurrentScan.png differ diff --git a/img/DATABASE/DHCP_Leases.png b/img/DATABASE/DHCP_Leases.png new file mode 100644 index 00000000..6426cce1 Binary files /dev/null and b/img/DATABASE/DHCP_Leases.png differ diff --git 
a/img/DATABASE/Devices.png b/img/DATABASE/Devices.png new file mode 100644 index 00000000..33e5bef0 Binary files /dev/null and b/img/DATABASE/Devices.png differ diff --git a/img/DATABASE/Events.png b/img/DATABASE/Events.png new file mode 100644 index 00000000..bc858f9d Binary files /dev/null and b/img/DATABASE/Events.png differ diff --git a/img/DATABASE/Nmap_Scan.png b/img/DATABASE/Nmap_Scan.png new file mode 100644 index 00000000..eb2fe490 Binary files /dev/null and b/img/DATABASE/Nmap_Scan.png differ diff --git a/img/DATABASE/Online_History.png b/img/DATABASE/Online_History.png new file mode 100644 index 00000000..1e7b498c Binary files /dev/null and b/img/DATABASE/Online_History.png differ diff --git a/img/DATABASE/Parameters.png b/img/DATABASE/Parameters.png new file mode 100644 index 00000000..a7eb8275 Binary files /dev/null and b/img/DATABASE/Parameters.png differ diff --git a/img/DATABASE/Pholus_Scan.png b/img/DATABASE/Pholus_Scan.png new file mode 100644 index 00000000..953a5a2a Binary files /dev/null and b/img/DATABASE/Pholus_Scan.png differ diff --git a/img/DATABASE/PiHole_Network.png b/img/DATABASE/PiHole_Network.png new file mode 100644 index 00000000..3a8ed0d9 Binary files /dev/null and b/img/DATABASE/PiHole_Network.png differ diff --git a/img/DATABASE/Plugins_Events.png b/img/DATABASE/Plugins_Events.png new file mode 100644 index 00000000..15734054 Binary files /dev/null and b/img/DATABASE/Plugins_Events.png differ diff --git a/img/DATABASE/Plugins_History.png b/img/DATABASE/Plugins_History.png new file mode 100644 index 00000000..376be88e Binary files /dev/null and b/img/DATABASE/Plugins_History.png differ diff --git a/img/DATABASE/Plugins_Language_Strings.png b/img/DATABASE/Plugins_Language_Strings.png new file mode 100644 index 00000000..664a16dd Binary files /dev/null and b/img/DATABASE/Plugins_Language_Strings.png differ diff --git a/img/DATABASE/Plugins_Objects.png b/img/DATABASE/Plugins_Objects.png new file mode 100644 index 00000000..064106bc Binary files /dev/null and b/img/DATABASE/Plugins_Objects.png differ diff --git a/img/DATABASE/ScanCycles.png b/img/DATABASE/ScanCycles.png new file mode 100644 index 00000000..d3eaffc4 Binary files /dev/null and b/img/DATABASE/ScanCycles.png differ diff --git a/img/DATABASE/Sessions.png b/img/DATABASE/Sessions.png new file mode 100644 index 00000000..311eab62 Binary files /dev/null and b/img/DATABASE/Sessions.png differ diff --git a/img/DATABASE/Settings.png b/img/DATABASE/Settings.png new file mode 100644 index 00000000..adc7a3bb Binary files /dev/null and b/img/DATABASE/Settings.png differ diff --git a/img/DEBUG/Invalid_JSON_repsonse_debug.png b/img/DEBUG/Invalid_JSON_repsonse_debug.png new file mode 100644 index 00000000..7828c681 Binary files /dev/null and b/img/DEBUG/Invalid_JSON_repsonse_debug.png differ diff --git a/img/DEBUG/JSON_result_example.png b/img/DEBUG/JSON_result_example.png new file mode 100644 index 00000000..6d667622 Binary files /dev/null and b/img/DEBUG/JSON_result_example.png differ diff --git a/img/DEBUG/array_result_example.png b/img/DEBUG/array_result_example.png new file mode 100644 index 00000000..be969623 Binary files /dev/null and b/img/DEBUG/array_result_example.png differ diff --git a/img/DEBUG/maintenance_debug_php.png b/img/DEBUG/maintenance_debug_php.png new file mode 100644 index 00000000..3b59d56a Binary files /dev/null and b/img/DEBUG/maintenance_debug_php.png differ diff --git a/img/DEBUG_API_SERVER/Init_check.png b/img/DEBUG_API_SERVER/Init_check.png new file mode 100644 index 
00000000..92d9d562 Binary files /dev/null and b/img/DEBUG_API_SERVER/Init_check.png differ diff --git a/img/DEBUG_API_SERVER/app_conf_graphql_port.png b/img/DEBUG_API_SERVER/app_conf_graphql_port.png new file mode 100644 index 00000000..8dff5e2e Binary files /dev/null and b/img/DEBUG_API_SERVER/app_conf_graphql_port.png differ diff --git a/img/DEBUG_API_SERVER/dev_console_graphql_json.png b/img/DEBUG_API_SERVER/dev_console_graphql_json.png new file mode 100644 index 00000000..c26b5be6 Binary files /dev/null and b/img/DEBUG_API_SERVER/dev_console_graphql_json.png differ diff --git a/img/DEBUG_API_SERVER/graphql_running_logs.png b/img/DEBUG_API_SERVER/graphql_running_logs.png new file mode 100644 index 00000000..45a6125a Binary files /dev/null and b/img/DEBUG_API_SERVER/graphql_running_logs.png differ diff --git a/img/DEBUG_API_SERVER/graphql_settings_port_token.png b/img/DEBUG_API_SERVER/graphql_settings_port_token.png new file mode 100644 index 00000000..e9253304 Binary files /dev/null and b/img/DEBUG_API_SERVER/graphql_settings_port_token.png differ diff --git a/img/DEBUG_API_SERVER/network_graphql.png b/img/DEBUG_API_SERVER/network_graphql.png new file mode 100644 index 00000000..71a96538 Binary files /dev/null and b/img/DEBUG_API_SERVER/network_graphql.png differ diff --git a/img/DEBUG_PLUGINS/plugin_objects_pihole.png b/img/DEBUG_PLUGINS/plugin_objects_pihole.png new file mode 100644 index 00000000..205ef500 Binary files /dev/null and b/img/DEBUG_PLUGINS/plugin_objects_pihole.png differ diff --git a/img/DEV/Maintenance_Logs_Restart_server.png b/img/DEV/Maintenance_Logs_Restart_server.png new file mode 100644 index 00000000..f58e2f5d Binary files /dev/null and b/img/DEV/Maintenance_Logs_Restart_server.png differ diff --git a/img/DEV/devcontainer_1.png b/img/DEV/devcontainer_1.png new file mode 100644 index 00000000..e8d077e4 Binary files /dev/null and b/img/DEV/devcontainer_1.png differ diff --git a/img/DEV/devcontainer_2.png b/img/DEV/devcontainer_2.png new file mode 100644 index 00000000..811ec528 Binary files /dev/null and b/img/DEV/devcontainer_2.png differ diff --git a/img/DEV/devcontainer_3.png b/img/DEV/devcontainer_3.png new file mode 100644 index 00000000..4343cc8d Binary files /dev/null and b/img/DEV/devcontainer_3.png differ diff --git a/img/DEV/devcontainer_4.png b/img/DEV/devcontainer_4.png new file mode 100644 index 00000000..110cc945 Binary files /dev/null and b/img/DEV/devcontainer_4.png differ diff --git a/img/DEVICES_BULK_EDITING/CSV_BACKUP_SETTINGS.png b/img/DEVICES_BULK_EDITING/CSV_BACKUP_SETTINGS.png new file mode 100644 index 00000000..6dd634d9 Binary files /dev/null and b/img/DEVICES_BULK_EDITING/CSV_BACKUP_SETTINGS.png differ diff --git a/img/DEVICES_BULK_EDITING/MAINTENANCE_CSV_EXPORT.png b/img/DEVICES_BULK_EDITING/MAINTENANCE_CSV_EXPORT.png new file mode 100644 index 00000000..935c4ecc Binary files /dev/null and b/img/DEVICES_BULK_EDITING/MAINTENANCE_CSV_EXPORT.png differ diff --git a/img/DEVICES_BULK_EDITING/MULTI-EDIT.gif b/img/DEVICES_BULK_EDITING/MULTI-EDIT.gif new file mode 100644 index 00000000..70069f7d Binary files /dev/null and b/img/DEVICES_BULK_EDITING/MULTI-EDIT.gif differ diff --git a/img/DEVICES_BULK_EDITING/NOTEPAD++.png b/img/DEVICES_BULK_EDITING/NOTEPAD++.png new file mode 100644 index 00000000..18acb020 Binary files /dev/null and b/img/DEVICES_BULK_EDITING/NOTEPAD++.png differ diff --git a/img/DEVICE_MANAGEMENT/DeviceDetails_DisplaySettings.png b/img/DEVICE_MANAGEMENT/DeviceDetails_DisplaySettings.png new file mode 100644 index 
00000000..554ef280 Binary files /dev/null and b/img/DEVICE_MANAGEMENT/DeviceDetails_DisplaySettings.png differ diff --git a/img/DEVICE_MANAGEMENT/DeviceEdit_SaveDummyDevice.png b/img/DEVICE_MANAGEMENT/DeviceEdit_SaveDummyDevice.png new file mode 100644 index 00000000..bdbda921 Binary files /dev/null and b/img/DEVICE_MANAGEMENT/DeviceEdit_SaveDummyDevice.png differ diff --git a/img/DEVICE_MANAGEMENT/DeviceManagement_MainInfo.png b/img/DEVICE_MANAGEMENT/DeviceManagement_MainInfo.png new file mode 100644 index 00000000..c285b4b6 Binary files /dev/null and b/img/DEVICE_MANAGEMENT/DeviceManagement_MainInfo.png differ diff --git a/img/DEVICE_MANAGEMENT/Devices_CreateDummyDevice.png b/img/DEVICE_MANAGEMENT/Devices_CreateDummyDevice.png new file mode 100644 index 00000000..316e79c8 Binary files /dev/null and b/img/DEVICE_MANAGEMENT/Devices_CreateDummyDevice.png differ diff --git a/img/DEVICE_MANAGEMENT/device_management_status_colors.png b/img/DEVICE_MANAGEMENT/device_management_status_colors.png new file mode 100644 index 00000000..0762fa89 Binary files /dev/null and b/img/DEVICE_MANAGEMENT/device_management_status_colors.png differ diff --git a/img/DOCKER/DOCKER_PORTAINER.png b/img/DOCKER/DOCKER_PORTAINER.png new file mode 100644 index 00000000..e29a0699 Binary files /dev/null and b/img/DOCKER/DOCKER_PORTAINER.png differ diff --git a/img/FIX_OFFLINE_DETECTION/presence_graph_before_after.png b/img/FIX_OFFLINE_DETECTION/presence_graph_before_after.png new file mode 100644 index 00000000..77fc3b95 Binary files /dev/null and b/img/FIX_OFFLINE_DETECTION/presence_graph_before_after.png differ diff --git a/img/Follow_Releases_and_Star.gif b/img/Follow_Releases_and_Star.gif new file mode 100644 index 00000000..da59c92d Binary files /dev/null and b/img/Follow_Releases_and_Star.gif differ diff --git a/img/GENERAL/github_social_image.jpg b/img/GENERAL/github_social_image.jpg new file mode 100644 index 00000000..631c8f1a Binary files /dev/null and b/img/GENERAL/github_social_image.jpg differ diff --git a/img/GENERAL/in-app-help.png b/img/GENERAL/in-app-help.png new file mode 100644 index 00000000..d6d95ba7 Binary files /dev/null and b/img/GENERAL/in-app-help.png differ diff --git a/img/HOME_ASISSTANT/HomeAssistant-Configuration.png b/img/HOME_ASISSTANT/HomeAssistant-Configuration.png new file mode 100644 index 00000000..2feb06a5 Binary files /dev/null and b/img/HOME_ASISSTANT/HomeAssistant-Configuration.png differ diff --git a/img/HOME_ASISSTANT/HomeAssistant-Device-Presence-History.png b/img/HOME_ASISSTANT/HomeAssistant-Device-Presence-History.png new file mode 100644 index 00000000..ef3b3dfa Binary files /dev/null and b/img/HOME_ASISSTANT/HomeAssistant-Device-Presence-History.png differ diff --git a/img/HOME_ASISSTANT/HomeAssistant-Device-as-Sensors.png b/img/HOME_ASISSTANT/HomeAssistant-Device-as-Sensors.png new file mode 100644 index 00000000..fa23f03f Binary files /dev/null and b/img/HOME_ASISSTANT/HomeAssistant-Device-as-Sensors.png differ diff --git a/img/HOME_ASISSTANT/HomeAssistant-Devices-List.png b/img/HOME_ASISSTANT/HomeAssistant-Devices-List.png new file mode 100644 index 00000000..e2f4b1d7 Binary files /dev/null and b/img/HOME_ASISSTANT/HomeAssistant-Devices-List.png differ diff --git a/img/HOME_ASISSTANT/HomeAssistant-Overview-Card.png b/img/HOME_ASISSTANT/HomeAssistant-Overview-Card.png new file mode 100644 index 00000000..9d90d04b Binary files /dev/null and b/img/HOME_ASISSTANT/HomeAssistant-Overview-Card.png differ diff --git a/img/ICONS/device-icon.png b/img/ICONS/device-icon.png new file 
mode 100644 index 00000000..2a85ebe7 Binary files /dev/null and b/img/ICONS/device-icon.png differ diff --git a/img/ICONS/device_add_icon.png b/img/ICONS/device_add_icon.png new file mode 100644 index 00000000..b821f5c8 Binary files /dev/null and b/img/ICONS/device_add_icon.png differ diff --git a/img/ICONS/device_icons_preview.gif b/img/ICONS/device_icons_preview.gif new file mode 100644 index 00000000..01929ff3 Binary files /dev/null and b/img/ICONS/device_icons_preview.gif differ diff --git a/img/ICONS/devices-icons.png b/img/ICONS/devices-icons.png new file mode 100644 index 00000000..bc0a3a0b Binary files /dev/null and b/img/ICONS/devices-icons.png differ diff --git a/img/ICONS/font_awesome_copy_html.png b/img/ICONS/font_awesome_copy_html.png new file mode 100644 index 00000000..843e95c7 Binary files /dev/null and b/img/ICONS/font_awesome_copy_html.png differ diff --git a/img/ICONS/iconify_design_copy_svg.png b/img/ICONS/iconify_design_copy_svg.png new file mode 100644 index 00000000..a542ea4f Binary files /dev/null and b/img/ICONS/iconify_design_copy_svg.png differ diff --git a/img/ICONS/paste-svg.png b/img/ICONS/paste-svg.png new file mode 100644 index 00000000..1c732529 Binary files /dev/null and b/img/ICONS/paste-svg.png differ diff --git a/img/LOGGING/logging_integrations_plugins.png b/img/LOGGING/logging_integrations_plugins.png new file mode 100644 index 00000000..a2976cfa Binary files /dev/null and b/img/LOGGING/logging_integrations_plugins.png differ diff --git a/img/LOGGING/maintenance_logs.png b/img/LOGGING/maintenance_logs.png new file mode 100644 index 00000000..f474fce9 Binary files /dev/null and b/img/LOGGING/maintenance_logs.png differ diff --git a/img/NAME_RESOLUTION/name_res_nslookup_timeout.png b/img/NAME_RESOLUTION/name_res_nslookup_timeout.png new file mode 100644 index 00000000..e650aa5d Binary files /dev/null and b/img/NAME_RESOLUTION/name_res_nslookup_timeout.png differ diff --git a/img/NETWORK_TREE/Network_Assign.png b/img/NETWORK_TREE/Network_Assign.png new file mode 100644 index 00000000..c21f2810 Binary files /dev/null and b/img/NETWORK_TREE/Network_Assign.png differ diff --git a/img/NETWORK_TREE/Network_Assigned_Nodes.png b/img/NETWORK_TREE/Network_Assigned_Nodes.png new file mode 100644 index 00000000..5092a82e Binary files /dev/null and b/img/NETWORK_TREE/Network_Assigned_Nodes.png differ diff --git a/img/NETWORK_TREE/Network_Device_Details.png b/img/NETWORK_TREE/Network_Device_Details.png new file mode 100644 index 00000000..d009636f Binary files /dev/null and b/img/NETWORK_TREE/Network_Device_Details.png differ diff --git a/img/NETWORK_TREE/Network_Device_Details_Parent.png b/img/NETWORK_TREE/Network_Device_Details_Parent.png new file mode 100644 index 00000000..a44725e6 Binary files /dev/null and b/img/NETWORK_TREE/Network_Device_Details_Parent.png differ diff --git a/img/NETWORK_TREE/Network_Device_ParentDropdown.png b/img/NETWORK_TREE/Network_Device_ParentDropdown.png new file mode 100644 index 00000000..5481665c Binary files /dev/null and b/img/NETWORK_TREE/Network_Device_ParentDropdown.png differ diff --git a/img/NETWORK_TREE/Network_Device_type.png b/img/NETWORK_TREE/Network_Device_type.png new file mode 100644 index 00000000..dcd1a775 Binary files /dev/null and b/img/NETWORK_TREE/Network_Device_type.png differ diff --git a/img/NETWORK_TREE/Network_Sample.png b/img/NETWORK_TREE/Network_Sample.png new file mode 100644 index 00000000..69214478 Binary files /dev/null and b/img/NETWORK_TREE/Network_Sample.png differ diff --git 
a/img/NETWORK_TREE/Network_tree_details.png b/img/NETWORK_TREE/Network_tree_details.png new file mode 100644 index 00000000..add56481 Binary files /dev/null and b/img/NETWORK_TREE/Network_tree_details.png differ diff --git a/img/NETWORK_TREE/Network_tree_setup_hover.png b/img/NETWORK_TREE/Network_tree_setup_hover.png new file mode 100644 index 00000000..1a020a5d Binary files /dev/null and b/img/NETWORK_TREE/Network_tree_setup_hover.png differ diff --git a/img/NOTIFICATIONS/Device-notification-settings.png b/img/NOTIFICATIONS/Device-notification-settings.png new file mode 100644 index 00000000..6f32520d Binary files /dev/null and b/img/NOTIFICATIONS/Device-notification-settings.png differ diff --git a/img/NOTIFICATIONS/Global-notification-settings.png b/img/NOTIFICATIONS/Global-notification-settings.png new file mode 100644 index 00000000..ddc466a0 Binary files /dev/null and b/img/NOTIFICATIONS/Global-notification-settings.png differ diff --git a/img/NOTIFICATIONS/NEWDEV_ignores.png b/img/NOTIFICATIONS/NEWDEV_ignores.png new file mode 100644 index 00000000..0bdc5cb5 Binary files /dev/null and b/img/NOTIFICATIONS/NEWDEV_ignores.png differ diff --git a/img/NOTIFICATIONS/Plugin-notification-settings.png b/img/NOTIFICATIONS/Plugin-notification-settings.png new file mode 100644 index 00000000..3f29320c Binary files /dev/null and b/img/NOTIFICATIONS/Plugin-notification-settings.png differ diff --git a/img/NOTIFICATIONS/Schedules_out-of-sync.png b/img/NOTIFICATIONS/Schedules_out-of-sync.png new file mode 100644 index 00000000..6d465726 Binary files /dev/null and b/img/NOTIFICATIONS/Schedules_out-of-sync.png differ diff --git a/img/NetAlertX_logo.png b/img/NetAlertX_logo.png new file mode 100644 index 00000000..f2b8e23b Binary files /dev/null and b/img/NetAlertX_logo.png differ diff --git a/img/NetAlertX_logo_b_w_info.png b/img/NetAlertX_logo_b_w_info.png new file mode 100644 index 00000000..42eff54b Binary files /dev/null and b/img/NetAlertX_logo_b_w_info.png differ diff --git a/img/PERFORMANCE/db_size_check.png b/img/PERFORMANCE/db_size_check.png new file mode 100644 index 00000000..e3886e5e Binary files /dev/null and b/img/PERFORMANCE/db_size_check.png differ diff --git a/img/PIHOLE_GUIDE/DHCPLSS_pihole_settings.png b/img/PIHOLE_GUIDE/DHCPLSS_pihole_settings.png new file mode 100644 index 00000000..6a411aa8 Binary files /dev/null and b/img/PIHOLE_GUIDE/DHCPLSS_pihole_settings.png differ diff --git a/img/PIHOLE_GUIDE/PIHOLEAPI_settings.png b/img/PIHOLE_GUIDE/PIHOLEAPI_settings.png new file mode 100644 index 00000000..c0988167 Binary files /dev/null and b/img/PIHOLE_GUIDE/PIHOLEAPI_settings.png differ diff --git a/img/PIHOLE_GUIDE/PIHOLE_settings.png b/img/PIHOLE_GUIDE/PIHOLE_settings.png new file mode 100644 index 00000000..b1398460 Binary files /dev/null and b/img/PIHOLE_GUIDE/PIHOLE_settings.png differ diff --git a/img/PLUGINS/enable_plugin.gif b/img/PLUGINS/enable_plugin.gif new file mode 100644 index 00000000..4c439927 Binary files /dev/null and b/img/PLUGINS/enable_plugin.gif differ diff --git a/img/PLUGINS/loaded_plugins_setting.png b/img/PLUGINS/loaded_plugins_setting.png new file mode 100644 index 00000000..6b3fd998 Binary files /dev/null and b/img/PLUGINS/loaded_plugins_setting.png differ diff --git a/img/RANDOM_MAC/android_random_mac.jpg b/img/RANDOM_MAC/android_random_mac.jpg new file mode 100644 index 00000000..67929a17 Binary files /dev/null and b/img/RANDOM_MAC/android_random_mac.jpg differ diff --git a/img/RANDOM_MAC/ios_random_mac.png b/img/RANDOM_MAC/ios_random_mac.png new file 
mode 100644 index 00000000..41e1a5e2 Binary files /dev/null and b/img/RANDOM_MAC/ios_random_mac.png differ diff --git a/img/RANDOM_MAC/windows_random_mac.png b/img/RANDOM_MAC/windows_random_mac.png new file mode 100644 index 00000000..8ef309bc Binary files /dev/null and b/img/RANDOM_MAC/windows_random_mac.png differ diff --git a/img/SESSION_INFO/DeviceDetails_SessionInfo.png b/img/SESSION_INFO/DeviceDetails_SessionInfo.png new file mode 100644 index 00000000..e675ecd6 Binary files /dev/null and b/img/SESSION_INFO/DeviceDetails_SessionInfo.png differ diff --git a/img/SESSION_INFO/Monitoring_Presence.png b/img/SESSION_INFO/Monitoring_Presence.png new file mode 100644 index 00000000..8cabd783 Binary files /dev/null and b/img/SESSION_INFO/Monitoring_Presence.png differ diff --git a/img/SUBNETS/subnets-setting-location.png b/img/SUBNETS/subnets-setting-location.png new file mode 100644 index 00000000..a311f4a8 Binary files /dev/null and b/img/SUBNETS/subnets-setting-location.png differ diff --git a/img/SUBNETS/subnets_vlan.png b/img/SUBNETS/subnets_vlan.png new file mode 100644 index 00000000..f7e5235b Binary files /dev/null and b/img/SUBNETS/subnets_vlan.png differ diff --git a/img/SUBNETS/system_info-network_hardware.png b/img/SUBNETS/system_info-network_hardware.png new file mode 100644 index 00000000..d7f8ad0b Binary files /dev/null and b/img/SUBNETS/system_info-network_hardware.png differ diff --git a/img/SYNOLOGY/01_Create_folder_structure.png b/img/SYNOLOGY/01_Create_folder_structure.png new file mode 100644 index 00000000..d84824f9 Binary files /dev/null and b/img/SYNOLOGY/01_Create_folder_structure.png differ diff --git a/img/SYNOLOGY/02_Create_folder_structure_db.png b/img/SYNOLOGY/02_Create_folder_structure_db.png new file mode 100644 index 00000000..8f4ef7e5 Binary files /dev/null and b/img/SYNOLOGY/02_Create_folder_structure_db.png differ diff --git a/img/SYNOLOGY/03_Create_folder_structure_db.png b/img/SYNOLOGY/03_Create_folder_structure_db.png new file mode 100644 index 00000000..3602c2f2 Binary files /dev/null and b/img/SYNOLOGY/03_Create_folder_structure_db.png differ diff --git a/img/SYNOLOGY/04_Create_folder_structure_config.png b/img/SYNOLOGY/04_Create_folder_structure_config.png new file mode 100644 index 00000000..07887fb8 Binary files /dev/null and b/img/SYNOLOGY/04_Create_folder_structure_config.png differ diff --git a/img/SYNOLOGY/05_Access_folder_properties.png b/img/SYNOLOGY/05_Access_folder_properties.png new file mode 100644 index 00000000..707c5c4e Binary files /dev/null and b/img/SYNOLOGY/05_Access_folder_properties.png differ diff --git a/img/SYNOLOGY/06_Note_location.png b/img/SYNOLOGY/06_Note_location.png new file mode 100644 index 00000000..5dab221b Binary files /dev/null and b/img/SYNOLOGY/06_Note_location.png differ diff --git a/img/SYNOLOGY/07_Create_project.png b/img/SYNOLOGY/07_Create_project.png new file mode 100644 index 00000000..a0956a96 Binary files /dev/null and b/img/SYNOLOGY/07_Create_project.png differ diff --git a/img/SYNOLOGY/08_Adjust_docker_compose_volumes.png b/img/SYNOLOGY/08_Adjust_docker_compose_volumes.png new file mode 100644 index 00000000..91063420 Binary files /dev/null and b/img/SYNOLOGY/08_Adjust_docker_compose_volumes.png differ diff --git a/img/SYNOLOGY/09_Run_and_build.png b/img/SYNOLOGY/09_Run_and_build.png new file mode 100644 index 00000000..59f3c15c Binary files /dev/null and b/img/SYNOLOGY/09_Run_and_build.png differ diff --git a/img/VERSIONS/latest-version-maintenance.png b/img/VERSIONS/latest-version-maintenance.png new 
file mode 100644 index 00000000..a5bcb131 Binary files /dev/null and b/img/VERSIONS/latest-version-maintenance.png differ diff --git a/img/VERSIONS/new-version-available-email.png b/img/VERSIONS/new-version-available-email.png new file mode 100644 index 00000000..e4b2efa9 Binary files /dev/null and b/img/VERSIONS/new-version-available-email.png differ diff --git a/img/VERSIONS/new-version-available-maintenance.png b/img/VERSIONS/new-version-available-maintenance.png new file mode 100644 index 00000000..f965587a Binary files /dev/null and b/img/VERSIONS/new-version-available-maintenance.png differ diff --git a/img/WEBHOOK_N8N/Webhook_settings.png b/img/WEBHOOK_N8N/Webhook_settings.png new file mode 100644 index 00000000..7934c224 Binary files /dev/null and b/img/WEBHOOK_N8N/Webhook_settings.png differ diff --git a/img/WEBHOOK_N8N/n8n_send_email_settings.png b/img/WEBHOOK_N8N/n8n_send_email_settings.png new file mode 100644 index 00000000..366ee5c2 Binary files /dev/null and b/img/WEBHOOK_N8N/n8n_send_email_settings.png differ diff --git a/img/WEBHOOK_N8N/n8n_webhook_settings.png b/img/WEBHOOK_N8N/n8n_webhook_settings.png new file mode 100644 index 00000000..5b685afc Binary files /dev/null and b/img/WEBHOOK_N8N/n8n_webhook_settings.png differ diff --git a/img/WEBHOOK_N8N/n8n_workflow.png b/img/WEBHOOK_N8N/n8n_workflow.png new file mode 100644 index 00000000..96ca9c0e Binary files /dev/null and b/img/WEBHOOK_N8N/n8n_workflow.png differ diff --git a/img/WEB_UI_PORT_DEBUG/container_port.png b/img/WEB_UI_PORT_DEBUG/container_port.png new file mode 100644 index 00000000..38403927 Binary files /dev/null and b/img/WEB_UI_PORT_DEBUG/container_port.png differ diff --git a/img/WORKFLOWS/actions.jpg b/img/WORKFLOWS/actions.jpg new file mode 100644 index 00000000..1f9c5362 Binary files /dev/null and b/img/WORKFLOWS/actions.jpg differ diff --git a/img/WORKFLOWS/conditions.png b/img/WORKFLOWS/conditions.png new file mode 100644 index 00000000..382d04b7 Binary files /dev/null and b/img/WORKFLOWS/conditions.png differ diff --git a/img/WORKFLOWS/trigger.jpg b/img/WORKFLOWS/trigger.jpg new file mode 100644 index 00000000..f04a196f Binary files /dev/null and b/img/WORKFLOWS/trigger.jpg differ diff --git a/img/WORKFLOWS/workflows.png b/img/WORKFLOWS/workflows.png new file mode 100644 index 00000000..ad86ac70 Binary files /dev/null and b/img/WORKFLOWS/workflows.png differ diff --git a/img/WORKFLOWS/workflows_app_events_search.png b/img/WORKFLOWS/workflows_app_events_search.png new file mode 100644 index 00000000..4e0300d0 Binary files /dev/null and b/img/WORKFLOWS/workflows_app_events_search.png differ diff --git a/img/WORKFLOWS/workflows_diagram.png b/img/WORKFLOWS/workflows_diagram.png new file mode 100644 index 00000000..016b1aba Binary files /dev/null and b/img/WORKFLOWS/workflows_diagram.png differ diff --git a/img/WORKFLOWS/workflows_logs_search.png b/img/WORKFLOWS/workflows_logs_search.png new file mode 100644 index 00000000..c19d431e Binary files /dev/null and b/img/WORKFLOWS/workflows_logs_search.png differ diff --git a/img/YouTube_thumbnail.png b/img/YouTube_thumbnail.png new file mode 100644 index 00000000..5afe92e5 Binary files /dev/null and b/img/YouTube_thumbnail.png differ diff --git a/img/device_details.png b/img/device_details.png new file mode 100644 index 00000000..6c4308fd Binary files /dev/null and b/img/device_details.png differ diff --git a/img/device_nmap.png b/img/device_nmap.png new file mode 100644 index 00000000..c1208edb Binary files /dev/null and b/img/device_nmap.png differ diff 
--git a/img/devices_dark.png b/img/devices_dark.png new file mode 100644 index 00000000..5ecbdfd4 Binary files /dev/null and b/img/devices_dark.png differ diff --git a/img/devices_light.png b/img/devices_light.png new file mode 100644 index 00000000..ee48506a Binary files /dev/null and b/img/devices_light.png differ diff --git a/img/devices_split.png b/img/devices_split.png new file mode 100644 index 00000000..4a27a1bb Binary files /dev/null and b/img/devices_split.png differ diff --git a/img/events.png b/img/events.png new file mode 100644 index 00000000..0c6a4d53 Binary files /dev/null and b/img/events.png differ diff --git a/img/help_faq.png b/img/help_faq.png new file mode 100644 index 00000000..67ade01d Binary files /dev/null and b/img/help_faq.png differ diff --git a/img/maintenance.png b/img/maintenance.png new file mode 100644 index 00000000..93958686 Binary files /dev/null and b/img/maintenance.png differ diff --git a/img/multi_edit.png b/img/multi_edit.png new file mode 100644 index 00000000..9ee8c2b0 Binary files /dev/null and b/img/multi_edit.png differ diff --git a/img/netalertx_docs.png b/img/netalertx_docs.png new file mode 100644 index 00000000..4250174f Binary files /dev/null and b/img/netalertx_docs.png differ diff --git a/img/netalertx_docs_old.png b/img/netalertx_docs_old.png new file mode 100644 index 00000000..516e5210 Binary files /dev/null and b/img/netalertx_docs_old.png differ diff --git a/img/network.png b/img/network.png new file mode 100644 index 00000000..b87d42f5 Binary files /dev/null and b/img/network.png differ diff --git a/img/network_setup.gif b/img/network_setup.gif new file mode 100644 index 00000000..a95faf48 Binary files /dev/null and b/img/network_setup.gif differ diff --git a/img/notification_center.png b/img/notification_center.png new file mode 100644 index 00000000..dbe421d9 Binary files /dev/null and b/img/notification_center.png differ diff --git a/img/plugins.png b/img/plugins.png new file mode 100644 index 00000000..c5514d3d Binary files /dev/null and b/img/plugins.png differ diff --git a/img/plugins_device_details.png b/img/plugins_device_details.png new file mode 100644 index 00000000..ba32078a Binary files /dev/null and b/img/plugins_device_details.png differ diff --git a/img/plugins_json_settings.png b/img/plugins_json_settings.png new file mode 100644 index 00000000..68aab490 Binary files /dev/null and b/img/plugins_json_settings.png differ diff --git a/img/plugins_json_ui.png b/img/plugins_json_ui.png new file mode 100644 index 00000000..c01636d0 Binary files /dev/null and b/img/plugins_json_ui.png differ diff --git a/img/plugins_settings.png b/img/plugins_settings.png new file mode 100644 index 00000000..1b67340d Binary files /dev/null and b/img/plugins_settings.png differ diff --git a/img/plugins_webmon.png b/img/plugins_webmon.png new file mode 100644 index 00000000..f5facab3 Binary files /dev/null and b/img/plugins_webmon.png differ diff --git a/img/presence.png b/img/presence.png new file mode 100644 index 00000000..dce12bbd Binary files /dev/null and b/img/presence.png differ diff --git a/img/report_sample.png b/img/report_sample.png new file mode 100644 index 00000000..7b688864 Binary files /dev/null and b/img/report_sample.png differ diff --git a/img/sent_reports_text.png b/img/sent_reports_text.png new file mode 100644 index 00000000..81c44651 Binary files /dev/null and b/img/sent_reports_text.png differ diff --git a/img/settings.png b/img/settings.png new file mode 100644 index 00000000..7feb6d34 Binary files /dev/null and 
b/img/settings.png differ diff --git a/img/showcase.gif b/img/showcase.gif new file mode 100644 index 00000000..79811286 Binary files /dev/null and b/img/showcase.gif differ diff --git a/img/size_h_1250_w_1000.txt b/img/size_h_1250_w_1000.txt new file mode 100644 index 00000000..a380c4d5 --- /dev/null +++ b/img/size_h_1250_w_1000.txt @@ -0,0 +1 @@ +Screenshot size: height: 1250px width: 1000px \ No newline at end of file diff --git a/img/sync_hub.png b/img/sync_hub.png new file mode 100644 index 00000000..dc452e2d Binary files /dev/null and b/img/sync_hub.png differ diff --git a/index.html b/index.html new file mode 100644 index 00000000..1e500371 --- /dev/null +++ b/index.html @@ -0,0 +1,4321 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + NetAlertX Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + + + + +
+ + +
+ + +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+ +
+ + + + + + + + + + + + + + + + + + + + + +

NetAlertX Documentation

+

Welcome to the official NetAlertX documentation! NetAlertX is a powerful tool designed to simplify the management and monitoring of your network. Below, you will find guides and resources to help you set up, configure, and troubleshoot your NetAlertX instance.

+

Preview

+

In-App Help

+

NetAlertX provides contextual help within the application:

+
    +
  • Hover over settings, fields, or labels to see additional tooltips and guidance.
  • +
  • Click ❔ (question-mark) icons next to various elements to view detailed information.
  • +
+
+

Installation Guides

+

The app can be installed in different ways, with the best support for Docker-based deployments. This includes the Home Assistant and Unraid installation approaches. See the details below.

+

Docker (Fully Supported)

+

NetAlertX is fully supported in Docker environments, allowing for easy setup and configuration. Follow the official guide to get started:

+ +

This guide will take you through the process of setting up NetAlertX using Docker Compose or standalone Docker commands.
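For a quick orientation only, a minimal standalone Docker command might look roughly like the sketch below; the image name, volume paths, and host networking shown here are assumptions for illustration, so confirm them against the Docker Installation Guide before use.

# Minimal sketch only - verify the image name, volumes and networking in the official guide
docker run -d --name netalertx \
  --network host \
  -e TZ=Europe/London \
  -v "$(pwd)/config:/app/config" \
  -v "$(pwd)/db:/app/db" \
  ghcr.io/jokob-sk/netalertx:latest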

+

Home Assistant (Fully Supported)

+

You can also install NetAlertX as a Home Assistant add-on via the alexbelgium/hassio-addons repository. This is only possible if you run a supervised instance of Home Assistant. If not, you can still run NetAlertX in a separate Docker container and follow this guide to configure MQTT.

+ +

Unraid (Partial Support)

+

The Unraid template was created by the community, so it's only partially supported. Alternatively, here is another version of the Unraid template.

+ +

Bare-Metal Installation (Experimental)

+

If you prefer to run NetAlertX on your own hardware, you can try the experimental bare-metal installation. Please note that this method is still under development, and we are looking for maintainers to help improve it.

+ +
+

Help and Support

+

If you need help or run into issues, here are some resources to guide you:

+

Before opening an issue, please:

+ +

Need more help? Join the community discussions or submit a support request:

+ +
+

Contributing

+

NetAlertX is open-source and welcomes contributions from the community! If you'd like to help improve the software, please follow the guidelines below:

+
    +
  • Fork the repository and make your changes.
  • +
  • Submit a pull request with a detailed description of what you’ve changed and why.
  • +
+

For more information on contributing, check out our Dev Guide.

+
+

Stay Updated

+

To keep up with the latest changes and updates to NetAlertX, please refer to the following resources:

+ +

Make sure to follow the project on GitHub to get notifications for new releases and important updates.

+
+

Additional info

+ +

If you have any suggestions or improvements, please don’t hesitate to contribute!

+

NetAlertX is actively maintained. You can find the source code, report bugs, or request new features on our GitHub page.

+ + + + + + + + + + + + + +
+
+ + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/overrides/main.html b/overrides/main.html new file mode 100644 index 00000000..d2385ec1 --- /dev/null +++ b/overrides/main.html @@ -0,0 +1,28 @@ +{% extends "base.html" %} + +{% block analytics %} + + + + + + + + + {{ super() }} +{% endblock %} + +{% block header %} + + + + + {{ super() }} +{% endblock %} \ No newline at end of file diff --git a/samples/API/Grafana_Dashboard.json b/samples/API/Grafana_Dashboard.json new file mode 100644 index 00000000..5c35d9a0 --- /dev/null +++ b/samples/API/Grafana_Dashboard.json @@ -0,0 +1,1110 @@ +{ + "annotations": { + "list": [ + { + "builtIn": 1, + "datasource": { + "type": "grafana", + "uid": "-- Grafana --" + }, + "enable": true, + "hide": true, + "iconColor": "rgba(0, 211, 255, 1)", + "name": "Annotations & Alerts", + "type": "dashboard" + } + ] + }, + "editable": true, + "fiscalYearStartMonth": 0, + "graphTooltip": 0, + "id": 7, + "links": [], + "panels": [ + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "fixedColor": "#00a3cc", + "mode": "fixed" + }, + "decimals": 0, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green" + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 4, + "x": 0, + "y": 0 + }, + "id": 1, + "options": { + "colorMode": "background", + "graphMode": "area", + "justifyMode": "auto", + "orientation": "auto", + "percentChangeColorMode": "standard", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showPercentChange": false, + "textMode": "auto", + "wideLayout": true + }, + "pluginVersion": "12.0.0", + "targets": [ + { + "expr": "netalertx_connected_devices + netalertx_offline_devices", + "refId": "A" + } + ], + "title": "Total Devices", + "type": "stat" + }, + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "fieldConfig": { + "defaults": { + "color": { + "fixedColor": "#159d60", + "mode": "fixed" + }, + "decimals": 0, + "mappings": [], + "max": 100, + "min": 0, + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green" + }, + { + "color": "red", + "value": 80 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 4, + "x": 4, + "y": 0 + }, + "id": 2, + "options": { + "colorMode": "background", + "graphMode": "area", + "justifyMode": "auto", + "orientation": "auto", + "percentChangeColorMode": "standard", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showPercentChange": false, + "textMode": "auto", + "wideLayout": true + }, + "pluginVersion": "12.0.0", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "expr": "netalertx_connected_devices", + "refId": "A" + } + ], + "title": "Connected", + "type": "stat" + }, + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "fieldConfig": { + "defaults": { + "color": { + "fixedColor": "#b1720c", + "mode": "fixed" + }, + "decimals": 0, + "mappings": [], + "max": 100, + "min": 0, + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green" + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 4, + "x": 8, + "y": 0 + }, + "id": 3, + "options": { + "colorMode": "background", + "graphMode": "area", + "justifyMode": "auto", + "orientation": "auto", + 
"percentChangeColorMode": "standard", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showPercentChange": false, + "textMode": "auto", + "wideLayout": true + }, + "pluginVersion": "12.0.0", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "expr": "netalertx_favorite_devices", + "refId": "A" + } + ], + "title": "Favorites", + "type": "stat" + }, + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "fieldConfig": { + "defaults": { + "color": { + "fixedColor": "#F59E42", + "mode": "fixed" + }, + "decimals": 0, + "mappings": [], + "max": 100, + "min": 0, + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green" + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 4, + "x": 12, + "y": 0 + }, + "id": 4, + "options": { + "colorMode": "background", + "graphMode": "area", + "justifyMode": "auto", + "orientation": "auto", + "percentChangeColorMode": "standard", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showPercentChange": false, + "textMode": "auto", + "wideLayout": true + }, + "pluginVersion": "12.0.0", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "expr": "netalertx_new_devices", + "refId": "A" + } + ], + "title": "New Devices", + "type": "stat" + }, + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "fieldConfig": { + "defaults": { + "color": { + "fixedColor": "#EF4444", + "mode": "fixed" + }, + "decimals": 0, + "mappings": [], + "max": 100, + "min": 0, + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green" + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 4, + "x": 16, + "y": 0 + }, + "id": 5, + "options": { + "colorMode": "background", + "graphMode": "area", + "justifyMode": "auto", + "orientation": "auto", + "percentChangeColorMode": "standard", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showPercentChange": false, + "textMode": "auto", + "wideLayout": true + }, + "pluginVersion": "12.0.0", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "expr": "netalertx_down_devices", + "refId": "A" + } + ], + "title": "Down", + "type": "stat" + }, + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "fieldConfig": { + "defaults": { + "color": { + "fixedColor": "#6B7280", + "mode": "fixed" + }, + "decimals": 0, + "mappings": [], + "max": 100, + "min": 0, + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green" + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [] + }, + "gridPos": { + "h": 3, + "w": 4, + "x": 20, + "y": 0 + }, + "id": 6, + "options": { + "colorMode": "background", + "graphMode": "area", + "justifyMode": "auto", + "orientation": "auto", + "percentChangeColorMode": "standard", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showPercentChange": false, + "textMode": "auto", + "wideLayout": true + }, + "pluginVersion": "12.0.0", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "expr": "netalertx_archived_devices", + "refId": "A" + } + ], + "title": "Archived", + 
"type": "stat" + }, + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "fillOpacity": 80, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineWidth": 1, + "scaleDistribution": { + "type": "linear" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "decimals": 0, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green" + } + ] + }, + "unit": "none" + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "Connected" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "#107648", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "Offline" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "#888888", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "Down Devices" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "#913225", + "mode": "fixed" + } + } + ] + } + ] + }, + "gridPos": { + "h": 9, + "w": 24, + "x": 0, + "y": 3 + }, + "id": 7, + "options": { + "barRadius": 0, + "barWidth": 0.9, + "displayMode": "stacked", + "fullHighlight": false, + "groupWidth": 0.7, + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "minInterval": "1m", + "orientation": "vertical", + "showValue": "never", + "stacking": "normal", + "text": { + "valueSize": 10 + }, + "tooltip": { + "hideZeros": false, + "mode": "single", + "sort": "none" + }, + "xTickLabelRotation": -45, + "xTickLabelSpacing": 0 + }, + "pluginVersion": "12.0.0", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "expr": "netalertx_connected_devices", + "legendFormat": "Connected", + "refId": "A" + }, + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "expr": "netalertx_total_devices - (netalertx_connected_devices + netalertx_down_devices)", + "legendFormat": "Offline", + "refId": "B" + }, + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "expr": "netalertx_down_devices", + "legendFormat": "Down Devices", + "refId": "C" + } + ], + "title": "Device Presence", + "type": "barchart" + }, + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "description": "Connected (Online) Devices", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "custom": { + "align": "auto", + "cellOptions": { + "type": "color-text" + }, + "filterable": true, + "inspect": false, + "minWidth": 120 + }, + "fieldMinMax": false, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "semi-dark-green" + } + ] + }, + "unit": "short" + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "Type" + }, + "properties": [ + { + "id": "custom.width", + "value": 94 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "Status" + }, + "properties": [ + { + "id": "custom.width", + "value": 70 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "IP" + }, + "properties": [ + { + "id": "custom.width", + "value": 100 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "MAC" + }, + "properties": [ + { + "id": 
"custom.width", + "value": 134 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "Time" + }, + "properties": [ + { + "id": "custom.width", + "value": 188 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "Device" + }, + "properties": [ + { + "id": "custom.width", + "value": 300 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "Vendor" + }, + "properties": [ + { + "id": "custom.width", + "value": 300 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "First Connect" + }, + "properties": [ + { + "id": "custom.width", + "value": 221 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "Last Connect" + }, + "properties": [ + { + "id": "custom.width", + "value": 200 + } + ] + } + ] + }, + "gridPos": { + "h": 10, + "w": 24, + "x": 0, + "y": 12 + }, + "id": 1001, + "options": { + "cellHeight": "sm", + "columns": [ + { + "text": "Time" + }, + { + "text": "device" + }, + { + "text": "ip" + }, + { + "text": "mac" + }, + { + "text": "vendor" + }, + { + "text": "dev_type" + }, + { + "text": "first_connection" + }, + { + "text": "last_connection" + } + ], + "footer": { + "countRows": false, + "enablePagination": true, + "fields": "", + "reducer": [], + "show": false + }, + "showHeader": true, + "sortBy": [] + }, + "pluginVersion": "12.0.0", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "editorMode": "code", + "expr": "netalertx_device_status{device_status=\"Online\"}", + "format": "table", + "range": true, + "refId": "A" + } + ], + "title": "Connected Devices", + "transformations": [ + { + "id": "organize", + "options": { + "excludeByName": { + "Value": true, + "__name__": true, + "device_status": false, + "instance": true, + "job": true + }, + "includeByName": {}, + "indexByName": { + "Time": 0, + "Value": 9, + "__name__": 12, + "dev_type": 5, + "device": 2, + "device_status": 6, + "first_connection": 7, + "instance": 11, + "ip": 1, + "job": 10, + "last_connection": 8, + "mac": 3, + "vendor": 4 + }, + "renameByName": { + "Time": "", + "dev_type": "Type", + "device": "Device", + "device_status": "Status", + "first_connection": "First Connect", + "ip": "IP", + "job": "", + "last_connection": "Last Connect", + "mac": "MAC", + "vendor": "Vendor" + } + } + }, + { + "id": "limit", + "options": { + "limitField": "100" + } + } + ], + "transparent": true, + "type": "table" + }, + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "description": "Disconnected(Offline) Devices", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "custom": { + "align": "auto", + "cellOptions": { + "type": "color-text" + }, + "filterable": true, + "inspect": false, + "minWidth": 120 + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "#ad2030" + } + ] + }, + "unit": "short" + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "Type" + }, + "properties": [ + { + "id": "custom.width", + "value": 94 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "Status" + }, + "properties": [ + { + "id": "custom.width", + "value": 70 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "IP" + }, + "properties": [ + { + "id": "custom.width", + "value": 100 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "MAC" + }, + "properties": [ + { + "id": "custom.width", + "value": 134 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "Time" + }, + "properties": [ + { + "id": 
"custom.width", + "value": 188 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "Device" + }, + "properties": [ + { + "id": "custom.width", + "value": 300 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "Vendor" + }, + "properties": [ + { + "id": "custom.width", + "value": 300 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "First Connect" + }, + "properties": [ + { + "id": "custom.width", + "value": 221 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "Last Connect" + }, + "properties": [ + { + "id": "custom.width", + "value": 200 + } + ] + } + ] + }, + "gridPos": { + "h": 10, + "w": 24, + "x": 0, + "y": 22 + }, + "id": 1002, + "options": { + "cellHeight": "sm", + "columns": [ + { + "text": "Time" + }, + { + "text": "device" + }, + { + "text": "ip" + }, + { + "text": "mac" + }, + { + "text": "vendor" + }, + { + "text": "dev_type" + }, + { + "text": "first_connection" + }, + { + "text": "last_connection" + } + ], + "footer": { + "countRows": false, + "enablePagination": true, + "fields": "", + "reducer": [], + "show": false + }, + "showHeader": true, + "sortBy": [] + }, + "pluginVersion": "12.0.0", + "targets": [ + { + "datasource": { + "type": "prometheus", + "uid": "PBFA97CFB590B2093" + }, + "editorMode": "code", + "expr": "netalertx_device_status{device_status=\"Offline\"}", + "format": "table", + "range": true, + "refId": "A" + } + ], + "title": "Disconnected Devices", + "transformations": [ + { + "id": "organize", + "options": { + "excludeByName": { + "Value": true, + "__name__": true, + "device_status": false, + "instance": true, + "job": true + }, + "includeByName": {}, + "indexByName": { + "Time": 0, + "Value": 9, + "__name__": 12, + "dev_type": 5, + "device": 2, + "device_status": 6, + "first_connection": 7, + "instance": 11, + "ip": 1, + "job": 10, + "last_connection": 8, + "mac": 3, + "vendor": 4 + }, + "renameByName": { + "Time": "", + "dev_type": "Type", + "device": "Device", + "device_status": "Status", + "first_connection": "First Connect", + "ip": "IP", + "job": "", + "last_connection": "Last Connect", + "mac": "MAC", + "vendor": "Vendor" + } + } + }, + { + "id": "limit", + "options": { + "limitField": "100" + } + } + ], + "transparent": true, + "type": "table" + } + ], + "preload": false, + "refresh": "30s", + "schemaVersion": 41, + "tags": [], + "templating": { + "list": [] + }, + "time": { + "from": "now-15m", + "to": "now" + }, + "timepicker": {}, + "timezone": "browser", + "title": "NetAlertX Overview", + "uid": "netalertx-overview_8_4_2025", + "version": 2 +} \ No newline at end of file diff --git a/search/search_index.json b/search/search_index.json new file mode 100644 index 00000000..dd58afba --- /dev/null +++ b/search/search_index.json @@ -0,0 +1 @@ +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"],"fields":{"title":{"boost":1000.0},"text":{"boost":1.0},"tags":{"boost":1000000.0}}},"docs":[{"location":"","title":"NetAlertX Documentation","text":"

Welcome to the official NetAlertX documentation! NetAlertX is a powerful tool designed to simplify the management and monitoring of your network. Below, you will find guides and resources to help you set up, configure, and troubleshoot your NetAlertX instance.

"},{"location":"#in-app-help","title":"In-App Help","text":"

NetAlertX provides contextual help within the application:

  • Hover over settings, fields, or labels to see additional tooltips and guidance.
  • Click \u2754 (question-mark) icons next to various elements to view detailed information.
"},{"location":"#installation-guides","title":"Installation Guides","text":"

The app can be installed in different ways, with the best support for Docker-based deployments. This includes the Home Assistant and Unraid installation approaches. See the details below.

"},{"location":"#docker-fully-supported","title":"Docker (Fully Supported)","text":"

NetAlertX is fully supported in Docker environments, allowing for easy setup and configuration. Follow the official guide to get started:

  • Docker Installation Guide

This guide will take you through the process of setting up NetAlertX using Docker Compose or standalone Docker commands.

"},{"location":"#home-assistant-fully-supported","title":"Home Assistant (Fully Supported)","text":"

You can also install NetAlertX as a Home Assistant add-on via the alexbelgium/hassio-addons repository. This is only possible if you run a supervised instance of Home Assistant. If not, you can still run NetAlertX in a separate Docker container and follow this guide to configure MQTT.

  • [Installation] Home Assistant
"},{"location":"#unraid-partial-support","title":"Unraid (Partial Support)","text":"

The Unraid template was created by the community, so it's only partially supported. Alternatively, here is another version of the Unraid template.

  • [Installation] Unraid App
"},{"location":"#bare-metal-installation-experimental","title":"Bare-Metal Installation (Experimental)","text":"

If you prefer to run NetAlertX on your own hardware, you can try the experimental bare-metal installation. Please note that this method is still under development, and we are looking for maintainers to help improve it.

  • Bare-Metal Installation Guide
"},{"location":"#help-and-support","title":"Help and Support","text":"

If you need help or run into issues, here are some resources to guide you:

Before opening an issue, please:

  • Check common issues to see if your problem has already been reported.
  • Look at closed issues for possible solutions to past problems.
  • Enable debugging to gather more information: Debug Guide.

Need more help? Join the community discussions or submit a support request:

  • Visit the GitHub Discussions for community support.
  • If you are experiencing issues that require immediate attention, consider opening an issue on our GitHub Issues page.
"},{"location":"#contributing","title":"Contributing","text":"

NetAlertX is open-source and welcomes contributions from the community! If you'd like to help improve the software, please follow the guidelines below:

  • Fork the repository and make your changes.
  • Submit a pull request with a detailed description of what you\u2019ve changed and why.

For more information on contributing, check out our Dev Guide.

"},{"location":"#stay-updated","title":"Stay Updated","text":"

To keep up with the latest changes and updates to NetAlertX, please refer to the following resources:

  • Releases

Make sure to follow the project on GitHub to get notifications for new releases and important updates.

"},{"location":"#additional-info","title":"Additional info","text":"
  • Documentation Index: Check out the full documentation index for all the guides available.

If you have any suggestions or improvements, please don\u2019t hesitate to contribute!

NetAlertX is actively maintained. You can find the source code, report bugs, or request new features on our GitHub page.

"},{"location":"API/","title":"API Documentation","text":"

This API provides programmatic access to devices, events, sessions, metrics, network tools, and sync in NetAlertX. It is implemented as a REST and GraphQL server. All requests require authentication via API Token (API_TOKEN setting) unless explicitly noted. For example, to authorize a GraphQL request, you need to use an Authorization: Bearer API_TOKEN header as in the example below:

curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n

The API server runs on 0.0.0.0:<graphql_port> with CORS enabled for all main endpoints.

"},{"location":"API/#authentication","title":"Authentication","text":"

All endpoints require an API token provided in the HTTP headers:

Authorization: Bearer <API_TOKEN>\n

If the token is missing or invalid, the server will return:

{ \"error\": \"Forbidden\" }\n
"},{"location":"API/#base-url","title":"Base URL","text":"
http://<server>:<GRAPHQL_PORT>/\n
"},{"location":"API/#endpoints","title":"Endpoints","text":"

Tip

When retrieving devices or settings try using the GraphQL API endpoint first as it is read-optimized.

  • Device API Endpoints \u2013 Manage individual devices
  • Devices Collection \u2013 Bulk operations on multiple devices
  • Events \u2013 Device event logging and management
  • Sessions \u2013 Connection sessions and history
  • Settings \u2013 Settings
  • Messaging:
    • In app messaging - In-app messaging
  • Metrics \u2013 Prometheus metrics and per-device status
  • Network Tools \u2013 Utilities like Wake-on-LAN, traceroute, nslookup, nmap, and internet info
  • Online History \u2013 Online/offline device records
  • GraphQL \u2013 Advanced queries and filtering for Devices, Settings and Language Strings
  • Sync \u2013 Synchronization between multiple NetAlertX instances
  • Logs \u2013 Purging of logs and adding to the event execution queue for user triggered events
  • DB query (\u26a0 Internal) - Low level database access - use other endpoints if possible

See Testing for example requests and usage.

"},{"location":"API/#notes","title":"Notes","text":"
  • All endpoints enforce Bearer token authentication.
  • Errors return JSON with success: False and an error message.
  • GraphQL is available for advanced queries, while REST endpoints cover structured use cases.
  • Endpoints run on 0.0.0.0:<GRAPHQL_PORT> with CORS enabled.
  • Use consistent API tokens and node/plugin names when interacting with /sync to ensure data integrity.
"},{"location":"API_DBQUERY/","title":"Database Query API","text":"

The Database Query API provides direct, low-level access to the NetAlertX database. It allows read, write, update, and delete operations against tables, using base64-encoded SQL or structured parameters.

Warning

This API is primarily used internally to generate and render the application UI. These endpoints are low-level and powerful, and should be used with caution. Wherever possible, prefer the standard API endpoints. Invalid or unsafe queries can corrupt data. If you need data in a specific format that is not already provided, please open an issue or pull request with a clear, broadly useful use case. This helps ensure new endpoints benefit the wider community rather than relying on raw database queries.

"},{"location":"API_DBQUERY/#authentication","title":"Authentication","text":"

All /dbquery/* endpoints require an API token in the HTTP headers:

Authorization: Bearer <API_TOKEN>\n

If the token is missing or invalid:

{ \"error\": \"Forbidden\" }\n
"},{"location":"API_DBQUERY/#endpoints","title":"Endpoints","text":""},{"location":"API_DBQUERY/#1-post-dbqueryread","title":"1. POST /dbquery/read","text":"

Execute a read-only SQL query (e.g., SELECT).

"},{"location":"API_DBQUERY/#request-body","title":"Request Body","text":"
{\n  \"rawSql\": \"U0VMRUNUICogRlJPTSBERVZJQ0VT\"   // base64 encoded SQL\n}\n

Decoded SQL:

SELECT * FROM Devices;\n
"},{"location":"API_DBQUERY/#response","title":"Response","text":"
{\n  \"success\": true,\n  \"results\": [\n    { \"devMac\": \"AA:BB:CC:DD:EE:FF\", \"devName\": \"Phone\" }\n  ]\n}\n
"},{"location":"API_DBQUERY/#curl-example","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/dbquery/read\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"rawSql\": \"U0VMRUNUICogRlJPTSBERVZJQ0VT\"\n  }'\n
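Because rawSql must be base64 encoded, a small shell sketch such as the one below can be used to build the payload before calling the read endpoint; the SELECT statement itself is only an illustrative example.

# Encode an arbitrary read-only query and send it to /dbquery/read
SQL='SELECT devMac, devName FROM Devices LIMIT 5;'
RAW_SQL=$(printf '%s' "$SQL" | base64 | tr -d '\n')

curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/dbquery/read" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d "{\"rawSql\": \"$RAW_SQL\"}"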
"},{"location":"API_DBQUERY/#2-post-dbqueryupdate-safer-than-dbquerywrite","title":"2. POST /dbquery/update (safer than /dbquery/write)","text":"

Update rows in a table by columnName + id. /dbquery/update is parameterized to reduce the risk of SQL injection, while /dbquery/write executes raw SQL directly.

"},{"location":"API_DBQUERY/#request-body_1","title":"Request Body","text":"
{\n  \"columnName\": \"devMac\",\n  \"id\": [\"AA:BB:CC:DD:EE:FF\"],\n  \"dbtable\": \"Devices\",\n  \"columns\": [\"devName\", \"devOwner\"],\n  \"values\": [\"Laptop\", \"Alice\"]\n}\n
"},{"location":"API_DBQUERY/#response_1","title":"Response","text":"
{ \"success\": true, \"updated_count\": 1 }\n
"},{"location":"API_DBQUERY/#curl-example_1","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/dbquery/update\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"columnName\": \"devMac\",\n    \"id\": [\"AA:BB:CC:DD:EE:FF\"],\n    \"dbtable\": \"Devices\",\n    \"columns\": [\"devName\", \"devOwner\"],\n    \"values\": [\"Laptop\", \"Alice\"]\n  }'\n
"},{"location":"API_DBQUERY/#3-post-dbquerywrite","title":"3. POST /dbquery/write","text":"

Execute a write query (INSERT, UPDATE, DELETE).

"},{"location":"API_DBQUERY/#request-body_2","title":"Request Body","text":"
{\n  \"rawSql\": \"SU5TRVJUIElOVE8gRGV2aWNlcyAoZGV2TWFjLCBkZXYgTmFtZSwgZGV2Rmlyc3RDb25uZWN0aW9uLCBkZXZMYXN0Q29ubmVjdGlvbiwgZGV2TGFzdElQKSBWQUxVRVMgKCc2QTpCQjo0Qzo1RDo2RTonLCAnVGVzdERldmljZScsICcyMDI1LTA4LTMwIDEyOjAwOjAwJywgJzIwMjUtMDgtMzAgMTI6MDA6MDAnLCAnMTAuMC4wLjEwJyk=\"\n}\n

Decoded SQL:

INSERT INTO Devices (devMac, devName, devFirstConnection, devLastConnection, devLastIP)\nVALUES ('6A:BB:4C:5D:6E', 'TestDevice', '2025-08-30 12:00:00', '2025-08-30 12:00:00', '10.0.0.10');\n
"},{"location":"API_DBQUERY/#response_2","title":"Response","text":"
{ \"success\": true, \"affected_rows\": 1 }\n
"},{"location":"API_DBQUERY/#curl-example_2","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/dbquery/write\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"rawSql\": \"SU5TRVJUIElOVE8gRGV2aWNlcyAoZGV2TWFjLCBkZXYgTmFtZSwgZGV2Rmlyc3RDb25uZWN0aW9uLCBkZXZMYXN0Q29ubmVjdGlvbiwgZGV2TGFzdElQKSBWQUxVRVMgKCc2QTpCQjo0Qzo1RDo2RTonLCAnVGVzdERldmljZScsICcyMDI1LTA4LTMwIDEyOjAwOjAwJywgJzIwMjUtMDgtMzAgMTI6MDA6MDAnLCAnMTAuMC4wLjEwJyk=\"\n  }'\n
"},{"location":"API_DBQUERY/#4-post-dbquerydelete","title":"4. POST /dbquery/delete","text":"

Delete rows in a table by columnName + id.

"},{"location":"API_DBQUERY/#request-body_3","title":"Request Body","text":"
{\n  \"columnName\": \"devMac\",\n  \"id\": [\"AA:BB:CC:DD:EE:FF\"],\n  \"dbtable\": \"Devices\"\n}\n
"},{"location":"API_DBQUERY/#response_3","title":"Response","text":"
{ \"success\": true, \"deleted_count\": 1 }\n
"},{"location":"API_DBQUERY/#curl-example_3","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/dbquery/delete\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"columnName\": \"devMac\",\n    \"id\": [\"AA:BB:CC:DD:EE:FF\"],\n    \"dbtable\": \"Devices\"\n  }'\n
"},{"location":"API_DEVICE/","title":"Device API Endpoints","text":"

Manage a single device by its MAC address. Operations include retrieval, updates, deletion, resetting properties, and copying data between devices. All endpoints require authorization via Bearer token.

"},{"location":"API_DEVICE/#1-retrieve-device-details","title":"1. Retrieve Device Details","text":"
  • GET /device/<mac> Fetch all details for a single device, including:

  • Computed status (devStatus) \u2192 On-line, Off-line, or Down

  • Session and event counts (devSessions, devEvents, devDownAlerts)
  • Presence hours (devPresenceHours)
  • Children devices (devChildrenDynamic) and NIC children (devChildrenNicsDynamic)

Special case: mac=new returns a template for a new device with default values.

Response (success):

{\n  \"devMac\": \"AA:BB:CC:DD:EE:FF\",\n  \"devName\": \"Net - Huawei\",\n  \"devOwner\": \"Admin\",\n  \"devType\": \"Router\",\n  \"devVendor\": \"Huawei\",\n  \"devStatus\": \"On-line\",\n  \"devSessions\": 12,\n  \"devEvents\": 5,\n  \"devDownAlerts\": 1,\n  \"devPresenceHours\": 32,\n  \"devChildrenDynamic\": [...],\n  \"devChildrenNicsDynamic\": [...],\n  ...\n}\n

Error Responses:

  • Device not found \u2192 HTTP 404
  • Unauthorized \u2192 HTTP 403
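To retrieve the new-device template described in the special case above, the same endpoint can be called with mac=new (host, port, and token placeholders as in the other examples):

curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/device/new" \
  -H "Authorization: Bearer <API_TOKEN>"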
"},{"location":"API_DEVICE/#2-update-device-fields","title":"2. Update Device Fields","text":"
  • POST /device/<mac> Create or update a device record.

Request Body:

{\n  \"devName\": \"New Device\",\n  \"devOwner\": \"Admin\",\n  \"createNew\": true\n}\n

Behavior:

  • If createNew=true \u2192 creates a new device
  • Otherwise \u2192 updates existing device fields

Response:

{\n  \"success\": true\n}\n

Error Responses:

  • Unauthorized \u2192 HTTP 403
"},{"location":"API_DEVICE/#3-delete-a-device","title":"3. Delete a Device","text":"
  • DELETE /device/<mac>/delete Deletes the device with the given MAC.

Response:

{\n  \"success\": true\n}\n

Error Responses:

  • Unauthorized \u2192 HTTP 403
"},{"location":"API_DEVICE/#4-delete-all-events-for-a-device","title":"4. Delete All Events for a Device","text":"
  • DELETE /device/<mac>/events/delete Removes all events associated with a device.

Response:

{\n  \"success\": true\n}\n
"},{"location":"API_DEVICE/#5-reset-device-properties","title":"5. Reset Device Properties","text":"
  • POST /device/<mac>/reset-props Resets the device's custom properties to default values.

Request Body: Optional JSON for additional parameters.

Response:

{\n  \"success\": true\n}\n
"},{"location":"API_DEVICE/#6-copy-device-data","title":"6. Copy Device Data","text":"
  • POST /device/copy Copy all data from one device to another. If a device exists with macTo, it is replaced.

Request Body:

{\n  \"macFrom\": \"AA:BB:CC:DD:EE:FF\",\n  \"macTo\": \"11:22:33:44:55:66\"\n}\n

Response:

{\n  \"success\": true,\n  \"message\": \"Device copied from AA:BB:CC:DD:EE:FF to 11:22:33:44:55:66\"\n}\n

Error Responses:

  • Missing macFrom or macTo \u2192 HTTP 400
  • Unauthorized \u2192 HTTP 403
"},{"location":"API_DEVICE/#7-update-a-single-column","title":"7. Update a Single Column","text":"
  • POST /device/<mac>/update-column Update one specific column for a device.

Request Body:

{\n  \"columnName\": \"devName\",\n  \"columnValue\": \"Updated Device Name\"\n}\n

Response (success):

{\n  \"success\": true\n}\n

Error Responses:

  • Device not found \u2192 HTTP 404
  • Missing columnName or columnValue \u2192 HTTP 400
  • Unauthorized \u2192 HTTP 403
"},{"location":"API_DEVICE/#example-curl-requests","title":"Example curl Requests","text":"

Get Device Details:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Update Device Fields:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devName\": \"New Device Name\"}'\n

Delete Device:

curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF/delete\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Copy Device Data:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/device/copy\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"macFrom\":\"AA:BB:CC:DD:EE:FF\",\"macTo\":\"11:22:33:44:55:66\"}'\n

Update Single Column:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF/update-column\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"columnName\":\"devName\",\"columnValue\":\"Updated Device\"}'\n
"},{"location":"API_DEVICES/","title":"Devices Collection API Endpoints","text":"

The Devices Collection API provides operations to retrieve, manage, import/export, and filter devices in bulk. All endpoints require authorization via Bearer token.

"},{"location":"API_DEVICES/#endpoints","title":"Endpoints","text":""},{"location":"API_DEVICES/#1-get-all-devices","title":"1. Get All Devices","text":"
  • GET /devices Retrieves all devices from the database.

Response (success):

{\n  \"success\": true,\n  \"devices\": [\n    {\n      \"devName\": \"Net - Huawei\",\n      \"devMAC\": \"AA:BB:CC:DD:EE:FF\",\n      \"devIP\": \"192.168.1.1\",\n      \"devType\": \"Router\",\n      \"devFavorite\": 0,\n      \"devStatus\": \"online\"\n    },\n    ...\n  ]\n}\n

Error Responses:

  • Unauthorized \u2192 HTTP 403
"},{"location":"API_DEVICES/#2-delete-devices-by-mac","title":"2. Delete Devices by MAC","text":"
  • DELETE /devices Deletes devices by MAC address. Supports exact matches or wildcard *.

Request Body:

{\n  \"macs\": [\"AA:BB:CC:DD:EE:FF\", \"11:22:33:*\"]\n}\n

Behavior:

  • If macs is omitted or null \u2192 deletes all devices.
  • Wildcards * match multiple devices.

Response:

{\n  \"success\": true,\n  \"deleted_count\": 5\n}\n

Error Responses:

  • Unauthorized \u2192 HTTP 403
"},{"location":"API_DEVICES/#3-delete-devices-with-empty-macs","title":"3. Delete Devices with Empty MACs","text":"
  • DELETE /devices/empty-macs Removes all devices where MAC address is null or empty.

Response:

{\n  \"success\": true,\n  \"deleted\": 3\n}\n
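A matching curl call for this endpoint (placeholders as in the other examples):

curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/devices/empty-macs" \
  -H "Authorization: Bearer <API_TOKEN>"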
"},{"location":"API_DEVICES/#4-delete-unknown-devices","title":"4. Delete Unknown Devices","text":"
  • DELETE /devices/unknown Deletes devices with names marked as (unknown) or (name not found).

Response:

{\n  \"success\": true,\n  \"deleted\": 2\n}\n
"},{"location":"API_DEVICES/#5-export-devices","title":"5. Export Devices","text":"
  • GET /devices/export or /devices/export/<format> Exports all devices in CSV (default) or JSON format.

Query Parameter / URL Parameter:

  • format (optional) \u2192 csv (default) or json

CSV Response:

  • Returns as a downloadable CSV file: Content-Disposition: attachment; filename=devices.csv

JSON Response:

{\n  \"data\": [\n    { \"devName\": \"Net - Huawei\", \"devMAC\": \"AA:BB:CC:DD:EE:FF\", ... },\n    ...\n  ],\n  \"columns\": [\"devName\", \"devMAC\", \"devIP\", \"devType\", \"devFavorite\", \"devStatus\"]\n}\n

Error Responses:

  • Unsupported format \u2192 HTTP 400
"},{"location":"API_DEVICES/#6-import-devices-from-csv","title":"6. Import Devices from CSV","text":"
  • POST /devices/import Imports devices from an uploaded CSV or base64-encoded CSV content.

Request Body (multipart file or JSON with content field):

{\n  \"content\": \"<base64-encoded CSV content>\"\n}\n

Response:

{\n  \"success\": true,\n  \"inserted\": 25,\n  \"skipped_lines\": [3, 7]\n}\n

Error Responses:

  • Missing file or content \u2192 HTTP 400 / 404
  • CSV malformed \u2192 HTTP 400
"},{"location":"API_DEVICES/#7-get-device-totals","title":"7. Get Device Totals","text":"
  • GET /devices/totals Returns counts of devices by various categories.

Response:

[ \n  120,    // Total devices\n  85,     // Connected\n  5,      // Favorites\n  10,     // New\n  8,      // Down\n  12      // Archived\n]\n

Order: [all, connected, favorites, new, down, archived]
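A matching curl call returns the six counters in exactly this order (placeholders as in the other examples):

curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/devices/totals" \
  -H "Authorization: Bearer <API_TOKEN>"
# Example output: [120, 85, 5, 10, 8, 12]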

"},{"location":"API_DEVICES/#8-get-devices-by-status","title":"8. Get Devices by Status","text":"
  • GET /devices/by-status?status=<status> Returns devices filtered by status.

Query Parameter:

  • status \u2192 Supported values: online, offline, down, archived, favorites, new, my
  • If omitted, returns all devices.

Response (success):

[\n  { \"id\": \"AA:BB:CC:DD:EE:FF\", \"title\": \"Net - Huawei\", \"favorite\": 0 },\n  { \"id\": \"11:22:33:44:55:66\", \"title\": \"\u2605 USG Firewall\", \"favorite\": 1 }\n]\n

If devFavorite=1, the title is prepended with a star \u2605.

"},{"location":"API_DEVICES/#example-curl-requests","title":"Example curl Requests","text":"

Get All Devices:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/devices\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Delete Devices by MAC:

curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/devices\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"macs\":[\"AA:BB:CC:DD:EE:FF\",\"11:22:33:*\"]}'\n

Export Devices CSV:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/devices/export?format=csv\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Import Devices from CSV:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/devices/import\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -F \"file=@devices.csv\"\n
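The same import can also be performed without a file upload by sending the CSV as a base64-encoded string in the content field; the sketch below assumes a local devices.csv purely for illustration.

# Base64-encode a local CSV and submit it via the JSON "content" field
CSV_B64=$(base64 devices.csv | tr -d '\n')

curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/devices/import" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d "{\"content\": \"$CSV_B64\"}"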

Get Devices by Status:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/devices/by-status?status=online\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_EVENTS/","title":"Events API Endpoints","text":"

The Events API provides access to device event logs, allowing creation, retrieval, deletion, and summary of events over time.

"},{"location":"API_EVENTS/#endpoints","title":"Endpoints","text":""},{"location":"API_EVENTS/#1-create-event","title":"1. Create Event","text":"
  • POST /events/create/<mac> Create an event for a device identified by its MAC address.

Request Body (JSON):

{\n  \"ip\": \"192.168.1.10\",\n  \"event_type\": \"Device Down\",\n  \"additional_info\": \"Optional info about the event\",\n  \"pending_alert\": 1,\n  \"event_time\": \"2025-08-24T12:00:00Z\"\n}\n
  • Parameters:

  • ip (string, optional): IP address of the device

  • event_type (string, optional): Type of event (default \"Device Down\")
  • additional_info (string, optional): Extra information
  • pending_alert (int, optional): 1 if alert email is pending (default 1)
  • event_time (ISO datetime, optional): Event timestamp; defaults to current time

Response (JSON):

{\n  \"success\": true,\n  \"message\": \"Event created for 00:11:22:33:44:55\"\n}\n
"},{"location":"API_EVENTS/#2-get-events","title":"2. Get Events","text":"
  • GET /events Retrieve all events, optionally filtered by MAC address:
/events?mac=<mac>\n

Response:

{\n  \"success\": true,\n  \"events\": [\n    {\n      \"eve_MAC\": \"00:11:22:33:44:55\",\n      \"eve_IP\": \"192.168.1.10\",\n      \"eve_DateTime\": \"2025-08-24T12:00:00Z\",\n      \"eve_EventType\": \"Device Down\",\n      \"eve_AdditionalInfo\": \"\",\n      \"eve_PendingAlertEmail\": 1\n    }\n  ]\n}\n
"},{"location":"API_EVENTS/#3-delete-events","title":"3. Delete Events","text":"
  • DELETE /events/<mac> \u2192 Delete events for a specific MAC
  • DELETE /events \u2192 Delete all events
  • DELETE /events/<days> \u2192 Delete events older than N days

Response:

{\n  \"success\": true,\n  \"message\": \"Deleted events older than <days> days\"\n}\n
"},{"location":"API_EVENTS/#4-event-totals-over-a-period","title":"4. Event Totals Over a Period","text":"
  • GET /sessions/totals?period=<period> Return event and session totals over a given period.

Query Parameters:

Parameter Description period Time period for totals, e.g., \"7 days\", \"1 month\", \"1 year\", \"100 years\"

Sample Response (JSON Array):

[120, 85, 5, 10, 3, 7]\n

Meaning of Values:

  1. Total events in the period
  2. Total sessions
  3. Missing sessions
  4. Voided events (eve_EventType LIKE 'VOIDED%')
  5. New device events (eve_EventType LIKE 'New Device')
  6. Device down events (eve_EventType LIKE 'Device Down')
"},{"location":"API_EVENTS/#notes","title":"Notes","text":"
  • All endpoints require authorization (Bearer token). Unauthorized requests return:
{ \"error\": \"Forbidden\" }\n
  • Events are stored in the Events table with the following fields: eve_MAC, eve_IP, eve_DateTime, eve_EventType, eve_AdditionalInfo, eve_PendingAlertEmail.

  • Event creation automatically logs activity for debugging.

"},{"location":"API_EVENTS/#example-curl-requests","title":"Example curl Requests","text":"

Create Event:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/events/create/00:11:22:33:44:55\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\n    \"ip\": \"192.168.1.10\",\n    \"event_type\": \"Device Down\",\n    \"additional_info\": \"Power outage\",\n    \"pending_alert\": 1\n  }'\n

Get Events for a Device:

curl \"http://<server_ip>:<GRAPHQL_PORT>/events?mac=00:11:22:33:44:55\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Delete Events Older Than 30 Days:

curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/events/30\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Get Event Totals for 7 Days:

curl \"http://<server_ip>:<GRAPHQL_PORT>/sessions/totals?period=7 days\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_GRAPHQL/","title":"GraphQL API Endpoint","text":"

GraphQL queries are read-optimized for speed. Data may be slightly out of date until the file system cache refreshes. The GraphQL endpoints allow you to access the following objects:

  • Devices
  • Settings
  • Language Strings (LangStrings)
"},{"location":"API_GRAPHQL/#endpoints","title":"Endpoints","text":"
  • GET /graphql Returns a simple status message (useful for browser or debugging).

  • POST /graphql Execute GraphQL queries against the devicesSchema.

"},{"location":"API_GRAPHQL/#devices-query","title":"Devices Query","text":""},{"location":"API_GRAPHQL/#sample-query","title":"Sample Query","text":"
query GetDevices($options: PageQueryOptionsInput) {\n  devices(options: $options) {\n    devices {\n      rowid\n      devMac\n      devName\n      devOwner\n      devType\n      devVendor\n      devLastConnection\n      devStatus\n    }\n    count\n  }\n}\n
"},{"location":"API_GRAPHQL/#query-parameters","title":"Query Parameters","text":"Parameter Description page Page number of results to fetch. limit Number of results per page. sort Sorting options (field = field name, order = asc or desc). search Term to filter devices. status Filter devices by status: my_devices, connected, favorites, new, down, archived, offline. filters Additional filters (array of { filterColumn, filterValue })."},{"location":"API_GRAPHQL/#curl-example","title":"curl Example","text":"
curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n
"},{"location":"API_GRAPHQL/#sample-response","title":"Sample Response","text":"
{\n  \"data\": {\n    \"devices\": {\n      \"devices\": [\n        {\n          \"rowid\": 1,\n          \"devMac\": \"00:11:22:33:44:55\",\n          \"devName\": \"Device 1\",\n          \"devOwner\": \"Owner 1\",\n          \"devType\": \"Type 1\",\n          \"devVendor\": \"Vendor 1\",\n          \"devLastConnection\": \"2025-01-01T00:00:00Z\",\n          \"devStatus\": \"connected\"\n        }\n      ],\n      \"count\": 1\n    }\n  }\n}\n
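The filters option listed in the parameter table can be combined with the same query. The sketch below infers the input shape from that table ({ filterColumn, filterValue }) and uses an illustrative vendor filter:

curl 'http://host:GRAPHQL_PORT/graphql' \
  -X POST \
  -H 'Authorization: Bearer API_TOKEN' \
  -H 'Content-Type: application/json' \
  --data '{
    "query": "query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devVendor devStatus } count } }",
    "variables": {
      "options": {
        "page": 1,
        "limit": 10,
        "status": "connected",
        "filters": [{ "filterColumn": "devVendor", "filterValue": "Huawei" }]
      }
    }
  }'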
"},{"location":"API_GRAPHQL/#settings-query","title":"Settings Query","text":"

The settings query provides access to NetAlertX configuration stored in the settings table.

"},{"location":"API_GRAPHQL/#sample-query_1","title":"Sample Query","text":"
query GetSettings {\n  settings {\n    settings {\n      setKey\n      setName\n      setDescription\n      setType\n      setOptions\n      setGroup\n      setValue\n      setEvents\n      setOverriddenByEnv\n    }\n    count\n  }\n}\n
"},{"location":"API_GRAPHQL/#schema-fields","title":"Schema Fields","text":"Field Type Description setKey String Unique key identifier for the setting. setName String Human-readable name. setDescription String Description or documentation of the setting. setType String Data type (string, int, bool, json, etc.). setOptions String Available options (for dropdown/select-type settings). setGroup String Group/category the setting belongs to. setValue String Current value of the setting. setEvents String Events or triggers related to this setting. setOverriddenByEnv Boolean Whether the setting is overridden by an environment variable at runtime."},{"location":"API_GRAPHQL/#curl-example_1","title":"curl Example","text":"
curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetSettings { settings { settings { setKey setName setDescription setType setOptions setGroup setValue setEvents setOverriddenByEnv } count } }\"\n  }'\n
"},{"location":"API_GRAPHQL/#sample-response_1","title":"Sample Response","text":"
{\n  \"data\": {\n    \"settings\": {\n      \"settings\": [\n        {\n          \"setKey\": \"UI_MY_DEVICES\",\n          \"setName\": \"My Devices Filter\",\n          \"setDescription\": \"Defines which statuses to include in the 'My Devices' view.\",\n          \"setType\": \"list\",\n          \"setOptions\": \"[\\\"online\\\",\\\"new\\\",\\\"down\\\",\\\"offline\\\",\\\"archived\\\"]\",\n          \"setGroup\": \"UI\",\n          \"setValue\": \"[\\\"online\\\",\\\"new\\\"]\",\n          \"setEvents\": null,\n          \"setOverriddenByEnv\": false\n        },\n        {\n          \"setKey\": \"NETWORK_DEVICE_TYPES\",\n          \"setName\": \"Network Device Types\",\n          \"setDescription\": \"Types of devices considered as network infrastructure.\",\n          \"setType\": \"list\",\n          \"setOptions\": \"[\\\"Router\\\",\\\"Switch\\\",\\\"AP\\\"]\",\n          \"setGroup\": \"Network\",\n          \"setValue\": \"[\\\"Router\\\",\\\"Switch\\\"]\",\n          \"setEvents\": null,\n          \"setOverriddenByEnv\": true\n        }\n      ],\n      \"count\": 2\n    }\n  }\n}\n
"},{"location":"API_GRAPHQL/#langstrings-query","title":"LangStrings Query","text":"

The LangStrings query provides access to localized strings. It supports filtering by langCode and langStringKey. If the requested string is missing or empty, you can optionally fall back to en_us.

"},{"location":"API_GRAPHQL/#sample-query_2","title":"Sample Query","text":"
query GetLangStrings {\n  langStrings(langCode: \"de_de\", langStringKey: \"settings_other_scanners\") {\n    langStrings {\n      langCode\n      langStringKey\n      langStringText\n    }\n    count\n  }\n}\n
"},{"location":"API_GRAPHQL/#query-parameters_1","title":"Query Parameters","text":"Parameter Type Description langCode String Optional language code (e.g., en_us, de_de). If omitted, all languages are returned. langStringKey String Optional string key to retrieve a specific entry. fallback_to_en Boolean Optional (default true). If true, empty or missing strings fallback to en_us."},{"location":"API_GRAPHQL/#curl-example_2","title":"curl Example","text":"
curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetLangStrings { langStrings(langCode: \\\"de_de\\\", langStringKey: \\\"settings_other_scanners\\\") { langStrings { langCode langStringKey langStringText } count } }\"\n  }'\n
"},{"location":"API_GRAPHQL/#sample-response_2","title":"Sample Response","text":"
{\n  \"data\": {\n    \"langStrings\": {\n      \"count\": 1,\n      \"langStrings\": [\n        {\n          \"langCode\": \"de_de\",\n          \"langStringKey\": \"settings_other_scanners\",\n          \"langStringText\": \"Other, non-device scanner plugins that are currently enabled.\"  // falls back to en_us if empty\n        }\n      ]\n    }\n  }\n}\n
"},{"location":"API_GRAPHQL/#notes","title":"Notes","text":"
  • Device, settings, and LangStrings queries can be combined in one request since GraphQL supports batching (see the sketch after these notes).
  • The fallback_to_en feature ensures the UI always has a value even if a translation is missing.
  • Data is cached in memory per JSON file; changes to language or plugin files will only refresh after the cache detects a file modification.
  • The setOverriddenByEnv flag helps identify setting values that are locked at container runtime.
  • The schema is read-only \u2014 updates must be performed through other APIs or configuration management. See the other API endpoints for details.
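
As a rough illustration of the batching note above, the Python sketch below (requests library assumed; server, port, and token are placeholders) selects devices and settings in a single POST to /graphql:

import requests

URL = "http://<server_ip>:<GRAPHQL_PORT>/graphql"    # placeholders to replace
HEADERS = {
    "Authorization": "Bearer <API_TOKEN>",
    "Content-Type": "application/json",
}

# One document with two root fields, resolved in a single round trip
QUERY = """
query Combined($options: PageQueryOptionsInput) {
  devices(options: $options) {
    devices { devMac devName devStatus }
    count
  }
  settings {
    settings { setKey setValue }
    count
  }
}
"""

variables = {"options": {"page": 1, "limit": 5, "status": "connected"}}

resp = requests.post(URL, headers=HEADERS,
                     json={"query": QUERY, "variables": variables}, timeout=10)
data = resp.json().get("data", {})
print("devices returned:", data.get("devices", {}).get("count"))
print("settings returned:", data.get("settings", {}).get("count"))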
"},{"location":"API_LOGS/","title":"Logs API Endpoints","text":"

Purge application log files stored under /app/log and manage the execution queue. These endpoints are primarily used for maintenance tasks such as clearing accumulated logs or queuing system actions without restarting the container.

Only specific, pre-approved log files can be purged for security and stability reasons.

"},{"location":"API_LOGS/#delete-purge-a-log-file","title":"Delete (Purge) a Log File","text":"
  • DELETE /logs?file=<log_file> \u2192 Purge the contents of an allowed log file.

Query Parameter:

  • file \u2192 The name of the log file to purge (e.g., app.log, stdout.log)

Allowed Files:

app.log\napp_front.log\nIP_changes.log\nstdout.log\nstderr.log\napp.php_errors.log\nexecution_queue.log\ndb_is_locked.log\n

Authorization: Requires a valid API token in the Authorization header.

"},{"location":"API_LOGS/#curl-example-success","title":"curl Example (Success)","text":"
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=app.log' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"success\": true,\n  \"message\": \"[clean_log] File app.log purged successfully\"\n}\n
"},{"location":"API_LOGS/#curl-example-not-allowed","title":"curl Example (Not Allowed)","text":"
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=not_allowed.log' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"success\": false,\n  \"message\": \"[clean_log] File not_allowed.log is not allowed to be purged\"\n}\n
"},{"location":"API_LOGS/#curl-example-unauthorized","title":"curl Example (Unauthorized)","text":"
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=app.log' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"error\": \"Forbidden\"\n}\n
"},{"location":"API_LOGS/#add-an-action-to-the-execution-queue","title":"Add an Action to the Execution Queue","text":"
  • POST /logs/add-to-execution-queue \u2192 Add a system action to the execution queue.

Request Body (JSON):

{\n  \"action\": \"update_api|devices\"\n}\n

Authorization: Requires a valid API token in the Authorization header.

"},{"location":"API_LOGS/#curl-example-success_1","title":"curl Example (Success)","text":"

The request below updates the API cache for Devices:

curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\"action\": \"update_api|devices\"}'\n

Response:

{\n  \"success\": true,\n  \"message\": \"[UserEventsQueueInstance] Action \\\"update_api|devices\\\" added to the execution queue.\"\n}\n
"},{"location":"API_LOGS/#curl-example-missing-parameter","title":"curl Example (Missing Parameter)","text":"
curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Content-Type: application/json' \\\n  --data '{}'\n

Response:

{\n  \"success\": false,\n  \"message\": \"Missing parameters\",\n  \"error\": \"Missing required 'action' field in JSON body\"\n}\n
"},{"location":"API_LOGS/#curl-example-unauthorized_1","title":"curl Example (Unauthorized)","text":"
curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\"action\": \"update_api|devices\"}'\n

Response:

{\n  \"error\": \"Forbidden\"\n}\n
"},{"location":"API_LOGS/#notes","title":"Notes","text":"
  • Only predefined files in /app/log can be purged \u2014 arbitrary paths are not permitted.
  • When a log file is purged:
    • Its content is replaced with a short marker text: \"File manually purged\".
    • A backend log entry is created via mylog().
    • A frontend notification is generated via write_notification().
  • Execution queue actions are appended to execution_queue.log and can be processed asynchronously by background tasks or workflows.
  • Unauthorized or invalid attempts are safely logged and rejected.
  • For advanced log retrieval, analysis, or structured querying, use the frontend log viewer.
  • Always ensure that sensitive or production logs are handled carefully \u2014 purging cannot be undone.
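
For routine maintenance, the two endpoints above can be combined in a small script. A sketch in Python (requests library assumed; placeholders for server, port, and token; only file names from the allowed list are used):

import requests

BASE_URL = "http://<server_ip>:<GRAPHQL_PORT>"      # placeholders to replace
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}

# Purge a couple of the pre-approved log files
for log_file in ("app.log", "app_front.log"):
    resp = requests.delete(f"{BASE_URL}/logs",
                           params={"file": log_file},
                           headers=HEADERS, timeout=10)
    print(log_file, resp.json().get("message"))

# Then ask the backend to refresh the Devices API cache
resp = requests.post(
    f"{BASE_URL}/logs/add-to-execution-queue",
    headers={**HEADERS, "Content-Type": "application/json"},
    json={"action": "update_api|devices"},
    timeout=10,
)
print(resp.json())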
"},{"location":"API_MESSAGING_IN_APP/","title":"In-app Notifications API","text":"

Manage in-app notifications for users. Notifications can be written, retrieved, marked as read, or deleted.

"},{"location":"API_MESSAGING_IN_APP/#write-notification","title":"Write Notification","text":"
  • POST /messaging/in-app/write \u2192 Create a new in-app notification.

Request Body:

json { \"content\": \"This is a test notification\", \"level\": \"alert\" // optional, [\"interrupt\",\"info\",\"alert\"] default: \"alert\" }

Response:

json { \"success\": true }

"},{"location":"API_MESSAGING_IN_APP/#curl-example","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/write\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"content\": \"This is a test notification\",\n    \"level\": \"alert\"\n  }'\n
"},{"location":"API_MESSAGING_IN_APP/#get-unread-notifications","title":"Get Unread Notifications","text":"
  • GET /messaging/in-app/unread \u2192 Retrieve all unread notifications.

Response:

json [ { \"timestamp\": \"2025-10-10T12:34:56\", \"guid\": \"f47ac10b-58cc-4372-a567-0e02b2c3d479\", \"read\": 0, \"level\": \"alert\", \"content\": \"This is a test notification\" } ]

"},{"location":"API_MESSAGING_IN_APP/#curl-example_1","title":"curl Example","text":"
curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/unread\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_MESSAGING_IN_APP/#mark-all-notifications-as-read","title":"Mark All Notifications as Read","text":"
  • POST /messaging/in-app/read/all \u2192 Mark all notifications as read.

Response:

json { \"success\": true }

"},{"location":"API_MESSAGING_IN_APP/#curl-example_2","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/read/all\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_MESSAGING_IN_APP/#mark-single-notification-as-read","title":"Mark Single Notification as Read","text":"
  • POST /messaging/in-app/read/<guid> \u2192 Mark a single notification as read using its GUID.

Response (success):

json { \"success\": true }

Response (failure):

json { \"success\": false, \"error\": \"Notification not found\" }

"},{"location":"API_MESSAGING_IN_APP/#curl-example_3","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/read/f47ac10b-58cc-4372-a567-0e02b2c3d479\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_MESSAGING_IN_APP/#delete-all-notifications","title":"Delete All Notifications","text":"
  • DELETE /messaging/in-app/delete \u2192 Remove all notifications from the system.

Response:

json { \"success\": true }

"},{"location":"API_MESSAGING_IN_APP/#curl-example_4","title":"curl Example","text":"
curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/delete\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_MESSAGING_IN_APP/#delete-single-notification","title":"Delete Single Notification","text":"
  • DELETE /messaging/in-app/delete/<guid> \u2192 Remove a single notification by its GUID.

Response (success):

json { \"success\": true }

Response (failure):

json { \"success\": false, \"error\": \"Notification not found\" }

"},{"location":"API_MESSAGING_IN_APP/#curl-example_5","title":"curl Example","text":"
curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/delete/f47ac10b-58cc-4372-a567-0e02b2c3d479\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_METRICS/","title":"Metrics API Endpoint","text":"

The /metrics endpoint exposes Prometheus-compatible metrics for NetAlertX, including aggregate device counts and per-device status.

"},{"location":"API_METRICS/#endpoint-details","title":"Endpoint Details","text":"
  • GET /metrics \u2192 Returns metrics in plain text.
  • Host: NetAlertX server
  • Port: As configured in GRAPHQL_PORT (default: 20212)
"},{"location":"API_METRICS/#example-output","title":"Example Output","text":"
netalertx_connected_devices 31\nnetalertx_offline_devices 54\nnetalertx_down_devices 0\nnetalertx_new_devices 0\nnetalertx_archived_devices 31\nnetalertx_favorite_devices 2\nnetalertx_my_devices 54\n\nnetalertx_device_status{device=\"Net - Huawei\", mac=\"Internet\", ip=\"1111.111.111.111\", vendor=\"None\", first_connection=\"2021-01-01 00:00:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Router\", device_status=\"Online\"} 1\nnetalertx_device_status{device=\"Net - USG\", mac=\"74:ac:74:ac:74:ac\", ip=\"192.168.1.1\", vendor=\"Ubiquiti Networks Inc.\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-06-07 08:16:49\", dev_type=\"Firewall\", device_status=\"Archived\"} 1\nnetalertx_device_status{device=\"Raspberry Pi 4 LAN\", mac=\"74:ac:74:ac:74:74\", ip=\"192.168.1.9\", vendor=\"Raspberry Pi Trading Ltd\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Singleboard Computer (SBC)\", device_status=\"Online\"} 1\n...\n
"},{"location":"API_METRICS/#metrics-overview","title":"Metrics Overview","text":""},{"location":"API_METRICS/#1-aggregate-device-counts","title":"1. Aggregate Device Counts","text":"Metric Description netalertx_connected_devices Devices currently connected netalertx_offline_devices Devices currently offline netalertx_down_devices Down/unreachable devices netalertx_new_devices Recently detected devices netalertx_archived_devices Archived devices netalertx_favorite_devices User-marked favorites netalertx_my_devices Devices associated with the current user"},{"location":"API_METRICS/#2-per-device-status","title":"2. Per-Device Status","text":"

Metric: netalertx_device_status. Each device has the following labels:

  • device: friendly name
  • mac: MAC address (or placeholder)
  • ip: last recorded IP
  • vendor: manufacturer or \"None\"
  • first_connection: timestamp of first detection
  • last_connection: most recent contact
  • dev_type: device type/category
  • device_status: current status (Online, Offline, Archived, Down, \u2026)

Metric value is always 1 (presence indicator).

"},{"location":"API_METRICS/#querying-with-curl","title":"Querying with curl","text":"
curl 'http://<server_ip>:<GRAPHQL_PORT>/metrics' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: text/plain'\n

Replace placeholders:

  • <server_ip> \u2013 NetAlertX host IP/hostname
  • <GRAPHQL_PORT> \u2013 configured port (default 20212)
  • <API_TOKEN> \u2013 your API token
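
Outside of Prometheus, the plain-text exposition output can also be consumed directly. A hedged Python sketch (requests library assumed; placeholders as above) that extracts just the aggregate counters:

import requests

URL = "http://<server_ip>:<GRAPHQL_PORT>/metrics"   # placeholders to replace
HEADERS = {"Authorization": "Bearer <API_TOKEN>", "Accept": "text/plain"}

text = requests.get(URL, headers=HEADERS, timeout=10).text

# Keep only the simple "name value" aggregate lines; skip labelled per-device entries
for line in text.splitlines():
    if line.startswith("netalertx_") and "{" not in line:
        name, value = line.rsplit(" ", 1)
        print(f"{name}: {value}")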
"},{"location":"API_METRICS/#prometheus-scraping-configuration","title":"Prometheus Scraping Configuration","text":"
scrape_configs:\n  - job_name: 'netalertx'\n    metrics_path: /metrics\n    scheme: http\n    scrape_interval: 60s\n    static_configs:\n      - targets: ['<server_ip>:<GRAPHQL_PORT>']\n    authorization:\n      type: Bearer\n      credentials: <API_TOKEN>\n
"},{"location":"API_METRICS/#grafana-dashboard-template","title":"Grafana Dashboard Template","text":"

Sample template JSON: Download

"},{"location":"API_NETTOOLS/","title":"Net Tools API Endpoints","text":"

The Net Tools API provides network diagnostic utilities, including Wake-on-LAN, traceroute, speed testing, DNS resolution, nmap scanning, and internet connection information.

All endpoints require authorization via Bearer token.

"},{"location":"API_NETTOOLS/#endpoints","title":"Endpoints","text":""},{"location":"API_NETTOOLS/#1-wake-on-lan","title":"1. Wake-on-LAN","text":"
  • POST /nettools/wakeonlan Sends a Wake-on-LAN packet to wake a device.

Request Body (JSON):

{\n  \"devMac\": \"AA:BB:CC:DD:EE:FF\"\n}\n

Response (success):

{\n  \"success\": true,\n  \"message\": \"WOL packet sent\",\n  \"output\": \"Sent magic packet to AA:BB:CC:DD:EE:FF\"\n}\n

Error Responses:

  • Invalid MAC address \u2192 HTTP 400
  • Command failure \u2192 HTTP 500
"},{"location":"API_NETTOOLS/#2-traceroute","title":"2. Traceroute","text":"
  • POST /nettools/traceroute Performs a traceroute to a specified IP address.

Request Body:

{\n  \"devLastIP\": \"192.168.1.1\"\n}\n

Response (success):

{\n  \"success\": true,\n  \"output\": \"traceroute output as string\"\n}\n

Error Responses:

  • Invalid IP \u2192 HTTP 400
  • Traceroute command failure \u2192 HTTP 500
"},{"location":"API_NETTOOLS/#3-speedtest","title":"3. Speedtest","text":"
  • GET /nettools/speedtest Runs an internet speed test using speedtest-cli.

Response (success):

{\n  \"success\": true,\n  \"output\": [\n    \"Ping: 15 ms\",\n    \"Download: 120.5 Mbit/s\",\n    \"Upload: 22.4 Mbit/s\"\n  ]\n}\n

Error Responses:

  • Command failure \u2192 HTTP 500
"},{"location":"API_NETTOOLS/#4-dns-lookup-nslookup","title":"4. DNS Lookup (nslookup)","text":"
  • POST /nettools/nslookup Resolves an IP address or hostname using nslookup.

Request Body:

{\n  \"devLastIP\": \"8.8.8.8\"\n}\n

Response (success):

{\n  \"success\": true,\n  \"output\": [\n    \"Server: 8.8.8.8\",\n    \"Address: 8.8.8.8#53\",\n    \"Name: google-public-dns-a.google.com\"\n  ]\n}\n

Error Responses:

  • Missing or invalid devLastIP \u2192 HTTP 400
  • Command failure \u2192 HTTP 500
"},{"location":"API_NETTOOLS/#5-nmap-scan","title":"5. Nmap Scan","text":"
  • POST /nettools/nmap Runs an nmap scan on a target IP address or range.

Request Body:

{\n  \"scan\": \"192.168.1.0/24\",\n  \"mode\": \"fast\"\n}\n

Supported Modes:

  • fast \u2192 -F
  • normal \u2192 default
  • detail \u2192 -A
  • skipdiscovery \u2192 -Pn

Response (success):

{\n  \"success\": true,\n  \"mode\": \"fast\",\n  \"ip\": \"192.168.1.0/24\",\n  \"output\": [\n    \"Starting Nmap 7.91\",\n    \"Host 192.168.1.1 is up\",\n    \"... scan results ...\"\n  ]\n}\n

Error Responses:

  • Invalid IP \u2192 HTTP 400
  • Invalid mode \u2192 HTTP 400
  • Command failure \u2192 HTTP 500
"},{"location":"API_NETTOOLS/#6-internet-connection-info","title":"6. Internet Connection Info","text":"
  • GET /nettools/internetinfo Fetches public internet connection information using ipinfo.io.

Response (success):

{\n  \"success\": true,\n  \"output\": \"IP: 203.0.113.5 City: Sydney Country: AU Org: Example ISP\"\n}\n

Error Responses:

  • Failed request or empty response \u2192 HTTP 500
"},{"location":"API_NETTOOLS/#example-curl-requests","title":"Example curl Requests","text":"

Wake-on-LAN:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/wakeonlan\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devMac\":\"AA:BB:CC:DD:EE:FF\"}'\n

Traceroute:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/traceroute\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devLastIP\":\"192.168.1.1\"}'\n

Speedtest:

curl \"http://<server_ip>:<GRAPHQL_PORT>/nettools/speedtest\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Nslookup:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/nslookup\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devLastIP\":\"8.8.8.8\"}'\n

Nmap Scan:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/nmap\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"scan\":\"192.168.1.0/24\",\"mode\":\"fast\"}'\n

Internet Info:

curl \"http://<server_ip>:<GRAPHQL_PORT>/nettools/internetinfo\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_OLD/","title":"[Deprecated] API endpoints","text":"

Warning

Some of these endpoints will be deprecated soon. Please refer to the new API endpoints docs for details on the new API layer.

NetAlertX comes with a couple of API endpoints. All requests need to be authorized (executed in a logged-in browser session), or you have to pass the value of the API_TOKEN setting as an Authorization bearer token, for example:

curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n
"},{"location":"API_OLD/#api-endpoint-graphql","title":"API Endpoint: GraphQL","text":"
  • Endpoint URL: php/server/query_graphql.php
  • Host: same as front end (web ui)
  • Port: 20212 or as defined by the GRAPHQL_PORT setting
"},{"location":"API_OLD/#example-query-to-fetch-devices","title":"Example Query to Fetch Devices","text":"

First, let's define the GraphQL query to fetch devices with pagination and sorting options.

query GetDevices($options: PageQueryOptionsInput) {\n  devices(options: $options) {\n    devices {\n      rowid\n      devMac\n      devName\n      devOwner\n      devType\n      devVendor\n      devLastConnection\n      devStatus\n    }\n    count\n  }\n}\n

See also: Debugging GraphQL issues

"},{"location":"API_OLD/#curl-command","title":"curl Command","text":"

You can use the following curl command to execute the query.

curl 'http://host:GRAPHQL_PORT/graphql'   -X POST   -H 'Authorization: Bearer API_TOKEN'  -H 'Content-Type: application/json'   --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n
"},{"location":"API_OLD/#explanation","title":"Explanation:","text":"
  1. GraphQL Query:
    • The query parameter contains the GraphQL query as a string.
    • The variables parameter contains the input variables for the query.

  2. Query Variables:
    • page: Specifies the page number of results to fetch.
    • limit: Specifies the number of results per page.
    • sort: Specifies the sorting options, with field being the field to sort by and order being the sort order (asc for ascending or desc for descending).
    • search: A search term to filter the devices.
    • status: The status filter to apply (valid values are my_devices (determined by the UI_MY_DEVICES setting), connected, favorites, new, down, archived, offline).

  3. curl Command:
    • The -X POST option specifies that we are making a POST request.
    • The -H \"Content-Type: application/json\" option sets the content type of the request to JSON.
    • The -d option provides the request payload, which includes the GraphQL query and variables.
"},{"location":"API_OLD/#sample-response","title":"Sample Response","text":"

The response will be in JSON format, similar to the following:

{\n  \"data\": {\n    \"devices\": {\n      \"devices\": [\n        {\n          \"rowid\": 1,\n          \"devMac\": \"00:11:22:33:44:55\",\n          \"devName\": \"Device 1\",\n          \"devOwner\": \"Owner 1\",\n          \"devType\": \"Type 1\",\n          \"devVendor\": \"Vendor 1\",\n          \"devLastConnection\": \"2025-01-01T00:00:00Z\",\n          \"devStatus\": \"connected\"\n        },\n        {\n          \"rowid\": 2,\n          \"devMac\": \"66:77:88:99:AA:BB\",\n          \"devName\": \"Device 2\",\n          \"devOwner\": \"Owner 2\",\n          \"devType\": \"Type 2\",\n          \"devVendor\": \"Vendor 2\",\n          \"devLastConnection\": \"2025-01-02T00:00:00Z\",\n          \"devStatus\": \"connected\"\n        }\n      ],\n      \"count\": 2\n    }\n  }\n}\n
"},{"location":"API_OLD/#api-endpoint-json-files","title":"API Endpoint: JSON files","text":"

This API endpoint retrieves static files that are periodically updated.

  • Endpoint URL: php/server/query_json.php?file=<file name>
  • Host: same as front end (web ui)
  • Port: 20211 or as defined by the $PORT docker environment variable (same as the port for the web ui)
"},{"location":"API_OLD/#when-are-the-endpoints-updated","title":"When are the endpoints updated","text":"

The endpoint files are regenerated whenever the objects they expose change.

"},{"location":"API_OLD/#location-of-the-endpoints","title":"Location of the endpoints","text":"

In the container, these files are located under the API directory (default: /tmp/api/, configurable via NETALERTX_API environment variable). You can access them via the /php/server/query_json.php?file=user_notifications.json endpoint.

"},{"location":"API_OLD/#available-endpoints","title":"Available endpoints","text":"

You can access the following files:

  • notification_json_final.json \u2192 The json version of the last notification (e.g. used for webhooks - sample JSON).
  • table_devices.json \u2192 All of the available Devices detected by the app.
  • table_plugins_events.json \u2192 The list of the unprocessed (pending) notification events (plugins_events DB table).
  • table_plugins_history.json \u2192 The list of notification events history.
  • table_plugins_objects.json \u2192 The content of the plugins_objects table. Find more info on the Plugin system here.
  • language_strings.json \u2192 The content of the language_strings table, which in turn is loaded from the plugins config.json definitions.
  • table_custom_endpoint.json \u2192 A custom endpoint generated by the SQL query specified by the API_CUSTOM_SQL setting.
  • table_settings.json \u2192 The content of the settings table.
  • app_state.json \u2192 Contains the current application state.
"},{"location":"API_OLD/#json-data-format","title":"JSON Data format","text":"

The endpoints starting with the table_ prefix contain most, if not all, data contained in the corresponding database table. The common format for those is:

{\n  \"data\": [\n        {\n          \"db_column_name\": \"data\",\n          \"db_column_name2\": \"data2\"      \n        }, \n        {\n          \"db_column_name\": \"data3\",\n          \"db_column_name2\": \"data4\" \n        }\n    ]\n}\n\n

Example JSON of the table_devices.json endpoint with two Devices (database rows):

{\n  \"data\": [\n        {\n          \"devMac\": \"Internet\",\n          \"devName\": \"Net - Huawei\",\n          \"devType\": \"Router\",\n          \"devVendor\": null,\n          \"devGroup\": \"Always on\",\n          \"devFirstConnection\": \"2021-01-01 00:00:00\",\n          \"devLastConnection\": \"2021-01-28 22:22:11\",\n          \"devLastIP\": \"192.168.1.24\",\n          \"devStaticIP\": 0,\n          \"devPresentLastScan\": 1,\n          \"devLastNotification\": \"2023-01-28 22:22:28.998715\",\n          \"devIsNew\": 0,\n          \"devParentMAC\": \"\",\n          \"devParentPort\": \"\",\n          \"devIcon\": \"globe\"\n        }, \n        {\n          \"devMac\": \"a4:8f:ff:aa:ba:1f\",\n          \"devName\": \"Net - USG\",\n          \"devType\": \"Firewall\",\n          \"devVendor\": \"Ubiquiti Inc\",\n          \"devGroup\": \"\",\n          \"devFirstConnection\": \"2021-02-12 22:05:00\",\n          \"devLastConnection\": \"2021-07-17 15:40:00\",\n          \"devLastIP\": \"192.168.1.1\",\n          \"devStaticIP\": 1,\n          \"devPresentLastScan\": 1,\n          \"devLastNotification\": \"2021-07-17 15:40:10.667717\",\n          \"devIsNew\": 0,\n          \"devParentMAC\": \"Internet\",\n          \"devParentPort\": 1,\n          \"devIcon\": \"shield-halved\"\n      }\n    ]\n}\n\n
"},{"location":"API_OLD/#api-endpoint-prometheus-exporter","title":"API Endpoint: Prometheus Exporter","text":"
  • Endpoint URL: /metrics
  • Host: (where NetAlertX exporter is running)
  • Port: as configured in the GRAPHQL_PORT setting (20212 by default)
"},{"location":"API_OLD/#example-output-of-the-metrics-endpoint","title":"Example Output of the /metrics Endpoint","text":"

Below is a representative snippet of the metrics you may find when querying the /metrics endpoint for netalertx. It includes both aggregate counters and device_status labels per device.

netalertx_connected_devices 31\nnetalertx_offline_devices 54\nnetalertx_down_devices 0\nnetalertx_new_devices 0\nnetalertx_archived_devices 31\nnetalertx_favorite_devices 2\nnetalertx_my_devices 54\n\nnetalertx_device_status{device=\"Net - Huawei\", mac=\"Internet\", ip=\"1111.111.111.111\", vendor=\"None\", first_connection=\"2021-01-01 00:00:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Router\", device_status=\"Online\"} 1\nnetalertx_device_status{device=\"Net - USG\", mac=\"74:ac:74:ac:74:ac\", ip=\"192.168.1.1\", vendor=\"Ubiquiti Networks Inc.\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-06-07 08:16:49\", dev_type=\"Firewall\", device_status=\"Archived\"} 1\nnetalertx_device_status{device=\"Raspberry Pi 4 LAN\", mac=\"74:ac:74:ac:74:74\", ip=\"192.168.1.9\", vendor=\"Raspberry Pi Trading Ltd\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Singleboard Computer (SBC)\", device_status=\"Online\"} 1\n...\n
"},{"location":"API_OLD/#metrics-explanation","title":"Metrics Explanation","text":""},{"location":"API_OLD/#1-aggregate-device-counts","title":"1. Aggregate Device Counts","text":"

Metric names prefixed with netalertx_ provide aggregated counts by device status:

  • netalertx_connected_devices: number of devices currently connected
  • netalertx_offline_devices: devices currently offline
  • netalertx_down_devices: down/unreachable devices
  • netalertx_new_devices: devices recently detected
  • netalertx_archived_devices: archived devices
  • netalertx_favorite_devices: user-marked favorite devices
  • netalertx_my_devices: devices associated with the current user context

These numeric values give a high-level overview of device distribution.

"},{"location":"API_OLD/#2-perdevice-status-with-labels","title":"2. Per\u2011Device Status with Labels","text":"

Each individual device is represented by a netalertx_device_status metric, with descriptive labels:

  • device: friendly name of the device
  • mac: MAC address (or placeholder)
  • ip: last recorded IP address
  • vendor: manufacturer or \"None\" if unknown
  • first_connection: timestamp when the device was first observed
  • last_connection: most recent contact timestamp
  • dev_type: device category or type
  • device_status: current status (Online / Offline / Archived / Down / ...)

The metric value is always 1 (indicating presence or active state) and the combination of labels identifies the device.

"},{"location":"API_OLD/#how-to-query-with-curl","title":"How to Query with curl","text":"

To fetch the metrics from the NetAlertX exporter:

curl 'http://<server_ip>:<GRAPHQL_PORT>/metrics' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: text/plain'\n

Replace:

  • <server_ip>: IP or hostname of the NetAlertX server
  • <GRAPHQL_PORT>: port specified in your GRAPHQL_PORT setting (default: 20212)
  • <API_TOKEN> your Bearer token from the API_TOKEN setting
"},{"location":"API_OLD/#summary","title":"Summary","text":"
  • Endpoint: /metrics provides both summary counters and per-device status entries.
  • Aggregate metrics help monitor overall device states.
  • Detailed metrics expose each device\u2019s metadata via labels.
  • Use case: feed into Prometheus for scraping, monitoring, alerting, or charting dashboard views.
"},{"location":"API_OLD/#prometheus-scraping-configuration","title":"Prometheus Scraping Configuration","text":"
scrape_configs:\n  - job_name: 'netalertx'\n    metrics_path: /metrics\n    scheme: http\n    scrape_interval: 60s\n    static_configs:\n      - targets: ['<server_ip>:<GRAPHQL_PORT>']\n    authorization:\n      type: Bearer\n      credentials: <API_TOKEN>\n
"},{"location":"API_OLD/#grafana-template","title":"Grafana template","text":"

Grafana template sample: Download json

"},{"location":"API_OLD/#api-endpoint-log-files","title":"API Endpoint: /log files","text":"

This API endpoint retrieves files from the /tmp/log folder.

  • Endpoint URL: php/server/query_logs.php?file=<file name>
  • Host: same as front end (web ui)
  • Port: 20211 or as defined by the $PORT docker environment variable (same as the port for the web ui)
  • IP_changes.log \u2192 Logs of IP address changes
  • app.log \u2192 Main application log
  • app.php_errors.log \u2192 PHP error log
  • app_front.log \u2192 Frontend application log
  • app_nmap.log \u2192 Logs of Nmap scan results
  • db_is_locked.log \u2192 Logs when the database is locked
  • execution_queue.log \u2192 Logs of execution queue activities
  • plugins/ \u2192 Directory for temporary plugin-related files (not accessible)
  • report_output.html \u2192 HTML report output
  • report_output.json \u2192 JSON format report output
  • report_output.txt \u2192 Text format report output
  • stderr.log \u2192 Logs of standard error output
  • stdout.log \u2192 Logs of standard output
"},{"location":"API_OLD/#api-endpoint-config-files","title":"API Endpoint: /config files","text":"

To retrieve files from the /data/config folder.

  • Endpoint URL: php/server/query_config.php?file=<file name>
  • Host: same as front end (web ui)
  • Port: 20211 or as defined by the $PORT docker environment variable (same as the port for the web ui)
  • devices.csv \u2192 Devices csv file
  • app.conf \u2192 Application config file
"},{"location":"API_ONLINEHISTORY/","title":"Online History API Endpoints","text":"

Manage the online history records of devices. Currently, the API supports deletion of all history entries. All endpoints require authorization.

"},{"location":"API_ONLINEHISTORY/#1-delete-online-history","title":"1. Delete Online History","text":"
  • DELETE /history Remove all records from the online history table (Online_History). This operation cannot be undone.

Response (success):

{\n  \"success\": true,\n  \"message\": \"Deleted online history\"\n}\n

Error Responses:

  • Unauthorized \u2192 HTTP 403
"},{"location":"API_ONLINEHISTORY/#example-curl-request","title":"Example curl Request","text":"
curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/history\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_SESSIONS/","title":"Sessions API Endpoints","text":"

Track and manage device connection sessions. Sessions record when a device connects or disconnects on the network.

"},{"location":"API_SESSIONS/#create-a-session","title":"Create a Session","text":"
  • POST /sessions/create \u2192 Create a new session for a device

Request Body:

json { \"mac\": \"AA:BB:CC:DD:EE:FF\", \"ip\": \"192.168.1.10\", \"start_time\": \"2025-08-01T10:00:00\", \"end_time\": \"2025-08-01T12:00:00\", // optional \"event_type_conn\": \"Connected\", // optional, default \"Connected\" \"event_type_disc\": \"Disconnected\" // optional, default \"Disconnected\" }

Response:

json { \"success\": true, \"message\": \"Session created for MAC AA:BB:CC:DD:EE:FF\" }

"},{"location":"API_SESSIONS/#curl-example","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/sessions/create\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"mac\": \"AA:BB:CC:DD:EE:FF\",\n    \"ip\": \"192.168.1.10\",\n    \"start_time\": \"2025-08-01T10:00:00\",\n    \"end_time\": \"2025-08-01T12:00:00\",\n    \"event_type_conn\": \"Connected\",\n    \"event_type_disc\": \"Disconnected\"\n  }'\n\n
"},{"location":"API_SESSIONS/#delete-sessions","title":"Delete Sessions","text":"
  • DELETE /sessions/delete \u2192 Delete all sessions for a given MAC

Request Body:

json { \"mac\": \"AA:BB:CC:DD:EE:FF\" }

Response:

json { \"success\": true, \"message\": \"Deleted sessions for MAC AA:BB:CC:DD:EE:FF\" }

"},{"location":"API_SESSIONS/#curl-example_1","title":"curl Example","text":"
curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/sessions/delete\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"mac\": \"AA:BB:CC:DD:EE:FF\"\n  }'\n
"},{"location":"API_SESSIONS/#list-sessions","title":"List Sessions","text":"
  • GET /sessions/list \u2192 Retrieve sessions optionally filtered by device and date range

Query Parameters:

  • mac (optional) \u2192 Filter by device MAC address
  • start_date (optional) \u2192 Filter sessions starting from this date (YYYY-MM-DD)
  • end_date (optional) \u2192 Filter sessions ending by this date (YYYY-MM-DD)

Example:

/sessions/list?mac=AA:BB:CC:DD:EE:FF&start_date=2025-08-01&end_date=2025-08-21

Response:

json { \"success\": true, \"sessions\": [ { \"ses_MAC\": \"AA:BB:CC:DD:EE:FF\", \"ses_Connection\": \"2025-08-01 10:00\", \"ses_Disconnection\": \"2025-08-01 12:00\", \"ses_Duration\": \"2h 0m\", \"ses_IP\": \"192.168.1.10\", \"ses_Info\": \"\" } ] }

"},{"location":"API_SESSIONS/#curl-example_2","title":"curl Example","text":"
curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/sessions/list?mac=AA:BB:CC:DD:EE:FF&start_date=2025-08-01&end_date=2025-08-21\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_SESSIONS/#calendar-view-of-sessions","title":"Calendar View of Sessions","text":"
  • GET /sessions/calendar \u2192 View sessions in calendar format

Query Parameters:

  • start \u2192 Start date (YYYY-MM-DD)
  • end \u2192 End date (YYYY-MM-DD)

Example:

/sessions/calendar?start=2025-08-01&end=2025-08-21

Response:

json { \"success\": true, \"sessions\": [ { \"resourceId\": \"AA:BB:CC:DD:EE:FF\", \"title\": \"\", \"start\": \"2025-08-01T10:00:00\", \"end\": \"2025-08-01T12:00:00\", \"color\": \"#00a659\", \"tooltip\": \"Connection: 2025-08-01 10:00\\nDisconnection: 2025-08-01 12:00\\nIP: 192.168.1.10\", \"className\": \"no-border\" } ] }

"},{"location":"API_SESSIONS/#curl-example_3","title":"curl Example","text":"
curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/sessions/calendar?start=2025-08-01&end=2025-08-21\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_SESSIONS/#device-sessions","title":"Device Sessions","text":"
  • GET /sessions/<mac> \u2192 Retrieve sessions for a specific device

Query Parameters:

  • period \u2192 Period to retrieve sessions (1 day, 7 days, 1 month, etc.) Default: 1 day

Example:

/sessions/AA:BB:CC:DD:EE:FF?period=7 days

Response:

json { \"success\": true, \"sessions\": [ { \"ses_MAC\": \"AA:BB:CC:DD:EE:FF\", \"ses_Connection\": \"2025-08-01 10:00\", \"ses_Disconnection\": \"2025-08-01 12:00\", \"ses_Duration\": \"2h 0m\", \"ses_IP\": \"192.168.1.10\", \"ses_Info\": \"\" } ] }

"},{"location":"API_SESSIONS/#curl-example_4","title":"curl Example","text":"
curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/sessions/AA:BB:CC:DD:EE:FF?period=7%20days\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_SESSIONS/#session-events-summary","title":"Session Events Summary","text":"
  • GET /sessions/session-events \u2192 Retrieve a summary of session events

Query Parameters:

  • type \u2192 Event type (all, sessions, missing, voided, new, down) Default: all
  • period \u2192 Period to retrieve events (7 days, 1 month, etc.)

Example:

/sessions/session-events?type=all&period=7 days

Response: Returns a list of events or sessions with formatted connection, disconnection, duration, and IP information.

"},{"location":"API_SESSIONS/#curl-example_5","title":"curl Example","text":"
curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/sessions/session-events?type=all&period=7%20days\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_SETTINGS/","title":"Settings API Endpoints","text":"

Retrieve application settings stored in the configuration system. This endpoint is useful for quickly fetching individual settings such as API_TOKEN or TIMEZONE.

For bulk or structured access (all settings, schema details, or filtering), use the GraphQL API Endpoint.

"},{"location":"API_SETTINGS/#get-a-setting","title":"Get a Setting","text":"
  • GET /settings/<key> \u2192 Retrieve the value of a specific setting

Path Parameter:

  • key \u2192 The setting key to retrieve (e.g., API_TOKEN, TIMEZONE)

Authorization: Requires a valid API token in the Authorization header.

"},{"location":"API_SETTINGS/#curl-example-success","title":"curl Example (Success)","text":"
curl 'http://<server_ip>:<GRAPHQL_PORT>/settings/API_TOKEN' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"success\": true,\n  \"value\": \"my-secret-token\"\n}\n
"},{"location":"API_SETTINGS/#curl-example-invalid-key","title":"curl Example (Invalid Key)","text":"
curl 'http://<server_ip>:<GRAPHQL_PORT>/settings/DOES_NOT_EXIST' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"success\": true,\n  \"value\": null\n}\n
"},{"location":"API_SETTINGS/#curl-example-unauthorized","title":"curl Example (Unauthorized)","text":"
curl 'http://<server_ip>:<GRAPHQL_PORT>/settings/API_TOKEN' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"error\": \"Forbidden\"\n}\n
"},{"location":"API_SETTINGS/#notes","title":"Notes","text":"
  • This endpoint is optimized for direct retrieval of a single setting.
  • For complex retrieval scenarios (listing all settings, retrieving schema metadata like setName, setDescription, setType, or checking if a setting is overridden by environment variables), use the GraphQL Settings Query:
curl 'http://<server_ip>:<GRAPHQL_PORT>/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetSettings { settings { settings { setKey setName setDescription setType setOptions setGroup setValue setEvents setOverriddenByEnv } count } }\"\n  }'\n

See the GraphQL API Endpoint for more details.

"},{"location":"API_SYNC/","title":"Sync API Endpoint","text":"

The /sync endpoint is used by the SYNC plugin to synchronize data between multiple NetAlertX instances (e.g., from a node to a hub). It supports both GET and POST requests.

"},{"location":"API_SYNC/#91-get-sync","title":"9.1 GET /sync","text":"

Fetches data from a node to the hub. The data is returned as a base64-encoded JSON file.

Example Request:

curl 'http://<server>:<GRAPHQL_PORT>/sync' \\\n  -H 'Authorization: Bearer <API_TOKEN>'\n

Response Example:

{\n  \"node_name\": \"NODE-01\",\n  \"status\": 200,\n  \"message\": \"OK\",\n  \"data_base64\": \"eyJkZXZpY2VzIjogW3siZGV2TWFjIjogIjAwOjExOjIyOjMzOjQ0OjU1IiwiZGV2TmFtZSI6ICJEZXZpY2UgMSJ9XSwgImNvdW50Ijog1fQ==\",\n  \"timestamp\": \"2025-08-24T10:15:00+10:00\"\n}\n

Notes:

  • data_base64 contains the full JSON data encoded in Base64 (see the decoding sketch after these notes).
  • node_name corresponds to the SYNC_node_name setting on the node.
  • Errors (e.g., missing file) return HTTP 500 with an error message.
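
A rough Python sketch of consuming the GET response (requests library assumed; server, port, and token are placeholders), decoding the data_base64 field back into JSON:

import base64
import json
import requests

URL = "http://<server>:<GRAPHQL_PORT>/sync"         # placeholders to replace
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}

payload = requests.get(URL, headers=HEADERS, timeout=30).json()
print("node:", payload.get("node_name"), "status:", payload.get("status"))

# data_base64 holds the full JSON document, Base64-encoded; its exact structure
# depends on what the node's SYNC plugin exports
decoded = base64.b64decode(payload["data_base64"])
data = json.loads(decoded)
print("devices received:", len(data.get("devices", [])))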
"},{"location":"API_SYNC/#92-post-sync","title":"9.2 POST /sync","text":"

The POST endpoint is used by nodes to send data to the hub. The hub expects the data as form-encoded fields (application/x-www-form-urlencoded or multipart/form-data). The hub then stores the data in the plugin log folder for processing.

"},{"location":"API_SYNC/#required-fields","title":"Required Fields","text":"Field Type Description data string The payload from the plugin or devices. Typically plain text, JSON, or encrypted Base64 data. In your Python script, encrypt_data() is applied before sending. node_name string The name of the node sending the data. Matches the node\u2019s SYNC_node_name setting. Used to generate the filename on the hub. plugin string The name of the plugin sending the data. Determines the filename prefix (last_result.<plugin>...). file_path string (optional) Path of the local file being sent. Used only for logging/debugging purposes on the hub; not required for processing."},{"location":"API_SYNC/#how-the-hub-processes-the-post-data","title":"How the Hub Processes the POST Data","text":"
  1. Receives the data and validates the API token.
  2. Stores the raw payload in:
INSTALL_PATH/log/plugins/last_result.<plugin>.encoded.<node_name>.<sequence>.log\n
    • <plugin> \u2192 plugin name from the POST request.
    • <node_name> \u2192 node name from the POST request.
    • <sequence> \u2192 incremented number for each submission.
  3. Decodes / decrypts the data if necessary (Base64 or encrypted) before processing.
  4. Processes JSON payloads (e.g., device info) to:
    • Avoid duplicates by tracking devMac.
    • Add metadata like devSyncHubNode.
    • Insert new devices into the database.
  5. Renames files to indicate they have been processed:
processed_last_result.<plugin>.<node_name>.<sequence>.log\n
"},{"location":"API_SYNC/#example-post-payload","title":"Example POST Payload","text":"

If a node is sending device data:

curl -X POST 'http://<hub>:<PORT>/sync' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -F 'data={\"data\":[{\"devMac\":\"00:11:22:33:44:55\",\"devName\":\"Device 1\",\"devVendor\":\"Vendor A\",\"devLastIP\":\"192.168.1.10\"}]}' \\\n  -F 'node_name=NODE-01' \\\n  -F 'plugin=SYNC'\n
  • The data field contains JSON with a data array, where each element is a device object or plugin data object.
  • The plugin and node_name fields allow the hub to organize and store the file correctly.
  • The data is only processed if the relevant plugins are enabled and run on the target server.
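
The same POST can be issued from Python. A sketch (requests library assumed; hub address and token are placeholders; the payload is sent unencrypted for simplicity, whereas a real node applies encrypt_data() first, as noted in the Key Notes below):

import json
import requests

URL = "http://<hub>:<PORT>/sync"                    # placeholders to replace
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}

devices = {"data": [{"devMac": "00:11:22:33:44:55",
                     "devName": "Device 1",
                     "devVendor": "Vendor A",
                     "devLastIP": "192.168.1.10"}]}

# Form-encoded fields, matching the curl -F example above; the hub accepts
# application/x-www-form-urlencoded as well as multipart/form-data
form = {
    "data": json.dumps(devices),
    "node_name": "NODE-01",
    "plugin": "SYNC",
}

resp = requests.post(URL, headers=HEADERS, data=form, timeout=30)
print(resp.status_code, resp.text)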
"},{"location":"API_SYNC/#key-notes","title":"Key Notes","text":"
  • Always use the same plugin and node_name values for consistent storage.
  • Encrypted data: The Python script uses encrypt_data() before sending, and the hub decodes it before processing.
  • Sequence numbers: Every submission generates a new sequence, preventing overwriting previous data.
  • Form-encoded: The hub expects multipart/form-data (cURL -F) or application/x-www-form-urlencoded.

Storage Details:

  • Data is stored under INSTALL_PATH/log/plugins with filenames following the pattern:
last_result.<plugin>.encoded.<node_name>.<sequence>.log\n
  • Both encoded and decoded files are tracked, and new submissions increment the sequence number.
  • If storing fails, the API returns HTTP 500 with an error message.
  • The data is only processed if the relevant plugins are enabled and run on the target server.
"},{"location":"API_SYNC/#93-notes-and-best-practices","title":"9.3 Notes and Best Practices","text":"
  • Authorization Required \u2013 Both GET and POST require a valid API token.
  • Data Integrity \u2013 Ensure that node_name and plugin are consistent to avoid overwriting files.
  • Monitoring \u2013 Notifications are generated whenever data is sent or received (write_notification), which can be used for alerting or auditing.
  • Use Case \u2013 Typically used in multi-node deployments to consolidate device and event data on a central hub.
"},{"location":"API_TESTS/","title":"Tests","text":""},{"location":"API_TESTS/#unit-tests","title":"Unit Tests","text":"

Warning

Please note these tests modify data in the database.

  1. See the /test directory for available test cases. These are not exhaustive but cover the main API endpoints.
  2. To run a test case, SSH into the container: sudo docker exec -it netalertx /bin/bash
  3. Inside the container, install pytest (if not already installed): pip install pytest
  4. Run a specific test case: pytest /app/test/TESTFILE.py
"},{"location":"AUTHELIA/","title":"Authelia","text":""},{"location":"AUTHELIA/#authelia-support","title":"Authelia support","text":"

Warning

This is community contributed content and work in progress. Contributions are welcome.

theme: dark\n\ndefault_2fa_method: \"totp\"\n\nserver:\n  address: 0.0.0.0:9091\n  endpoints:\n    enable_expvars: false\n    enable_pprof: false\n    authz:\n      forward-auth:\n        implementation: 'ForwardAuth'\n        authn_strategies:\n          - name: 'HeaderAuthorization'\n            schemes:\n              - 'Basic'\n          - name: 'CookieSession'\n      ext-authz:\n        implementation: 'ExtAuthz'\n        authn_strategies:\n          - name: 'HeaderAuthorization'\n            schemes:\n              - 'Basic'\n          - name: 'CookieSession'\n      auth-request:\n        implementation: 'AuthRequest'\n        authn_strategies:\n          - name: 'HeaderAuthRequestProxyAuthorization'\n            schemes:\n              - 'Basic'\n          - name: 'CookieSession'\n      legacy:\n        implementation: 'Legacy'\n        authn_strategies:\n          - name: 'HeaderLegacy'\n          - name: 'CookieSession'\n  disable_healthcheck: false\n  tls:\n    key: \"\"\n    certificate: \"\"\n    client_certificates: []\n  headers:\n    csp_template: \"\"\n\nlog:\n  ## Level of verbosity for logs: info, debug, trace.\n  level: info\n\n###############################################################\n# The most important section\n###############################################################\naccess_control:\n  ## Default policy can either be 'bypass', 'one_factor', 'two_factor' or 'deny'.\n  default_policy: deny\n  networks:\n    - name: internal\n      networks:\n        - '192.168.0.0/18'\n        - '10.10.10.0/8' # Zerotier\n    - name: private\n      networks:\n        - '172.16.0.0/12'\n  rules:\n    - networks:\n        - private\n      domain:\n        - '*'\n      policy: bypass\n    - networks:\n        - internal\n      domain:\n        - '*'\n      policy: bypass\n    - domain:\n        # exclude itself from auth, should not happen as we use Traefik middleware on a case-by-case screnario\n        - 'auth.MYDOMAIN1.TLD'\n        - 'authelia.MYDOMAIN1.TLD'\n        - 'auth.MYDOMAIN2.TLD'\n        - 'authelia.MYDOMAIN2.TLD'\n      policy: bypass\n    - domain:\n        #All subdomains match\n        - 'MYDOMAIN1.TLD'\n        - '*.MYDOMAIN1.TLD'\n      policy: two_factor\n    - domain:\n        # This will not work yet as Authelio does not support multi-domain authentication\n        - 'MYDOMAIN2.TLD'\n        - '*.MYDOMAIN2.TLD'\n      policy: two_factor\n\n\n############################################################\nidentity_validation:\n  reset_password:\n    jwt_secret: \"[REDACTED]\"\n\nidentity_providers:\n  oidc:\n    enable_client_debug_messages: true\n    enforce_pkce: public_clients_only\n    hmac_secret: [REDACTED]\n    lifespans:\n      authorize_code: 1m\n      id_token: 1h\n      refresh_token: 90m\n      access_token: 1h\n    cors:\n      endpoints:\n        - authorization\n        - token\n        - revocation\n        - introspection\n        - userinfo\n      allowed_origins:\n        - \"*\"\n      allowed_origins_from_client_redirect_uris: false\n    jwks:\n      - key: [REDACTED]\n        certificate_chain:\n    clients:\n      - client_id: portainer\n        client_name: Portainer\n        # generate secret with \"authelia crypto hash generate pbkdf2 --random --random.length 32 --random.charset alphanumeric\"\n        # Random Password: [REDACTED]\n        # Digest: [REDACTED]\n        client_secret: [REDACTED]\n        token_endpoint_auth_method: 'client_secret_post'\n        public: false\n        authorization_policy: two_factor\n        
consent_mode: pre-configured #explicit\n        pre_configured_consent_duration: '6M' #Must be re-authorised every 6 Months\n        scopes:\n          - openid\n          #- groups #Currently not supported in Authelia V\n          - email\n          - profile\n        redirect_uris:\n          - https://portainer.MYDOMAIN1.LTD\n        userinfo_signed_response_alg: none\n\n      - client_id: openproject\n        client_name: OpenProject\n        # generate secret with \"authelia crypto hash generate pbkdf2 --random --random.length 32 --random.charset alphanumeric\"\n        # Random Password: [REDACTED]\n        # Digest: [REDACTED]\n        client_secret: [REDACTED]\n        token_endpoint_auth_method: 'client_secret_basic'\n        public: false\n        authorization_policy: two_factor\n        consent_mode: pre-configured #explicit\n        pre_configured_consent_duration: '6M' #Must be re-authorised every 6 Months\n        scopes:\n          - openid\n          #- groups #Currently not supported in Authelia V\n          - email\n          - profile\n        redirect_uris:\n          - https://op.MYDOMAIN.TLD\n        #grant_types:\n        #  - refresh_token\n        #  - authorization_code\n        #response_types:\n        #  - code\n        #response_modes:\n        #  - form_post\n        #  - query\n        #  - fragment\n        userinfo_signed_response_alg: none\n##################################################################\n\n\ntelemetry:\n  metrics:\n    enabled: false\n    address: tcp://0.0.0.0:9959\n\ntotp:\n  disable: false\n  issuer: authelia.com\n  algorithm: sha1\n  digits: 6\n  period: 30 ## The period in seconds a one-time password is valid for.\n  skew: 1\n  secret_size: 32\n\nwebauthn:\n  disable: false\n  timeout: 60s ## Adjust the interaction timeout for Webauthn dialogues.\n  display_name: Authelia\n  attestation_conveyance_preference: indirect\n  user_verification: preferred\n\nntp:\n  address: \"pool.ntp.org\"\n  version: 4\n  max_desync: 5s\n  disable_startup_check: false\n  disable_failure: false\n\nauthentication_backend:\n  password_reset:\n    disable: false\n    custom_url: \"\"\n  refresh_interval: 5m\n  file:\n    path: /config/users_database.yml\n    watch: true\n    password:\n      algorithm: argon2\n      argon2:\n        variant: argon2id\n        iterations: 3\n        memory: 65536\n        parallelism: 4\n        key_length: 32\n        salt_length: 16\n\npassword_policy:\n  standard:\n    enabled: false\n    min_length: 8\n    max_length: 0\n    require_uppercase: true\n    require_lowercase: true\n    require_number: true\n    require_special: true\n  ## zxcvbn is a well known and used password strength algorithm. 
It does not have tunable settings.\n  zxcvbn:\n    enabled: false\n    min_score: 3\n\nregulation:\n  max_retries: 3\n  find_time: 2m\n  ban_time: 5m\n\nsession:\n  name: authelia_session\n  secret: [REDACTED]\n  expiration: 60m\n  inactivity: 15m\n  cookies:\n    - domain: 'MYDOMAIN1.LTD'\n      authelia_url: 'https://auth.MYDOMAIN1.LTD'\n      name: 'authelia_session'\n      default_redirection_url: 'https://MYDOMAIN1.LTD'\n    - domain: 'MYDOMAIN2.LTD'\n      authelia_url: 'https://auth.MYDOMAIN2.LTD'\n      name: 'authelia_session_other'\n      default_redirection_url: 'https://MYDOMAIN2.LTD'\n\nstorage:\n  encryption_key: [REDACTED]\n  local:\n    path: /config/db.sqlite3\n\nnotifier:\n  disable_startup_check: true\n  smtp:\n    address: MYOTHERDOMAIN.LTD:465\n    timeout: 5s\n    username: \"USER@DOMAIN\"\n    password: \"[REDACTED]\"\n    sender: \"Authelia <postmaster@MYOTHERDOMAIN.LTD>\"\n    identifier: NAME@MYOTHERDOMAIN.LTD\n    subject: \"[Authelia] {title}\"\n    startup_check_address: postmaster@MYOTHERDOMAIN.LTD\n\n
"},{"location":"BACKUPS/","title":"Backing Things Up","text":"

Note

To back up 99% of your configuration, back up at least the /data/config folder. Database definitions can change between releases, so the safest method is to restore backups using the same app version they were taken from, then upgrade incrementally.

"},{"location":"BACKUPS/#what-to-back-up","title":"What to Back Up","text":"

There are four key artifacts you can use to back up your NetAlertX configuration:

| File | Description | Limitations |
|------|-------------|-------------|
| /db/app.db | The application database | Might be in an uncommitted state or corrupted |
| /config/app.conf | Configuration file | Can be overridden using the APP_CONF_OVERRIDE variable |
| /config/devices.csv | CSV file containing device data | Does not include historical data |
| /config/workflows.json | JSON file containing your workflows | N/A |
"},{"location":"BACKUPS/#where-the-data-lives","title":"Where the Data Lives","text":"

Understanding where your data is stored helps you plan your backup strategy.

"},{"location":"BACKUPS/#core-configuration","title":"Core Configuration","text":"

Stored in /data/config/app.conf. This includes settings for:

  • Notifications
  • Scanning
  • Scheduled maintenance
  • UI preferences

(See Settings System for details.)

"},{"location":"BACKUPS/#device-data","title":"Device Data","text":"

Stored in /data/config/devices_<timestamp>.csv or /data/config/devices.csv, created by the CSV Backup CSVBCKP Plugin. Contains:

  • Device names, icons, and categories
  • Network configuration
  • Custom properties
"},{"location":"BACKUPS/#historical-data","title":"Historical Data","text":"

Stored in /data/db/app.db (see Database Overview). Contains:

  • Plugin data and historical entries
  • Event and notification history
  • Device presence history
"},{"location":"BACKUPS/#backup-strategies","title":"Backup Strategies","text":"

The safest approach is to back up both the /db and /config folders regularly. Tools like Kopia make this simple and efficient.

If you can only keep a few files, prioritize:

  1. The latest devices_<timestamp>.csv or devices.csv
  2. app.conf
  3. workflows.json

You can also download the app.conf and devices.csv files from the Maintenance section:

"},{"location":"BACKUPS/#scenario-1-full-backup-and-restore","title":"Scenario 1: Full Backup and Restore","text":"

Goal: Full recovery of your configuration and data.

"},{"location":"BACKUPS/#what-to-back-up_1","title":"\ud83d\udcbe What to Back Up","text":"
  • /data/db/app.db (uncorrupted)
  • /data/config/app.conf
  • /data/config/workflows.json
"},{"location":"BACKUPS/#how-to-restore","title":"\ud83d\udce5 How to Restore","text":"

Map these files into your container as described in the Setup documentation.

"},{"location":"BACKUPS/#scenario-2-corrupted-database","title":"Scenario 2: Corrupted Database","text":"

Goal: Recover configuration and device data when the database is lost or corrupted.

"},{"location":"BACKUPS/#what-to-back-up_2","title":"\ud83d\udcbe What to Back Up","text":"
  • /data/config/app.conf
  • /data/config/workflows.json
  • /data/config/devices_<timestamp>.csv (rename to devices.csv during restore)
"},{"location":"BACKUPS/#how-to-restore_1","title":"\ud83d\udce5 How to Restore","text":"
  1. Copy app.conf and workflows.json into /data/config/
  2. Rename and place devices_<timestamp>.csv \u2192 /data/config/devices.csv
  3. Restore via the Maintenance section under Devices \u2192 Bulk Editing

This recovers nearly all configuration, workflows, and device metadata.
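
For a Docker install, the restore can also be scripted from the host. A rough sketch only; local_path, the backup folder, and the timestamp below are placeholders for your own paths and file names:

docker stop netalertx\ncp backup/app.conf backup/workflows.json local_path/config/\ncp backup/devices_2025-01-01.csv local_path/config/devices.csv\ndocker start netalertx\n

After the container is up, trigger the device import from the Maintenance section as described in step 3 above.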

"},{"location":"BACKUPS/#docker-based-backup-and-restore","title":"Docker-Based Backup and Restore","text":"

For users running NetAlertX via Docker, you can back up or restore directly from your host system \u2014 a convenient and scriptable option.

"},{"location":"BACKUPS/#full-backup-file-level","title":"Full Backup (File-Level)","text":"
  1. Stop the container:

bash docker stop netalertx

  2. Create a compressed archive of your configuration and database volumes:

bash docker run --rm -v local_path/config:/config -v local_path/db:/db alpine tar -cz /config /db > netalertx-backup.tar.gz

  3. Restart the container:

bash docker start netalertx

"},{"location":"BACKUPS/#restore-from-backup","title":"Restore from Backup","text":"
  1. Stop the container:

bash docker stop netalertx

  2. Restore from your backup file:

bash docker run --rm -i -v local_path/config:/config -v local_path/db:/db alpine tar -C / -xz < netalertx-backup.tar.gz

  3. Restart the container:

bash docker start netalertx

This approach uses a temporary, minimal alpine container to access Docker-managed volumes. The tar command creates or extracts an archive directly from your host\u2019s filesystem, making it fast, clean, and reliable for both automation and manual recovery.
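
If you want to automate this, the same commands can be wrapped in a small script and run from cron. A sketch only, assuming the same local_path volumes and a /backups folder of your choosing:

#!/bin/sh\n# Paths are placeholders - adjust to your setup\nSTAMP=$(date +%Y-%m-%d)\ndocker stop netalertx\ndocker run --rm -v local_path/config:/config -v local_path/db:/db alpine tar -cz /config /db > /backups/netalertx-$STAMP.tar.gz\ndocker start netalertx\n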

"},{"location":"BACKUPS/#summary","title":"Summary","text":"
  • Back up /data/config for configuration and devices; /data/db for history
  • Keep regular backups, especially before upgrades
  • For Docker setups, use the lightweight alpine-based backup method for consistency and portability
"},{"location":"BUILDS/","title":"NetAlertX Builds: Choose Your Path","text":"

NetAlertX provides different installation methods for different needs. This guide helps you choose the right path for security, experimentation, or development.

"},{"location":"BUILDS/#1-hardened-appliance-default-production","title":"1. Hardened Appliance (Default Production)","text":"

Note

Use this image if: You want to use NetAlertX securely.

"},{"location":"BUILDS/#who-is-this-for","title":"Who is this for?","text":"

All users who want a stable, secure, \"set-it-and-forget-it\" appliance.

"},{"location":"BUILDS/#methodology","title":"Methodology","text":"
  • Multi-stage Alpine build
  • Aggressively \"amputated\"
  • Locked down for max security
"},{"location":"BUILDS/#source","title":"Source","text":"

Dockerfile (hardened target)

"},{"location":"BUILDS/#2-tinkerers-image-insecure-vm-style","title":"2. \"Tinkerer's\" Image (Insecure VM-Style)","text":"

Note

Use this image if: You want to experiment with NetAlertX.

"},{"location":"BUILDS/#who-is-this-for_1","title":"Who is this for?","text":"

Power users, developers, and \"tinkerers\" wanting a familiar \"VM-like\" experience.

"},{"location":"BUILDS/#methodology_1","title":"Methodology","text":"
  • Traditional Debian build
  • Includes full un-hardened OS
  • Contains apt, sudo, git
"},{"location":"BUILDS/#source_1","title":"Source","text":"

Dockerfile.debian

"},{"location":"BUILDS/#3-contributors-devcontainer-project-developers","title":"3. Contributor's Devcontainer (Project Developers)","text":"

Note

Use this image if: You want to develop NetAlertX itself.

"},{"location":"BUILDS/#who-is-this-for_2","title":"Who is this for?","text":"

Project contributors who are actively writing and debugging code for NetAlertX.

"},{"location":"BUILDS/#methodology_2","title":"Methodology","text":"
  • Builds FROM runner stage
  • Loaded by VS Code
  • Full debug tools: xdebug, pytest
"},{"location":"BUILDS/#source_2","title":"Source","text":"

Dockerfile (devcontainer target)

"},{"location":"BUILDS/#visualizing-the-trade-offs","title":"Visualizing the Trade-Offs","text":"

This chart compares the three builds across key attributes. A higher score means \"more of\" that attribute. Notice the clear trade-offs between security and development features.

"},{"location":"BUILDS/#build-process-origins","title":"Build Process & Origins","text":"

The final images originate from two different files and build paths. The main Dockerfile uses stages to create both the hardened and development container images.

"},{"location":"BUILDS/#official-build-path","title":"Official Build Path","text":"

Dockerfile -> builder (Stage 1) -> runner (Stage 2) -> hardened (Final Stage) (Production Image) + devcontainer (Final Stage) (Developer Image)

"},{"location":"BUILDS/#legacy-build-path","title":"Legacy Build Path","text":"

Dockerfile.debian -> \"Tinkerer's\" Image (Insecure VM-Style Image)
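
If you build the images yourself, the stages above map to docker build targets. A sketch, assuming you run it from the repository root (the image tags are arbitrary):

# Hardened production image\ndocker build --target hardened -t netalertx:hardened .\n# Contributor's devcontainer image\ndocker build --target devcontainer -t netalertx:devcontainer .\n# Legacy \"Tinkerer's\" image\ndocker build -f Dockerfile.debian -t netalertx:debian .\n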

"},{"location":"COMMON_ISSUES/","title":"Troubleshooting Common Issues","text":"

Tip

Before troubleshooting, ensure you have set the correct Debugging and LOG_LEVEL.

"},{"location":"COMMON_ISSUES/#docker-container-doesnt-start","title":"Docker Container Doesn't Start","text":"

Initial setup issues are often caused by missing permissions or incorrectly mapped volumes. Always double-check your docker run or docker-compose.yml against the official setup guide before proceeding.

"},{"location":"COMMON_ISSUES/#permissions","title":"Permissions","text":"

Make sure your file permissions are correctly set:

  • If you encounter AJAX errors, cannot write to the database, or see an empty screen, check that permissions are correct and review the logs under /tmp/log.
  • To fix permission issues with the database, update the owner and group of app.db as described in the File Permissions guide.
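
As an illustration, assuming your host folders are mapped to the app's config and db folders and the app runs as UID/GID 20211 (as suggested by the tmpfs example in the Debug Tips section; verify against the File Permissions guide for your install):

# local_path is a placeholder for your mapped data folder\nsudo chown -R 20211:20211 local_path/config local_path/db\n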
"},{"location":"COMMON_ISSUES/#container-restarts-crashes","title":"Container Restarts / Crashes","text":"
  • Check the logs for details. Often, required settings are missing.
  • For more detailed troubleshooting, see Debug and Troubleshooting Tips.
  • To observe errors directly, run the container in the foreground instead of -d:
docker run --rm -it <your_image>\n
"},{"location":"COMMON_ISSUES/#docker-container-starts-but-the-application-misbehaves","title":"Docker Container Starts, But the Application Misbehaves","text":"

If the container starts but the app shows unexpected behavior, the cause is often data corruption, incorrect configuration, or unexpected input data.

"},{"location":"COMMON_ISSUES/#continuous-loading-screen","title":"Continuous \"Loading...\" Screen","text":"

A misconfigured application may display a persistent Loading... dialog. This is usually caused by the backend failing to start.

Steps to troubleshoot:

  1. Check Maintenance \u2192 Logs for exceptions.
  2. If no exception is visible, check the Portainer logs.
  3. Start the container in the foreground to observe exceptions.
  4. Enable trace or debug logging for detailed output (see Debug Tips).
  5. Verify that GRAPHQL_PORT is correctly configured (see the port check below).
  6. Check browser logs (press F12):

     • Console tab \u2192 refresh the page
     • Network tab \u2192 refresh the page

If you are unsure how to resolve errors, provide screenshots or log excerpts in your issue report or Discord discussion.
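
To quickly confirm the backend's GraphQL server is listening (step 5 above), you can check the port on the host. A sketch, assuming the default GRAPHQL_PORT of 20212 and that the ss utility is available:

ss -tlnp | grep 20212 || echo \"GraphQL port 20212 is not listening\"\n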

"},{"location":"COMMON_ISSUES/#common-configuration-issues","title":"Common Configuration Issues","text":""},{"location":"COMMON_ISSUES/#incorrect-scan_subnets","title":"Incorrect SCAN_SUBNETS","text":"

If SCAN_SUBNETS is misconfigured, you may see only a few devices in your device list after a scan. See the Subnets Documentation for proper configuration.
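
For reference, a typical value lists one subnet in CIDR notation together with the interface to scan. The subnet and interface below are placeholders; use your own values and see the Subnets Documentation for multi-subnet examples:

SCAN_SUBNETS=['192.168.1.0/24 --interface=eth0']\n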

"},{"location":"COMMON_ISSUES/#duplicate-devices-and-notifications","title":"Duplicate Devices and Notifications","text":"
  • Devices are identified by their MAC address.
  • If a device's MAC changes, it will be treated as a new device, triggering notifications.
  • Prevent this by adjusting your device configuration for Android, iOS, or Windows. See the Random MACs Guide.
"},{"location":"COMMON_ISSUES/#unable-to-resolve-host","title":"Unable to Resolve Host","text":"
  • Ensure SCAN_SUBNETS uses the correct mask and --interface.
  • Refer to the Subnets Documentation for detailed guidance.
"},{"location":"COMMON_ISSUES/#invalid-json-errors","title":"Invalid JSON Errors","text":"
  • Follow the steps in Invalid JSON Errors Debug Help.
"},{"location":"COMMON_ISSUES/#sudo-execution-fails-eg-on-arpscan-on-raspberry-pi-4","title":"Sudo Execution Fails (e.g., on arpscan on Raspberry Pi 4)","text":"

Error:

sudo: unexpected child termination condition: 0\n

Resolution:

wget ftp.us.debian.org/debian/pool/main/libs/libseccomp/libseccomp2_2.5.3-2_armhf.deb\nsudo dpkg -i libseccomp2_2.5.3-2_armhf.deb\n

\u26a0\ufe0f The link may break over time. Check Debian Packages for the latest version.

"},{"location":"COMMON_ISSUES/#only-router-and-own-device-show-up","title":"Only Router and Own Device Show Up","text":"
  • Verify the subnet and interface in SCAN_SUBNETS.
  • On devices with multiple Ethernet ports, you may need to change eth0 to the correct interface.
"},{"location":"COMMON_ISSUES/#losing-settings-or-devices-after-update","title":"Losing Settings or Devices After Update","text":"
  • Ensure /data/db and /data/config are mapped to persistent storage.
  • Without persistent volumes, these folders are recreated on every update.
  • See Docker Volumes Setup for proper configuration.
"},{"location":"COMMON_ISSUES/#application-performance-issues","title":"Application Performance Issues","text":"

Slowness can be caused by:

  • Incorrect settings (causing app restarts) \u2192 check app.log.
  • Too many background processes \u2192 disable unnecessary scanners.
  • Long scans \u2192 limit the number of scanned devices.
  • Excessive disk operations or failing maintenance plugins.

See Performance Tips for detailed optimization steps.

"},{"location":"COMMON_ISSUES/#ip-flipping","title":"IP flipping","text":"

With ARPSCAN scans, some devices might flip IP addresses after each scan, triggering false notifications. This happens because some devices respond to broadcast calls, so a different IP can be logged after each scan.

See how to prevent IP flipping in the ARPSCAN plugin guide.

Alternatively, adjust your notification settings to prevent false positives by filtering out events or devices.

"},{"location":"COMMUNITY_GUIDES/","title":"Community Guides","text":"

Use the official installation guides first and treat community content as supplementary material. Open an issue or PR if you'd like to add your link to the list \ud83d\ude4f (ordered by last update time)

  • \u25b6 Discover & Monitor Your Network with This Self-Hosted Open Source Tool - Lawrence Systems (June 2025)
  • \u25b6 Home Lab Network Monitoring - Scotti-BYTE Enterprise Consulting Services (July 2024)
  • \ud83d\udcc4 How to Install NetAlertX on Your Synology NAS - Marius hosting (Updated frequently)
  • \ud83d\udcc4 Using the PiAlert Network Security Scanner on a Raspberry Pi - PiMyLifeUp
  • \u25b6 How to Setup Pi.Alert on Your Synology NAS - Digital Aloha
  • \ud83d\udcc4 Anti-freeloading tool and network security helper | Deploying the NetAlertX network scanning and notification system on a ZSPACE NAS (Chinese)
  • \ud83d\udcc4 Installing and using the Pi.Alert network scanner with Docker on Synology/XPEnology (Korean) (July 2023)
  • \ud83d\udcc4 Network intrusion detector Pi.Alert (Chinese) (May 2023)
  • \u25b6 Pi.Alert on Synology & Docker (German) - J\u00fcrgen Barth (March 2023)
  • \u25b6 Top Docker Container for Home Server Security - VirtualizationHowto (March 2023)
  • \u25b6 Pi.Alert or WatchYourLAN can alert you to unknown devices appearing on your WiFi or LAN network - Danie van der Merwe (November 2022)
"},{"location":"CUSTOM_PROPERTIES/","title":"Custom Properties for Devices","text":""},{"location":"CUSTOM_PROPERTIES/#overview","title":"Overview","text":"

This functionality allows you to define custom properties for devices, which can store and display additional information on the device listing page. By marking properties as \"Show\", you can enhance the user interface with quick actions, notes, or external links.

"},{"location":"CUSTOM_PROPERTIES/#key-features","title":"Key Features:","text":"
  • Customizable Properties: Define specific properties for each device.
  • Visibility Control: Choose which properties are displayed on the device listing page.
  • Interactive Elements: Include actions like links, modals, and device management directly in the interface.
"},{"location":"CUSTOM_PROPERTIES/#defining-custom-properties","title":"Defining Custom Properties","text":"

Custom properties are structured as a list of objects, where each property includes the following fields:

| Field | Description |
|-------|-------------|
| CUSTPROP_icon | The icon (Base64-encoded HTML) displayed for the property. |
| CUSTPROP_type | The action type (e.g., show_notes, link, delete_dev). |
| CUSTPROP_name | A short name or title for the property. |
| CUSTPROP_args | Arguments for the action (e.g., URL or modal text). |
| CUSTPROP_notes | Additional notes or details displayed when applicable. |
| CUSTPROP_show | A boolean to control visibility (true to show on the listing page). |
"},{"location":"CUSTOM_PROPERTIES/#available-action-types","title":"Available Action Types","text":"
  • Show Notes: Displays a modal with a title and additional notes.
  • Example: Show firmware details or custom messages.
  • Link: Redirects to a specified URL in the current browser tab. (Arguments need to contain the full URL.)
  • Link (New Tab): Opens a specified URL in a new browser tab. (Arguments need to contain the full URL.)
  • Delete Device: Deletes the device using its MAC address.
  • Run Plugin: Placeholder for executing custom plugins (not implemented yet).
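
As an illustration only (the exact storage format may differ between releases), a single property using the fields above could be assembled as JSON and base64-encoded for the devCustomProps field like this:

# Encode an example property list for devCustomProps (values are placeholders)\necho '[{\"CUSTPROP_name\":\"Admin UI\",\"CUSTPROP_type\":\"link_new_tab\",\"CUSTPROP_args\":\"https://192.168.1.10\",\"CUSTPROP_notes\":\"\",\"CUSTPROP_icon\":\"\",\"CUSTPROP_show\":true}]' | base64\n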
"},{"location":"CUSTOM_PROPERTIES/#usage-on-the-device-listing-page","title":"Usage on the Device Listing Page","text":"

Visible properties (CUSTPROP_show: true) are displayed as interactive icons in the device listing. Each icon can perform one of the following actions based on the CUSTPROP_type:

  1. Modals (e.g., Show Notes):

     • Displays detailed information in a popup modal.
     • Example: Firmware version details.

  2. Links:

     • Redirect to an external or internal URL.
     • Example: Open a device's documentation or external site.

  3. Device Actions:

     • Manage devices with actions like delete.
     • Example: Quickly remove a device from the network.

  4. Plugins:

     • Future placeholder for running custom plugin scripts.
     • Note: Not implemented yet.
"},{"location":"CUSTOM_PROPERTIES/#example-use-cases","title":"Example Use Cases","text":"
  1. Device Documentation Link: Add a custom property with CUSTPROP_type set to link or link_new_tab to allow quick navigation to the external documentation of the device.

  2. Firmware Details: Use CUSTPROP_type: show_notes to display firmware versions or upgrade instructions in a modal.

  3. Device Removal: Enable device removal functionality using CUSTPROP_type: delete_dev.
"},{"location":"CUSTOM_PROPERTIES/#notes","title":"Notes","text":"
  • Plugin Functionality: The run_plugin action type is currently not implemented and will show an alert if used.
  • Custom Icons (Experimental \ud83e\uddea): Use Base64-encoded HTML to provide custom icons for each property. You can add your icons in Settings via the CUSTPROP_icon setting.
  • Visibility Control: Only properties with CUSTPROP_show: true will appear on the listing page.

This feature provides a flexible way to enhance device management and display with interactive elements tailored to your needs.

"},{"location":"DATABASE/","title":"A high-level description of the database structure","text":"

An overview of the most important database tables as well as a detailed overview of the Devices table. The MAC address is used as a foreign key in most cases.

"},{"location":"DATABASE/#devices-database-table","title":"Devices database table","text":"Field Name Description Sample Value devMac MAC address of the device. 00:1A:2B:3C:4D:5E devName Name of the device. iPhone 12 devOwner Owner of the device. John Doe devType Type of the device (e.g., phone, laptop, etc.). If set to a network type (e.g., switch), it will become selectable as a Network Parent Node. Laptop devVendor Vendor/manufacturer of the device. Apple devFavorite Whether the device is marked as a favorite. 1 devGroup Group the device belongs to. Home Devices devComments User comments or notes about the device. Used for work purposes devFirstConnection Timestamp of the device's first connection. 2025-03-22 12:07:26+11:00 devLastConnection Timestamp of the device's last connection. 2025-03-22 12:07:26+11:00 devLastIP Last known IP address of the device. 192.168.1.5 devStaticIP Whether the device has a static IP address. 0 devScan Whether the device should be scanned. 1 devLogEvents Whether events related to the device should be logged. 0 devAlertEvents Whether alerts should be generated for events. 1 devAlertDown Whether an alert should be sent when the device goes down. 0 devSkipRepeated Whether to skip repeated alerts for this device. 1 devLastNotification Timestamp of the last notification sent for this device. 2025-03-22 12:07:26+11:00 devPresentLastScan Whether the device was present during the last scan. 1 devIsNew Whether the device is marked as new. 0 devLocation Physical or logical location of the device. Living Room devIsArchived Whether the device is archived. 0 devParentMAC MAC address of the parent device (if applicable) to build the Network Tree. 00:1A:2B:3C:4D:5F devParentPort Port of the parent device to which this device is connected. Port 3 devIcon Icon representing the device. The value is a base64-encoded SVG or Font Awesome HTML tag. PHN2ZyB... devGUID Unique identifier for the device. a2f4b5d6-7a8c-9d10-11e1-f12345678901 devSite Site or location where the device is registered. Office devSSID SSID of the Wi-Fi network the device is connected to. HomeNetwork devSyncHubNode The NetAlertX node ID used for synchronization between NetAlertX instances. node_1 devSourcePlugin Source plugin that discovered the device. ARPSCAN devCustomProps Custom properties related to the device. The value is a base64-encoded JSON object. PHN2ZyB... devFQDN Fully qualified domain name. raspberrypi.local devParentRelType The type of relationship between the current device and it's parent node. By default, selecting nic will hide it from lists. nic devReqNicsOnline If all NICs are required to be online to mark teh current device online. 0

To understand how values of these fields influence application behavior, such as Notifications or Network topology, see also:

  • Device Management
  • Network Tree Topology Setup
  • Notifications
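
If you want to inspect these fields directly, you can query the database with the sqlite3 CLI. A read-only sketch; sqlite3 may not be present in the hardened image, in which case you can copy app.db to the host and query it there:

docker exec netalertx sqlite3 /data/db/app.db \"SELECT devMac, devName, devLastIP, devLastConnection FROM Devices LIMIT 10;\"\n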
"},{"location":"DATABASE/#other-tables-overview","title":"Other Tables overview","text":"Table name Description Sample data CurrentScan Result of the current scan Devices The main devices database that also contains the Network tree mappings. If ScanCycle is set to 0 device is not scanned. Events Used to collect connection/disconnection events. Online_History Used to display the Device presence chart Parameters Used to pass values between the frontend and backend. Plugins_Events For capturing events exposed by a plugin via the last_result.log file. If unique then saved into the Plugins_Objects table. Entries are deleted once processed and stored in the Plugins_History and/or Plugins_Objects tables. Plugins_History History of all entries from the Plugins_Events table Plugins_Language_Strings Language strings collected from the plugin config.json files used for string resolution in the frontend. Plugins_Objects Unique objects detected by individual plugins. Sessions Used to display sessions in the charts Settings Database representation of the sum of all settings from app.conf and plugins coming from config.json files."},{"location":"DEBUG_API_SERVER/","title":"Debugging GraphQL server issues","text":"

The GraphQL server is an API middle layer, running on its own port specified by GRAPHQL_PORT, to retrieve and show the data in the UI. It can also be used to retrieve data for custom third-party integrations. Check the API documentation for details.

The most common issue is that the GraphQL server doesn't start properly, usually due to a port conflict. If you are running multiple NetAlertX instances, make sure to use unique ports by changing the GRAPHQL_PORT setting. The default is 20212.

"},{"location":"DEBUG_API_SERVER/#how-to-update-the-graphql_port-in-case-of-issues","title":"How to update the GRAPHQL_PORT in case of issues","text":"

As a first troubleshooting step, try changing the default GRAPHQL_PORT setting. Please remember NetAlertX is running on the host, so any application using the same port will cause issues.

"},{"location":"DEBUG_API_SERVER/#updating-the-setting-via-the-settings-ui","title":"Updating the setting via the Settings UI","text":"

Ideally use the Settings UI to update the setting under General -> Core -> GraphQL port:

You might need to temporarily stop other applications or NetAlertX instances causing conflicts to update the setting. The API_TOKEN is used to authenticate any API calls, including GraphQL requests.

"},{"location":"DEBUG_API_SERVER/#updating-the-appconf-file","title":"Updating the app.conf file","text":"

If the UI is not accessible, you can directly edit the app.conf file in your /config folder:

"},{"location":"DEBUG_API_SERVER/#using-a-docker-variable","title":"Using a docker variable","text":"

All application settings can also be initialized via the APP_CONF_OVERRIDE docker env variable.

...\n environment:\n      - PORT=20213\n      - APP_CONF_OVERRIDE={\"GRAPHQL_PORT\":\"20214\"}\n...\n
"},{"location":"DEBUG_API_SERVER/#how-to-check-the-graphql-server-is-running","title":"How to check the GraphQL server is running?","text":"

There are several ways to check if the GraphQL server is running.

"},{"location":"DEBUG_API_SERVER/#init-check","title":"Init Check","text":"

You can navigate to Maintenance -> Init Check to see if isGraphQLServerRunning is ticked:

"},{"location":"DEBUG_API_SERVER/#checking-the-logs","title":"Checking the Logs","text":"

You can navigate to Maintenance -> Logs and search for graphql to see if it started correctly and is serving requests:

"},{"location":"DEBUG_API_SERVER/#inspecting-the-browser-console","title":"Inspecting the Browser console","text":"

In your browser open the dev console (usually F12) and navigate to the Network tab where you can filter GraphQL requests (e.g., reload the Devices page).

You can then inspect any of the POST requests by opening them in a new tab.

"},{"location":"DEBUG_INVALID_JSON/","title":"How to debug the Invalid JSON response error","text":"

Check the HTTP response of the failing backend call by following these steps:

  • Open the developer console in your browser (usually the F12 key, e.g. in Chrome).
  • Follow the steps in this screenshot:

  • Copy the URL causing the error and enter it in the address bar of your browser directly and hit enter. The copied URLs could look something like this (notice the query strings at the end):
  • http://<server>:20211/api/table_devices.json?nocache=1704141103121
  • http://<server>:20211/php/server/devices.php?action=getDevicesTotals

  • Post the error response in the existing issue thread on GitHub or create a new issue and include the redacted response of the failing query.

For reference, the above queries should return results in the following format:

"},{"location":"DEBUG_INVALID_JSON/#first-url","title":"First URL:","text":"
  • Should yield a valid JSON file
"},{"location":"DEBUG_INVALID_JSON/#second-url","title":"Second URL:","text":""},{"location":"DEBUG_INVALID_JSON/#third-url","title":"Third URL:","text":"

You can copy and paste any JSON result (result of the First and Third query) into an online JSON checker, such as this one to check if it's valid.
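
You can also run the same check from the command line. A sketch, assuming curl and jq are installed (replace <server> and use any value for nocache):

curl -s 'http://<server>:20211/api/table_devices.json?nocache=1' | jq . > /dev/null && echo valid || echo invalid\n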

"},{"location":"DEBUG_PHP/","title":"Debugging backend PHP issues","text":""},{"location":"DEBUG_PHP/#logs-in-ui","title":"Logs in UI","text":"

You can view recent backend PHP errors directly in the Maintenance > Logs section of the UI. This provides quick access to logs without needing terminal access.

"},{"location":"DEBUG_PHP/#accessing-logs-directly","title":"Accessing logs directly","text":"

Sometimes, the UI might not be accessible. In that case, you can access the logs directly inside the container.

"},{"location":"DEBUG_PHP/#step-by-step","title":"Step-by-step:","text":"
  1. Open a shell into the container:

bash docker exec -it netalertx /bin/sh

  2. Check the NGINX error log:

bash cat /var/log/nginx/error.log

  3. Check the PHP application error log:

bash cat /tmp/log/app.php_errors.log

These logs will help identify syntax issues, fatal errors, or startup problems when the UI fails to load properly.
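
To follow both logs live while you reproduce the problem, you can tail them in one command (same paths as above):

docker exec -it netalertx tail -f /var/log/nginx/error.log /tmp/log/app.php_errors.log\n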

"},{"location":"DEBUG_PLUGINS/","title":"Troubleshooting plugins","text":"

Tip

Before troubleshooting, please ensure you have the right Debugging and LOG_LEVEL set.

"},{"location":"DEBUG_PLUGINS/#high-level-overview","title":"High-level overview","text":"

If a Plugin supplies data to the main app, it's done either via a SQL query or via a script that updates the last_result.log file in the plugin log folder (app/log/plugins/).

For a more in-depth overview on how plugins work check the Plugins development docs.
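
To see what a script-based plugin produced on its last run, you can list the plugin log folder inside the container. A sketch, assuming the folder above is available at /app/log/plugins/ in the container:

docker exec netalertx ls -l /app/log/plugins/\n# then cat the relevant last_result.log file for the plugin you are debugging\n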

"},{"location":"DEBUG_PLUGINS/#prerequisites","title":"Prerequisites","text":"
  • Make sure you read and followed the specific plugin setup instructions.
  • Ensure you have debug enabled (see More Logging)
"},{"location":"DEBUG_PLUGINS/#potential-issues","title":"Potential issues","text":"
  • Bugs
  • Unexpected input (e.g. special characters in names)
  • Dependencies changed how data is output
"},{"location":"DEBUG_PLUGINS/#incorrect-input-data","title":"Incorrect input data","text":"

Input data from the plugin might cause mapping issues in specific edge cases. Look for a corresponding section in the app.log file, for example notice the first line of the execution run of the PIHOLE plugin below:

17:31:05 [Scheduler] - Scheduler run for PIHOLE: YES\n17:31:05 [Plugin utils] ---------------------------------------------\n17:31:05 [Plugin utils] display_name: PiHole (Device sync)\n17:31:05 [Plugins] CMD: SELECT n.hwaddr AS Object_PrimaryID, {s-quote}null{s-quote} AS Object_SecondaryID, datetime() AS DateTime, na.ip  AS Watched_Value1, n.lastQuery AS Watched_Value2, na.name AS Watched_Value3, n.macVendor AS Watched_Value4, {s-quote}null{s-quote} AS Extra, n.hwaddr AS ForeignKey FROM EXTERNAL_PIHOLE.Network AS n LEFT JOIN EXTERNAL_PIHOLE.Network_Addresses AS na ON na.network_id = n.id WHERE n.hwaddr NOT LIKE {s-quote}ip-%{s-quote} AND n.hwaddr is not {s-quote}00:00:00:00:00:00{s-quote}  AND na.ip is not null\n17:31:05 [Plugins] setTyp: subnets\n17:31:05 [Plugin utils] Flattening the below array\n17:31:05 ['192.168.1.0/24 --interface=eth1']\n17:31:05 [Plugin utils] isinstance(arr, list) : False | isinstance(arr, str) : True\n17:31:05 [Plugins] Resolved value: 192.168.1.0/24 --interface=eth1\n17:31:05 [Plugins] Convert to Base64: True\n17:31:05 [Plugins] base64 value: b'MTkyLjE2OC4xLjAvMjQgLS1pbnRlcmZhY2U9ZXRoMQ=='\n17:31:05 [Plugins] Timeout: 10\n17:31:05 [Plugins] Executing: SELECT n.hwaddr AS Object_PrimaryID, 'null' AS Object_SecondaryID, datetime() AS DateTime, na.ip  AS Watched_Value1, n.lastQuery AS Watched_Value2, na.name AS Watched_Value3, n.macVendor AS Watched_Value4, 'null' AS Extra, n.hwaddr AS ForeignKey FROM EXTERNAL_PIHOLE.Network AS n LEFT JOIN EXTERNAL_PIHOLE.Network_Addresses AS na ON na.network_id = n.id WHERE n.hwaddr NOT LIKE 'ip-%' AND n.hwaddr is not '00:00:00:00:00:00'  AND na.ip is not null\n\ud83d\udd3b\n17:31:05 [Plugins] SUCCESS, received 2 entries\n17:31:05 [Plugins] sqlParam entries: [(0, 'PIHOLE', '01:01:01:01:01:01', 'null', 'null', '2023-12-25 06:31:05', '172.30.0.1', 0, 'aaaa', 'vvvvvvvvv', 'not-processed', 'null', 'null', '01:01:01:01:01:01'), (0, 'PIHOLE', '02:42:ac:1e:00:02', 'null', 'null', '2023-12-25 06:31:05', '172.30.0.2', 0, 'dddd', 'vvvvv2222', 'not-processed', 'null', 'null', '02:42:ac:1e:00:02')]\n17:31:05 [Plugins] Processing        : PIHOLE\n17:31:05 [Plugins] Existing objects from Plugins_Objects: 4\n17:31:05 [Plugins] Logged events from the plugin run    : 2\n17:31:05 [Plugins] pluginEvents      count: 2\n17:31:05 [Plugins] pluginObjects     count: 4\n17:31:05 [Plugins] events_to_insert  count: 0\n17:31:05 [Plugins] history_to_insert count: 4\n17:31:05 [Plugins] objects_to_insert count: 0\n17:31:05 [Plugins] objects_to_update count: 4\n17:31:05 [Plugin utils] In pluginEvents there are 2 events with the status \"watched-not-changed\"\n17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status \"missing-in-last-scan\"\n17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status \"watched-not-changed\"\n17:31:05 [Plugins] Mapping objects to database table: CurrentScan\n17:31:05 [Plugins] SQL query for mapping: INSERT into CurrentScan ( \"cur_MAC\", \"cur_IP\", \"cur_LastQuery\", \"cur_Name\", \"cur_Vendor\", \"cur_ScanMethod\") VALUES ( ?, ?, ?, ?, ?, ?)\n17:31:05 [Plugins] SQL sqlParams for mapping: [('01:01:01:01:01:01', '172.30.0.1', 0, 'aaaa', 'vvvvvvvvv', 'PIHOLE'), ('02:42:ac:1e:00:02', '172.30.0.2', 0, 'dddd', 'vvvvv2222', 'PIHOLE')]\n\ud83d\udd3a\n17:31:05 [API] Update API starting\n17:31:06 [API] Updating table_plugins_history.json file in /api\n

The debug output between the \ud83d\udd3bred arrows\ud83d\udd3a is important for debugging (the arrows were added only to highlight the section on this page; they are not present in the actual debug log).

In the above output notice the section logging how many events are produced by the plugin:

17:31:05 [Plugins] Existing objects from Plugins_Objects: 4\n17:31:05 [Plugins] Logged events from the plugin run    : 2\n17:31:05 [Plugins] pluginEvents      count: 2\n17:31:05 [Plugins] pluginObjects     count: 4\n17:31:05 [Plugins] events_to_insert  count: 0\n17:31:05 [Plugins] history_to_insert count: 4\n17:31:05 [Plugins] objects_to_insert count: 0\n17:31:05 [Plugins] objects_to_update count: 4\n

These values, if formatted correctly, will also show up in the UI:

"},{"location":"DEBUG_PLUGINS/#sharing-application-state","title":"Sharing application state","text":"

Sometimes specific log sections are needed to debug issues. The Devices and CurrentScan table data is sometimes needed to figure out what's wrong.

  1. Please set LOG_LEVEL to trace (Disable it once you have the info as this produces big log files).
  2. Wait for the issue to occur.
  3. Search for ================ DEVICES table content ================ in your logs.
  4. Search for ================ CurrentScan table content ================ in your logs.
  5. Open a new issue and post the (redacted) output into the issue description (or send it to the netalertx@gmail.com email if sensitive data is present).
  6. Please set LOG_LEVEL to debug or lower.
"},{"location":"DEBUG_TIPS/","title":"Debugging and troubleshooting","text":"

Please follow tips 1 - 4 to get a more detailed error.

"},{"location":"DEBUG_TIPS/#1-more-logging","title":"1. More Logging","text":"

When debugging an issue always set the highest log level:

LOG_LEVEL='trace'

"},{"location":"DEBUG_TIPS/#2-surfacing-errors-when-container-restarts","title":"2. Surfacing errors when container restarts","text":"

Start the container via the terminal with a command similar to this one:

docker run \\\n  --network=host \\\n  --restart unless-stopped \\\n  -v /local_data_dir:/data \\\n  -v /etc/localtime:/etc/localtime:ro \\\n  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \\\n  -e PORT=20211 \\\n  -e APP_CONF_OVERRIDE='{\"GRAPHQL_PORT\":\"20214\"}' \\\n  ghcr.io/jokob-sk/netalertx:latest\n\n

Note: Your /local_data_dir should contain a config and db folder.

Note

\u26a0 The most important part is NOT to use the -d parameter so you see the error when the container crashes. Use this error in your issue description.

"},{"location":"DEBUG_TIPS/#3-check-the-_dev-image-and-open-issues","title":"3. Check the _dev image and open issues","text":"

If possible, check if your issue got fixed in the _dev image before opening a new issue. The container is:

ghcr.io/jokob-sk/netalertx-dev:latest

\u26a0 Please backup your DB and config beforehand!

Please also search open issues.

"},{"location":"DEBUG_TIPS/#4-disable-restart-behavior","title":"4. Disable restart behavior","text":"

To prevent a Docker container from automatically restarting in a Docker Compose file, specify the restart policy as no:

version: '3'\n\nservices:\n  your-service:\n    image: your-image:tag\n    restart: \"no\"\n    # Other service configurations...\n
"},{"location":"DEBUG_TIPS/#5-tmp-mount-directories-to-rule-host-out-permission-issues","title":"5. TMP mount directories to rule host out permission issues","text":"

Try starting the container with all data in non-persistent volumes. If this works, the issue is likely related to the permissions of your persistent data mount locations on your server. See the Permissions guide for details.
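
For example, you could start a throwaway instance without any host mounts (a sketch based on the run command in tip 2 above; all data is discarded when the container stops):

docker run --rm -it --network=host -e PORT=20211 ghcr.io/jokob-sk/netalertx:latest\n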

"},{"location":"DEBUG_TIPS/#6-sharing-application-state","title":"6. Sharing application state","text":"

Sometimes specific log sections are needed to debug issues. The Devices and CurrentScan table data is sometimes needed to figure out what's wrong.

  1. Please set LOG_LEVEL to trace (Disable it once you have the info as this produces big log files).
  2. Wait for the issue to occur.
  3. Search for ================ DEVICES table content ================ in your logs.
  4. Search for ================ CurrentScan table content ================ in your logs.
  5. Open a new issue and post the (redacted) output into the issue description (or send it to the netalertx@gmail.com email if sensitive data is present).
  6. Please set LOG_LEVEL to debug or lower.
"},{"location":"DEBUG_TIPS/#common-issues","title":"Common issues","text":"

See Common issues for additional troubleshooting tips.

"},{"location":"DEVICES_BULK_EDITING/","title":"Editing multiple devices at once","text":"

NetAlertX allows you to mass-edit devices via a CSV export and import feature, or directly in the UI.

"},{"location":"DEVICES_BULK_EDITING/#ui-multi-edit","title":"UI multi edit","text":"

Note

Make sure you have your backups saved and restorable before doing any mass edits. Check Backup strategies.

You can multi-edit devices by selecting them in the Devices view and then clicking the Multi-edit button, or via the Maintenance > Multi-Edit section.

"},{"location":"DEVICES_BULK_EDITING/#csv-bulk-edit","title":"CSV bulk edit","text":"

The database and device structure may change with new releases. When using the CSV import functionality, ensure the format matches what the application expects. To avoid issues, you can first export the devices and review the column formats before importing any custom data.

Note

As always, backup everything, just in case.

  1. In Maintenance > Backup / Restore click the CSV Export button.
  2. A devices.csv is generated in the /config folder
  3. Edit the devices.csv file however you like.

Note

The file contains a list of Devices, including the Network relationships between Network Nodes and connected devices. You can also trigger this by accessing this URL: <server>:20211/php/server/devices.php?action=ExportCSV or via the CSV Backup plugin. (\ud83d\udca1 You can schedule this)

"},{"location":"DEVICES_BULK_EDITING/#file-encoding-format","title":"File encoding format","text":"

Note

Keep Linux line endings (suggested editors: Nano, Notepad++)

"},{"location":"DEVICE_DISPLAY_SETTINGS/","title":"Device Display Settings","text":"

This set of settings allows you to group Devices under different views. The Archived toggle allows you to exclude a Device from most listings and notifications.

"},{"location":"DEVICE_DISPLAY_SETTINGS/#status-colors","title":"Status Colors","text":"
  1. \ud83d\udd0c Online (Green) = A device that is online and is no longer marked as a \"New Device\".
  2. \ud83d\udd0c New (Green) = A newly discovered device that is online and is still marked as a \"New Device\".
  3. \u2716 New (Grey) = Same as No. 2, but the device is now offline.
  4. \u2716 Offline (Grey) = A device that was not detected online in the last scan.
  5. \u26a0 Down (Red) = A device that has \"Alert Down\" marked and has been offline for the time set in the Setting NTFPRCS_alert_down_time.

See also Notification guide.

"},{"location":"DEVICE_HEURISTICS/","title":"Device Heuristics: Icon and Type Guessing","text":"

This module is responsible for inferring the most likely device type and icon based on minimal identifying data like MAC address, vendor, IP, or device name.

It does this using a set of heuristics defined in an external JSON rules file, which it evaluates in priority order.

Note

You can find the full source code of the heuristics module in the device_heuristics.py file.

"},{"location":"DEVICE_HEURISTICS/#json-rule-format","title":"JSON Rule Format","text":"

Rules are defined in a file called device_heuristics_rules.json (located under /back), structured like:

[\n  {\n    \"dev_type\": \"Phone\",\n    \"icon_html\": \"<i class=\\\"fa-brands fa-apple\\\"></i>\",\n    \"matching_pattern\": [\n      { \"mac_prefix\": \"001A79\", \"vendor\": \"Apple\" }\n    ],\n    \"name_pattern\": [\"iphone\", \"pixel\"]\n  }\n]\n

Note

Feel free to raise a PR in case you'd like to add any rules into the device_heuristics_rules.json file. Please place new rules into the correct position and consider the priority of already available rules.

"},{"location":"DEVICE_HEURISTICS/#supported-fields","title":"Supported fields:","text":"Field Type Description dev_type string Type to assign if rule matches (e.g. \"Gateway\", \"Phone\") icon_html string Icon (HTML string) to assign if rule matches. Encoded to base64 at load time. matching_pattern array List of { mac_prefix, vendor } objects for first strict and then loose matching name_pattern array (optional) List of lowercase substrings (used with regex) ip_pattern array (optional) Regex patterns to match IPs

Order in this array defines priority \u2014 rules are checked top-down and short-circuit on first match.

"},{"location":"DEVICE_HEURISTICS/#matching-flow-in-priority-order","title":"Matching Flow (in Priority Order)","text":"

The function guess_device_attributes(...) runs a series of matching functions in strict order:

  1. MAC + Vendor \u2192 match_mac_and_vendor()
  2. Vendor only \u2192 match_vendor()
  3. Name pattern \u2192 match_name()
  4. IP pattern \u2192 match_ip()
  5. Final fallback \u2192 defaults defined in the NEWDEV_devIcon and NEWDEV_devType settings.

Note

The app will try guessing the device type or icon if devType or devIcon are \"\" or \"null\".

"},{"location":"DEVICE_HEURISTICS/#use-of-default-values","title":"Use of default values","text":"

The guessing process runs for every device as long as the current type or icon still matches the default values. Even if earlier heuristics return a match, the system continues evaluating additional clues \u2014 like name or IP \u2014 to try and replace placeholders.

# Still considered a match attempt if current values are defaults\nif (not type_ or type_ == default_type) or (not icon or icon == default_icon):\n    type_, icon = match_ip(ip, default_type, default_icon)\n

In other words: if the type or icon is still \"unknown\" (or matches the default), the system assumes the match isn\u2019t final \u2014 and keeps looking. It stops only when both values are non-default (defaults are defined in the NEWDEV_devIcon and NEWDEV_devType settings).

"},{"location":"DEVICE_HEURISTICS/#match-behavior-per-function","title":"Match Behavior (per function)","text":"

These functions are executed in the following order:

"},{"location":"DEVICE_HEURISTICS/#match_mac_and_vendormac_clean-vendor","title":"match_mac_and_vendor(mac_clean, vendor, ...)","text":"
  • Looks for MAC prefix and vendor substring match
  • Most precise
  • Stops as soon as a match is found
"},{"location":"DEVICE_HEURISTICS/#match_vendorvendor","title":"match_vendor(vendor, ...)","text":"
  • Falls back to substring match on vendor only
  • Ignores rules where mac_prefix is present (ensures this is really a fallback)
"},{"location":"DEVICE_HEURISTICS/#match_namename","title":"match_name(name, ...)","text":"
  • Lowercase name is compared against all name_pattern values using regex
  • Good for user-assigned labels (e.g. \"AP Office\", \"iPhone\")
"},{"location":"DEVICE_HEURISTICS/#match_ipip","title":"match_ip(ip, ...)","text":"
  • If IP is present and matches regex patterns under any rule, it returns that type/icon
  • Usually used for gateways or local IP ranges
"},{"location":"DEVICE_HEURISTICS/#icons","title":"Icons","text":"
  • Each rule can define an icon_html, which is converted to a icon_base64 on load
  • If missing, it falls back to the passed-in default_icon (NEWDEV_devIcon setting)
  • If a match is found but icon is still blank, default is used

TL;DR: Type and icon must both be matched. If only one is matched, the other falls back to the default.

"},{"location":"DEVICE_HEURISTICS/#priority-mechanics","title":"Priority Mechanics","text":"
  • JSON rules are evaluated top-to-bottom
  • Matching is first-hit wins \u2014 no scoring, no weights
  • Rules that are more specific (e.g. exact MAC prefixes) should be listed earlier
"},{"location":"DEVICE_MANAGEMENT/","title":"Device Management","text":"

The Main Info section is where most of the device identifiable information is stored and edited. Some of the information is autodetected via various plugins. Initial values for most of the fields can be specified in the NEWDEV plugin.

Note

You can multi-edit devices by selecting them in the main Devices view, from the Maintenance section, or via the CSV Export functionality under Maintenance. More info can be found in the Devices Bulk-editing docs.

"},{"location":"DEVICE_MANAGEMENT/#main-info","title":"Main Info","text":"
  • MAC: MAC address of the device. Not editable, unless creating a new dummy device.
  • Last IP: IP address of the device. Not editable, unless creating a new dummy device.
  • Name: Friendly device name. Autodetected via various \ud83c\udd8e Name discovery plugins. The app attaches (IP match) if the name is discovered via an IP match rather than a MAC match, which means the name could be incorrect, as IPs might change.
  • Icon: Partially autodetected. Select an existing or add a custom icon. You can also auto-apply the same icon on all devices of the same type.
  • Owner: Device owner (The list is self-populated with existing owners and you can add custom values).
  • Type: Select a device type from the dropdown list (Smartphone, Tablet, Laptop, TV, router, etc.) or add a new device type. If you want the device to act as a Network device (and be able to be a network node in the Network view), select a type under Network Devices or add a new Network Device type in Settings. More information can be found in the Network Setup docs.
  • Vendor: The manufacturing vendor. Automatically updated by NetAlertX when empty or unknown, can be edited.
  • Group: Select a group (Always on, Personal, Friends, etc.) or type your own Group name.
  • Location: Select the location, usually a room, where the device is located (Kitchen, Attic, Living room, etc.) or add a custom Location.
  • Comments: Add any comments for the device, such as a serial number, or maintenance information.

Note

Please note the above usage of the fields are only suggestions. You can use most of these fields for other purposes, such as storing the network interface, company owning a device, or similar.

"},{"location":"DEVICE_MANAGEMENT/#dummy-devices","title":"Dummy devices","text":"

You can create dummy devices from the Devices listing screen.

The MAC field and the Last IP field will then become editable.

Note

You can couple this with the ICMP plugin which can be used to monitor the status of these devices, if they are actual devices reachable with the ping command. If not, you can use a loopback IP address so they appear online, such as 0.0.0.0 or 127.0.0.1.

"},{"location":"DEVICE_MANAGEMENT/#copying-data-from-an-existing-device","title":"Copying data from an existing device.","text":"

To speed up device population you can also copy data from an existing device. This can be done from the Tools tab on the Device details.

"},{"location":"DEV_DEVCONTAINER/","title":"Devcontainer for NetAlertX Guide","text":"

This devcontainer is designed to mirror the production container environment as closely as possible, while providing a rich set of tools for development.

"},{"location":"DEV_DEVCONTAINER/#how-to-get-started","title":"How to Get Started","text":"
  1. Prerequisites:

    • A working Docker installation that can be managed by your user. This can be Docker Desktop or Docker Engine installed via other methods (like the official get-docker script).
    • Visual Studio Code installed.
    • The VS Code Dev Containers extension installed.
  2. Launch the Devcontainer:

    • Clone this repository.
    • Open the repository folder in VS Code.
    • A notification will pop up in the bottom-right corner asking to \"Reopen in Container\". Click it.
    • VS Code will now build the Docker image and connect your editor to the container. Your terminal, debugger, and all tools will now be running inside this isolated environment.
"},{"location":"DEV_DEVCONTAINER/#key-workflows-features","title":"Key Workflows & Features","text":"

Once you're inside the container, everything is set up for you.

"},{"location":"DEV_DEVCONTAINER/#1-services-frontend-backend","title":"1. Services (Frontend & Backend)","text":"

The container's startup script (.devcontainer/scripts/setup.sh) automatically starts the Nginx/PHP frontend and the Python backend. You can restart them at any time using the built-in tasks.

"},{"location":"DEV_DEVCONTAINER/#2-integrated-debugging-just-press-f5","title":"2. Integrated Debugging (Just Press F5!)","text":"

Debugging for both the Python backend and PHP frontend is pre-configured and ready to go.

  • Python Backend (debugpy): The backend automatically starts with a debugger attached on port 5678. Simply open a Python file (e.g., server/__main__.py), set a breakpoint, and press F5 (or select \"Python Backend Debug: Attach\") to connect the debugger.
  • PHP Frontend (Xdebug): Xdebug listens on port 9003. In VS Code, start listening for Xdebug connections and use a browser extension (like \"Xdebug helper\") to start a debugging session for the web UI.
"},{"location":"DEV_DEVCONTAINER/#3-common-tasks-f1-run-task","title":"3. Common Tasks (F1 -> Run Task)","text":"

We've created several VS Code Tasks to simplify common operations. Access them by pressing F1 and typing \"Tasks: Run Task\".

  • Generate Dockerfile: This is important. The actual .devcontainer/Dockerfile is auto-generated. If you need to change the container environment, edit .devcontainer/resources/devcontainer-Dockerfile and then run this task.
  • Re-Run Startup Script: Manually re-runs the .devcontainer/scripts/setup.sh script to re-link files and restart services.
  • Start Backend (Python) / Start Frontend (nginx and PHP-FPM): Manually restart the services if needed.
"},{"location":"DEV_DEVCONTAINER/#4-running-tests","title":"4. Running Tests","text":"

The environment includes pytest. You can run tests directly from the VS Code Test Explorer UI or by running pytest -q in the integrated terminal. The necessary PYTHONPATH is already configured so that tests can correctly import the server modules.

"},{"location":"DEV_DEVCONTAINER/#how-to-maintain-this-devcontainer","title":"How to Maintain This Devcontainer","text":"

The setup is designed to be easy to manage. Here are the core principles:

  • Don't Edit Dockerfile Directly: The main .devcontainer/Dockerfile is a combination of the project's root Dockerfile and a special dev-only stage. To add new tools or dependencies, edit .devcontainer/resources/devcontainer-Dockerfile and then run the Generate Dockerfile task.
  • Build-Time vs. Run-Time Setup:
    • For changes that can be baked into the image (like installing a new package with apk add), add them to the resource Dockerfile.
    • For changes that must happen when the container starts (like creating symlinks, setting permissions, or starting services), use .devcontainer/scripts/setup.sh.
  • Project Conventions: The .github/copilot-instructions.md file is an excellent resource to help AI and humans understand the project's architecture, conventions, and how to use existing helper functions instead of hardcoding values.

This setup provides a powerful and consistent foundation for all current and future contributors to NetAlertX.

"},{"location":"DEV_ENV_SETUP/","title":"Development Environment Setup","text":"

I truly appreciate all contributions! To help keep this project maintainable, this guide provides an overview of project priorities, key design considerations, and overall philosophy. It also includes instructions for setting up your environment so you can start contributing right away.

"},{"location":"DEV_ENV_SETUP/#development-guidelines","title":"Development Guidelines","text":"

Before starting development, please review the following guidelines.

"},{"location":"DEV_ENV_SETUP/#priority-order-highest-to-lowest","title":"Priority Order (Highest to Lowest)","text":"
  1. \ud83d\udd3c Fixing core bugs that lack workarounds
  2. \ud83d\udd35 Adding core functionality that unlocks other features (e.g., plugins)
  3. \ud83d\udd35 Refactoring to enable faster development
  4. \ud83d\udd3d UI improvements (PRs welcome, but low priority)
"},{"location":"DEV_ENV_SETUP/#design-philosophy","title":"Design Philosophy","text":"

The application architecture is designed for extensibility and maintainability. It relies heavily on configuration manifests via plugins and settings to dynamically build the UI and populate the application with data from various sources.

For details, see:

  • Plugins Development (includes video)
  • Settings System

Focus on core functionality and integrate with existing tools rather than reinventing the wheel.

Examples:

  • Using Apprise for notifications instead of implementing multiple separate gateways
  • Implementing regex-based validation instead of one-off validation for each setting

Note

UI changes have lower priority. PRs are welcome, but please keep them small and focused.

"},{"location":"DEV_ENV_SETUP/#development-environment-set-up","title":"Development Environment Set Up","text":"

Tip

There is also a ready-to-use devcontainer available.

The following steps will guide you through setting up your environment for local development and running a custom docker build on your system. For most changes the container doesn't need to be rebuilt, which speeds up development significantly.

Note

Replace /development with the path where your code files will be stored. The default container name is netalertx so there might be a conflict with your running containers.

"},{"location":"DEV_ENV_SETUP/#1-download-the-code","title":"1. Download the code:","text":"
  • mkdir /development
  • cd /development && git clone https://github.com/jokob-sk/NetAlertX.git
"},{"location":"DEV_ENV_SETUP/#2-create-a-dev-env_dev-file","title":"2. Create a DEV .env_dev file","text":"

touch /development/.env_dev && sudo nano /development/.env_dev

The file content should be as follows, with your custom values.

#--------------------------------\n#NETALERTX\n#--------------------------------\nPORT=22222    # make sure this port is unique on your whole network\nDEV_LOCATION=/development/NetAlertX\nAPP_DATA_LOCATION=/volume/docker_appdata\n# Make sure your GRAPHQL_PORT setting has a port that is unique on your whole host network\nAPP_CONF_OVERRIDE={\"GRAPHQL_PORT\":\"22223\"} \n# ALWAYS_FRESH_INSTALL=true # uncommenting this will always delete the content of /config and /db dirs on boot to simulate a fresh install\n
"},{"location":"DEV_ENV_SETUP/#3-create-db-and-config-dirs","title":"3. Create /db and /config dirs","text":"

Create a folder netalertx in the APP_DATA_LOCATION (in this example in /volume/docker_appdata) with 2 subfolders db and config.

  • mkdir /volume/docker_appdata/netalertx
  • mkdir /volume/docker_appdata/netalertx/db
  • mkdir /volume/docker_appdata/netalertx/config
"},{"location":"DEV_ENV_SETUP/#4-run-the-container","title":"4. Run the container","text":"
  • cd /development/NetAlertX && sudo docker-compose --env-file ../.env_dev up

You can then modify the python script without restarting/rebuilding the container every time. Additionally, you can trigger a plugin run via the UI:

"},{"location":"DEV_ENV_SETUP/#tips","title":"Tips","text":"

A quick cheat sheet of useful commands.

"},{"location":"DEV_ENV_SETUP/#removing-the-container-and-image","title":"Removing the container and image","text":"

A command to stop, remove the container and the image (replace netalertx and netalertx-netalertx with the appropriate values)

  • sudo docker container stop netalertx ; sudo docker container rm netalertx ; sudo docker image rm netalertx-netalertx
"},{"location":"DEV_ENV_SETUP/#restart-the-server-backend","title":"Restart the server backend","text":"

Most code changes can be tested without rebuilding the container. When working on the python server backend, you only need to restart the server.

  1. You can usually restart the backend via Maintenance > Logs > Restart server

  2. If the above doesn't work, SSH into the container and kill & restart the main script loop:

    • sudo docker exec -it netalertx /bin/bash

    • pkill -f \"python /app/server\" && python /app/server &

  3. If none of the above work, restart the Docker container. This is usually the last resort, as sometimes the Docker engine becomes unresponsive and the whole engine needs to be restarted.

"},{"location":"DEV_ENV_SETUP/#contributing-pull-requests","title":"Contributing & Pull Requests","text":""},{"location":"DEV_ENV_SETUP/#before-submitting-a-pr-please-ensure","title":"Before submitting a PR, please ensure:","text":"

\u2714 Changes are backward-compatible with existing installs. \u2714 No unnecessary changes are made. \u2714 New features are reusable, not narrowly scoped. \u2714 Features are implemented via plugins if possible.

"},{"location":"DEV_ENV_SETUP/#mandatory-test-cases","title":"Mandatory Test Cases","text":"
  • Fresh install (no DB/config).
  • Existing DB/config compatibility.
  • Notification testing:

    • Email
    • Apprise (e.g., Telegram)
    • Webhook (e.g., Discord)
    • MQTT (e.g., Home Assistant)
  • Updating Settings and their persistence.

  • Updating a Device
  • Plugin functionality.
  • Error log inspection.

Note

Always run all available tests as per the Testing documentation.

"},{"location":"DEV_PORTS_HOST_MODE/","title":"Dev Ports in Host Network Mode","text":"

When using \"--network=host\" in the devcontainer, VS Code's normal port forwarding model doesn't apply. All container ports are already on the host network namespace, so:

  • Listing ports in forwardPorts can cause VS Code to pre-bind or reserve them (conflicts with startup scripts waiting for a free port).
  • The PORTS panel will not auto-detect services reliably, because forwarding isn't occurring.
  • Debugger ports (e.g. Xdebug 9003, Python debugpy 5678) can still be listed safely.
"},{"location":"DEV_PORTS_HOST_MODE/#recommended-pattern","title":"Recommended Pattern","text":"
  1. Only include debugger ports in forwardPorts: jsonc \"forwardPorts\": [5678, 9003]
  2. Do NOT list application service ports (e.g. 20211, 20212) there when in host mode.
  3. Use the helper task to enumerate current bindings by running > Tasks: Run Task \u2192 [Dev Container] List NetAlertX Ports
"},{"location":"DEV_PORTS_HOST_MODE/#port-enumeration-script","title":"Port Enumeration Script","text":"

Script: scripts/list-ports.sh Outputs binding address, PID (if resolvable) and process name for key ports.

You can edit the PORTS variable inside that script to add/remove watched ports.
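
If you only need a quick snapshot without the helper task, a minimal alternative (assuming ss is available inside the devcontainer) is:

ss -ltnp | grep -E '20211|20212|5678|9003'

This lists listening TCP sockets for the default web UI, GraphQL and debugger ports; adjust the port list to match your setup.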

"},{"location":"DEV_PORTS_HOST_MODE/#xdebug-notes","title":"Xdebug Notes","text":"

Set in 99-xdebug.ini:

xdebug.client_host=127.0.0.1\nxdebug.client_port=9003\nxdebug.discover_client_host=1\n

Ensure your IDE is listening on 9003.
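
To quickly check from inside the container whether anything is listening on the debug port (a sketch assuming netcat is installed), you can run:

nc -z 127.0.0.1 9003 && echo open || echo closed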

"},{"location":"DEV_PORTS_HOST_MODE/#troubleshooting","title":"Troubleshooting","text":"Symptom Cause Fix Waiting for port 20211 to free... repeats VS Code pre-bound the port via forwardPorts Remove the port from forwardPorts, rebuild, retry PHP request hangs at start Xdebug trying to connect to unresolved host (host.docker.internal) Use 127.0.0.1 or rely on discovery PORTS panel empty Expected in host mode Use the port enumeration task"},{"location":"DEV_PORTS_HOST_MODE/#future-improvements","title":"Future Improvements","text":"
  • Optional: add a small web status endpoint summarizing runtime ports.
  • Optional: detect host mode in setup.sh and skip the wait loop if the PID using port is the intended process.
"},{"location":"DOCKER_COMPOSE/","title":"NetAlertX and Docker Compose","text":"

Warning

\u26a0\ufe0f Important: The docker-compose has recently changed. Carefully read the Migration guide for detailed instructions.

Great care is taken to ensure NetAlertX meets the needs of everyone while being flexible enough for anyone. This document outlines how you can configure your docker-compose. There are many settings, so we recommend using the Baseline Docker Compose as-is, or modifying it for your system.

Note

The container needs to run in network_mode:\"host\" to access Layer 2 networking such as arp, nmap and others. Because Windows does not support this feature, Windows hosts are not supported.

"},{"location":"DOCKER_COMPOSE/#baseline-docker-compose","title":"Baseline Docker Compose","text":"

There is one baseline for NetAlertX. That's the default security-enabled official distribution.

services:\n  netalertx:\n  #use an environmental variable to set host networking mode if needed\n    container_name: netalertx                       # The name when you docker contiainer ls\n    image: ghcr.io/jokob-sk/netalertx-dev:latest\n    network_mode: ${NETALERTX_NETWORK_MODE:-host}   # Use host networking for ARP scanning and other services\n\n    read_only: true                                 # Make the container filesystem read-only\n    cap_drop:                                       # Drop all capabilities for enhanced security\n      - ALL\n    cap_add:                                        # Add only the necessary capabilities\n      - NET_ADMIN                                   # Required for ARP scanning\n      - NET_RAW                                     # Required for raw socket operations\n      - NET_BIND_SERVICE                            # Required to bind to privileged ports (nbtscan)\n\n    volumes:\n      - type: volume                                # Persistent Docker-managed named volume for config + database\n        source: netalertx_data\n        target: /data                               # `/data/config` and `/data/db` live inside this mount\n        read_only: false\n\n    # Example custom local folder called /home/user/netalertx_data\n    # - type: bind\n    #   source: /home/user/netalertx_data\n    #   target: /data\n    #   read_only: false\n    # ... or use the alternative format\n    # - /home/user/netalertx_data:/data:rw\n\n      - type: bind                                  # Bind mount for timezone consistency\n        source: /etc/localtime\n        target: /etc/localtime\n        read_only: true\n\n      # Mount your DHCP server file into NetAlertX for a plugin to access\n      # - path/on/host/to/dhcp.file:/resources/dhcp.file\n\n    # tmpfs mount consolidates writable state for a read-only container and improves performance\n    # uid=20211 and gid=20211 is the netalertx user inside the container\n    # mode=1700 grants rwx------ permissions to the netalertx user only\n    tmpfs:\n      # Comment out to retain logs between container restarts - this has a server performance impact.\n      - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n\n      # Retain logs - comment out tmpfs /tmp if you want to retain logs between container restarts\n      # Please note if you remove the /tmp mount, you must create and maintain sub-folder mounts.\n      # - /path/on/host/log:/tmp/log\n      # - \"/tmp/api:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n      # - \"/tmp/nginx:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n      # - \"/tmp/run:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n\n    environment:\n      LISTEN_ADDR: ${LISTEN_ADDR:-0.0.0.0}                   # Listen for connections on all interfaces\n      PORT: ${PORT:-20211}                                   # Application port\n      GRAPHQL_PORT: ${GRAPHQL_PORT:-20212}                   # GraphQL API port (passed into APP_CONF_OVERRIDE at runtime)\n  #    NETALERTX_DEBUG: ${NETALERTX_DEBUG:-0}                 # 0=kill all services and restart if any dies. 
1 keeps running dead services.\n\n    # Resource limits to prevent resource exhaustion\n    mem_limit: 2048m            # Maximum memory usage\n    mem_reservation: 1024m      # Soft memory limit\n    cpu_shares: 512             # Relative CPU weight for CPU contention scenarios\n    pids_limit: 512             # Limit the number of processes/threads to prevent fork bombs\n    logging:\n      driver: \"json-file\"       # Use JSON file logging driver\n      options:\n        max-size: \"10m\"         # Rotate log files after they reach 10MB\n        max-file: \"3\"           # Keep a maximum of 3 log files\n\n    # Always restart the container unless explicitly stopped\n    restart: unless-stopped\n\nvolumes:                        # Persistent volume for configuration and database storage\n  netalertx_data:\n

Run or re-run it:

docker compose up --force-recreate\n
"},{"location":"DOCKER_COMPOSE/#customize-with-environmental-variables","title":"Customize with Environmental Variables","text":"

You can override the default settings by passing environmental variables to the docker compose up command.

Example using a single variable:

This command runs NetAlertX on port 8080 instead of the default 20211.

PORT=8080 docker compose up\n

Example using all available variables:

This command demonstrates overriding all primary environmental variables: running with host networking, on port 20211, GraphQL on 20212, and listening on all IPs.

NETALERTX_NETWORK_MODE=host \\\nLISTEN_ADDR=0.0.0.0 \\\nPORT=20211 \\\nGRAPHQL_PORT=20212 \\\nNETALERTX_DEBUG=0 \\\ndocker compose up\n
"},{"location":"DOCKER_COMPOSE/#docker-composeyaml-modifications","title":"docker-compose.yaml Modifications","text":""},{"location":"DOCKER_COMPOSE/#modification-1-use-a-local-folder-bind-mount","title":"Modification 1: Use a Local Folder (Bind Mount)","text":"

By default, the baseline compose file uses a single named volume (netalertx_data) mounted at /data. This single-volume layout is preferred because NetAlertX manages both configuration and the database under /data (for example, /data/config and /data/db) via its web UI. Using one named volume simplifies permissions and portability: Docker manages the storage and NetAlertX manages the files inside /data.

A two-volume layout that mounts /data/config and /data/db separately (for example, netalertx_config and netalertx_db) is supported for backward compatibility and some advanced workflows, but it is an abnormal/legacy layout and not recommended for new deployments.

However, if you prefer to have direct, file-level access to your configuration for manual editing, a \"bind mount\" is a simple alternative. This tells Docker to use a specific folder from your computer (the \"host\") inside the container.

How to make the change:

  1. Choose a location on your computer. For example, /local_data_dir.

  2. Create the subfolders: mkdir -p /local_data_dir/config and mkdir -p /local_data_dir/db.

  3. Edit your docker-compose.yml and find the volumes: section (the one inside the netalertx: service).

  4. Comment out (add a # in front) or delete the type: volume blocks for netalertx_config and netalertx_db.

  5. Add new lines pointing to your local folders.

Before (Using Named Volumes - Preferred):

...\n    volumes:\n      - netalertx_config:/data/config:rw #short-form volume (no /path is a short volume)\n      - netalertx_db:/data/db:rw\n...\n

After (Using a Local Folder / Bind Mount): Make sure to replace /local_data_dir with your actual path. The format is <path_on_your_computer>:<path_inside_container>:<options>.

...\n    volumes:\n#      - netalertx_config:/data/config:rw\n#      - netalertx_db:/data/db:rw\n      - /local_data_dir/config:/data/config:rw\n      - /local_data_dir/db:/data/db:rw\n...\n

Now, any files created by NetAlertX in /data/config will appear in your /local_data_dir/config folder.
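
A quick way to confirm the bind mount works (using the example path from above) is to list the folder on the host after the container has started:

ls -la /local_data_dir/config

You should see app.conf and the other configuration files that NetAlertX creates.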

This same method works for mounting other things, like custom plugins or enterprise NGINX files, as shown in the commented-out examples in the baseline file.

"},{"location":"DOCKER_COMPOSE/#example-configuration-summaries","title":"Example Configuration Summaries","text":"

Here are the essential modifications for common alternative setups.

"},{"location":"DOCKER_COMPOSE/#example-2-external-env-file-for-paths","title":"Example 2: External .env File for Paths","text":"

This method is useful for keeping your paths and other settings separate from your main compose file, making it more portable.

docker-compose.yml changes:

...\nservices:\n  netalertx:\n    environment:\n      - PORT=${PORT}\n      - GRAPHQL_PORT=${GRAPHQL_PORT}\n\n...\n

.env file contents:

PORT=20211\nNETALERTX_NETWORK_MODE=host\nLISTEN_ADDR=0.0.0.0\nGRAPHQL_PORT=20212\n

Run with: sudo docker-compose --env-file /path/to/.env up
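
To check that the variables are picked up before starting, you can render the final compose file (a sketch assuming the Docker Compose v2 CLI):

docker compose --env-file /path/to/.env config | grep -E 'PORT|LISTEN_ADDR'

The output shows the fully substituted values that will be applied.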

"},{"location":"DOCKER_COMPOSE/#example-3-docker-swarm","title":"Example 3: Docker Swarm","text":"

This is for deploying on a Docker Swarm cluster. The key differences from the baseline are the removal of network_mode: from the service, and the addition of deploy: and networks: blocks at both the service and top-level.

Here are the only changes you need to make to the baseline compose file to make it Swarm-compatible.

services:\n  netalertx:\n    ...\n    #    network_mode: ${NETALERTX_NETWORK_MODE:-host} # <-- DELETE THIS LINE\n    ...\n\n    # 2. ADD a 'networks:' block INSIDE the service to connect to the external host network.\n    networks:\n      - outside\n    # 3. ADD a 'deploy:' block to manage the service as a swarm replica.\n    deploy:\n      mode: replicated\n      replicas: 1\n      restart_policy:\n        condition: on-failure\n\n\n# 4. ADD a new top-level 'networks:' block at the end of the file to define 'outside' as the external 'host' network.\nnetworks:\n  outside:\n    external:\n      name: \"host\"\n
"},{"location":"DOCKER_INSTALLATION/","title":"Docker Guide","text":""},{"location":"DOCKER_INSTALLATION/#netalertx-network-scanner-notification-framework","title":"NetAlertX - Network scanner & notification framework","text":"\ud83d\udcd1 Docker guide \ud83d\ude80 Releases \ud83d\udcda Docs \ud83d\udd0c Plugins \ud83e\udd16 Ask AI

Head to https://netalertx.com/ for more gifs and screenshots \ud83d\udcf7.

Note

There is also an experimental \ud83e\uddea bare-metal install method available.

"},{"location":"DOCKER_INSTALLATION/#basic-usage","title":"\ud83d\udcd5 Basic Usage","text":"

Warning

You will have to run the container on the host network and specify SCAN_SUBNETS unless you use other plugin scanners. The initial scan can take a few minutes, so please wait 5-10 minutes for the initial discovery to finish.

docker run -d --rm --network=host \\\n  -v /local_data_dir:/data \\\n  -v /etc/localtime:/etc/localtime \\\n  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \\\n  -e PORT=20211 \\\n  -e APP_CONF_OVERRIDE={\"GRAPHQL_PORT\":\"20214\"} \\\n  ghcr.io/jokob-sk/netalertx:latest\n

See alternative docker-compose examples.

"},{"location":"DOCKER_INSTALLATION/#default-ports","title":"Default ports","text":"Default Description How to override 20211 Port of the web interface -e PORT=20222 20212 Port of the backend API server -e APP_CONF_OVERRIDE={\"GRAPHQL_PORT\":\"20214\"} or via the GRAPHQL_PORT Setting"},{"location":"DOCKER_INSTALLATION/#docker-environment-variables","title":"Docker environment variables","text":"Variable Description Example Value PORT Port of the web interface 20211 LISTEN_ADDR Set the specific IP Address for the listener address for the nginx webserver (web interface). This could be useful when using multiple subnets to hide the web interface from all untrusted networks. 0.0.0.0 LOADED_PLUGINS Default plugins to load. Plugins cannot be loaded with APP_CONF_OVERRIDE, you need to use this variable instead and then specify the plugins settings with APP_CONF_OVERRIDE. [\"PIHOLE\",\"ASUSWRT\"] APP_CONF_OVERRIDE JSON override for settings (except LOADED_PLUGINS). {\"SCAN_SUBNETS\":\"['192.168.1.0/24 --interface=eth1']\",\"GRAPHQL_PORT\":\"20212\"} ALWAYS_FRESH_INSTALL \u26a0 If true will delete the content of the /db & /config folders. For testing purposes. Can be coupled with watchtower to have an always freshly installed netalertx/netalertx-dev image. true

You can override the default GraphQL port setting GRAPHQL_PORT (set to 20212) by using the APP_CONF_OVERRIDE env variable. LOADED_PLUGINS and settings in APP_CONF_OVERRIDE can be specified via the UI as well.
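
A minimal docker run sketch combining both variables (the plugin list and override values below are purely illustrative):

docker run -d --network=host \
  -v /local_data_dir:/data \
  -e PORT=20211 \
  -e LOADED_PLUGINS='[\"ARPSCAN\",\"INTRNT\",\"PIHOLE\"]' \
  -e APP_CONF_OVERRIDE='{\"GRAPHQL_PORT\":\"20214\"}' \
  ghcr.io/jokob-sk/netalertx:latest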

"},{"location":"DOCKER_INSTALLATION/#docker-paths","title":"Docker paths","text":"

Note

See also Backup strategies.

  • :/data (\u2705 required): Folder which needs to contain the /db and /config sub-folders.
  • /etc/localtime:/etc/localtime:ro (\u2705 required): Ensures the timezone is the same as on the server.
  • :/tmp/log: Logs folder, useful for debugging if you have issues setting up the container.
  • :/tmp/api: The API endpoint containing static (but regularly updated) JSON and other files. Path configurable via the NETALERTX_API environment variable.
  • :/app/front/plugins/<plugin>/ignore_plugin: Map a file ignore_plugin to ignore a plugin. Plugins can be soft-disabled via settings. More in the Plugin docs.
  • :/etc/resolv.conf: Use a custom resolv.conf file for better name resolution.
"},{"location":"DOCKER_INSTALLATION/#folder-structure","title":"Folder structure","text":"

Use separate db and config directories, do not nest them:

data\n\u251c\u2500\u2500 config\n\u2514\u2500\u2500 db\n
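
For example, assuming your data folder lives under /local_data_dir on the host:

mkdir -p /local_data_dir/config /local_data_dir/db
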
"},{"location":"DOCKER_INSTALLATION/#permissions","title":"Permissions","text":"

If you are facing permissions issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the /local_data_dir/db and /local_data_dir/config folders (replace local_data_dir with the location where your /db and /config folders are located).

sudo chown -R 20211:20211 /local_data_dir\nsudo chmod -R a+rwx /local_data_dir\n
"},{"location":"DOCKER_INSTALLATION/#initial-setup","title":"Initial setup","text":"
  • If unavailable, the app generates a default app.conf and app.db file on the first run.
  • The preferred way is to manage the configuration via the Settings section in the UI; if the UI is inaccessible, you can modify app.conf in the /data/config/ folder directly.
"},{"location":"DOCKER_INSTALLATION/#setting-up-scanners","title":"Setting up scanners","text":"

You have to specify which network(s) should be scanned. This is done by entering subnets that are accessible from the host. If you use the default ARPSCAN plugin, you have to specify at least one valid subnet and interface in the SCAN_SUBNETS setting. See the documentation on How to set up multiple SUBNETS, VLANs and what are limitations for troubleshooting and more advanced scenarios.
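
As a reference, the value uses the same format as the APP_CONF_OVERRIDE examples in this guide; an illustrative single-subnet value (adjust the subnet and interface to your network) looks like this:

SCAN_SUBNETS=['192.168.1.0/24 --interface=eth0']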

If you are running PiHole you can synchronize devices directly. Check the PiHole configuration guide for details.

Note

You can bulk-import devices via the CSV import method.

"},{"location":"DOCKER_INSTALLATION/#community-guides","title":"Community guides","text":"

You can read or watch several community configuration guides in Chinese, Korean, German, or French.

Please note these might be outdated. Rely on official documentation first.

"},{"location":"DOCKER_INSTALLATION/#common-issues","title":"Common issues","text":"
  • Before creating a new issue, please check if a similar issue was already resolved.
  • Check also common issues and debugging tips.
"},{"location":"DOCKER_INSTALLATION/#support-me","title":"\ud83d\udc99 Support me","text":"
  • Bitcoin: 1N8tupjeCK12qRVU2XrV17WvKK7LCawyZM
  • Ethereum: 0x6e2749Cb42F4411bc98501406BdcD82244e3f9C7

\ud83d\udce7 Email me at netalertx@gmail.com if you want to get in touch or if I should add other sponsorship platforms.

"},{"location":"DOCKER_MAINTENANCE/","title":"The NetAlertX Container Operator's Guide","text":"

Warning

\u26a0\ufe0f Important: The docker-compose has recently changed. Carefully read the Migration guide for detailed instructions.

This guide assumes you are starting with the official docker-compose.yml file provided with the project. We strongly recommend you start with or migrate to this file as your baseline and modify it to suit your specific needs (e.g., changing file paths). While there are many ways to configure NetAlertX, the default file is designed to meet the mandatory security baseline with layer-2 networking capabilities while operating securely and without startup warnings.

This guide provides direct, concise solutions for common NetAlertX administrative tasks. It is structured to help you identify a problem, implement the solution, and understand the details.

"},{"location":"DOCKER_MAINTENANCE/#guide-contents","title":"Guide Contents","text":"
  • Using a Local Folder for Configuration
  • Migrating from a Local Folder to a Docker Volume
  • Applying a Custom Nginx Configuration
  • Mounting Additional Files for Plugins

Note

Other relevant resources - Fixing Permission Issues - Handling Backups - Accessing Application Logs

"},{"location":"DOCKER_MAINTENANCE/#task-using-a-local-folder-for-configuration","title":"Task: Using a Local Folder for Configuration","text":""},{"location":"DOCKER_MAINTENANCE/#problem","title":"Problem","text":"

You want to edit your app.conf and other configuration files directly from your host machine, instead of using a Docker-managed volume.

"},{"location":"DOCKER_MAINTENANCE/#solution","title":"Solution","text":"
  1. Stop the container:

bash docker-compose down 2. (Optional but Recommended) Back up your data using the method in Part 1. 3. Create a local folder on your host machine (e.g., /data/netalertx_config). 4. Edit docker-compose.yml:

  • Comment out the netalertx_config volume entry.
  • Uncomment and set the path for the \"Example custom local folder\" bind mount entry.

yaml ... volumes: # - type: volume # source: netalertx_config # target: /data/config # read_only: false ... # Example custom local folder called /data/netalertx_config - type: bind source: /data/netalertx_config target: /data/config read_only: false ... 5. (Optional) Restore your backup. 6. Restart the container:

bash docker-compose up -d

"},{"location":"DOCKER_MAINTENANCE/#about-this-method","title":"About This Method","text":"

This replaces the Docker-managed volume with a \"bind mount.\" This is a direct mapping between a folder on your host computer (/data/netalertx_config) and a folder inside the container (/data/config), allowing you to edit the files directly.

"},{"location":"DOCKER_MAINTENANCE/#task-migrating-from-a-local-folder-to-a-docker-volume","title":"Task: Migrating from a Local Folder to a Docker Volume","text":""},{"location":"DOCKER_MAINTENANCE/#problem_1","title":"Problem","text":"

You are currently using a local folder (bind mount) for your configuration (e.g., /data/netalertx_config) and want to switch to the recommended Docker-managed volume (netalertx_config).

"},{"location":"DOCKER_MAINTENANCE/#solution_1","title":"Solution","text":"
  1. Stop the container:

bash docker-compose down 2. Edit docker-compose.yml:

  • Comment out the bind mount entry for your local folder.
  • Uncomment the netalertx_config volume entry.

yaml ... volumes: - type: volume source: netalertx_config target: /data/config read_only: false ... # Example custom local folder called /data/netalertx_config # - type: bind # source: /data/netalertx_config # target: /data/config # read_only: false ... 3. (Optional) Initialize the volume:

bash docker-compose up -d && docker-compose down 4. Run the migration command (replace /data/netalertx_config with your actual path):

bash docker run --rm -v netalertx_config:/config -v /data/netalertx_config:/local-config alpine \\ sh -c \"tar -C /local-config -c . | tar -C /config -x\" 5. Restart the container:

bash docker-compose up -d

"},{"location":"DOCKER_MAINTENANCE/#about-this-method_1","title":"About This Method","text":"

This uses a temporary alpine container that mounts both your source folder (/local-config) and destination volume (/config). The tar ... | tar ... command safely copies all files, including hidden ones, preserving structure.
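
To spot-check the copy afterwards, you can list the volume contents with another throwaway container:

docker run --rm -v netalertx_config:/config alpine ls -la /config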

"},{"location":"DOCKER_MAINTENANCE/#task-applying-a-custom-nginx-configuration","title":"Task: Applying a Custom Nginx Configuration","text":""},{"location":"DOCKER_MAINTENANCE/#problem_2","title":"Problem","text":"

You need to override the default Nginx configuration to add features like LDAP, SSO, or custom SSL settings.

"},{"location":"DOCKER_MAINTENANCE/#solution_2","title":"Solution","text":"
  1. Stop the container:

bash docker-compose down 2. Create your custom config file on your host (e.g., /data/my-netalertx.conf). 3. Edit docker-compose.yml:

yaml ... # Use a custom Enterprise-configured nginx config for ldap or other settings - /data/my-netalertx.conf:/tmp/nginx/active-config/netalertx.conf:ro ... 4. Restart the container:

bash docker-compose up -d

"},{"location":"DOCKER_MAINTENANCE/#about-this-method_2","title":"About This Method","text":"

Docker\u2019s bind mount overlays your host file (my-netalertx.conf) on top of the default file inside the container. The container remains read-only, but Nginx reads your file as if it were the default.
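
To confirm the container is actually serving your file (paths as in the example above), a quick check looks like:

docker exec netalertx head -n 5 /tmp/nginx/active-config/netalertx.conf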

"},{"location":"DOCKER_MAINTENANCE/#task-mounting-additional-files-for-plugins","title":"Task: Mounting Additional Files for Plugins","text":""},{"location":"DOCKER_MAINTENANCE/#problem_3","title":"Problem","text":"

A plugin (like DHCPLSS) needs to read a file from your host machine (e.g., /var/lib/dhcp/dhcpd.leases).

"},{"location":"DOCKER_MAINTENANCE/#solution_3","title":"Solution","text":"
  1. Stop the container:

bash docker-compose down 2. Edit docker-compose.yml and add a new line under the volumes: section:

yaml ... volumes: ... # Mount for DHCPLSS plugin - /var/lib/dhcp/dhcpd.leases:/mnt/dhcpd.leases:ro ... 3. Restart the container:

bash docker-compose up -d 4. In the NetAlertX web UI, configure the plugin to read from:

/mnt/dhcpd.leases

"},{"location":"DOCKER_MAINTENANCE/#about-this-method_3","title":"About This Method","text":"

This maps your host file to a new, read-only (:ro) location inside the container. The plugin can then safely read this file without exposing anything else on your host filesystem.
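
You can verify the file is visible from inside the running container, for example:

docker exec netalertx ls -l /mnt/dhcpd.leases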

"},{"location":"DOCKER_PORTAINER/","title":"Deploying NetAlertX in Portainer (via Stacks)","text":"

This guide shows you how to set up NetAlertX using Portainer\u2019s Stacks feature.

"},{"location":"DOCKER_PORTAINER/#1-prepare-your-host","title":"1. Prepare Your Host","text":"

Before deploying, make sure you have a folder on your Docker host for NetAlertX data. Replace APP_FOLDER with your preferred location, for example /local_data_dir here:

mkdir -p /local_data_dir/netalertx/config\nmkdir -p /local_data_dir/netalertx/db\nmkdir -p /local_data_dir/netalertx/log\n
"},{"location":"DOCKER_PORTAINER/#2-open-portainer-stacks","title":"2. Open Portainer Stacks","text":"
  1. Log in to your Portainer UI.
  2. Navigate to Stacks \u2192 Add stack.
  3. Give your stack a name (e.g., netalertx).
"},{"location":"DOCKER_PORTAINER/#3-paste-the-stack-configuration","title":"3. Paste the Stack Configuration","text":"

Copy and paste the following YAML into the Web editor:

services:\n  netalertx:\n    container_name: netalertx\n    # Use this line for stable release\n    image: \"ghcr.io/jokob-sk/netalertx:latest\"\n    # Or, use this for the latest development build\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\"\n    network_mode: \"host\"\n    restart: unless-stopped\n    cap_drop:       # Drop all capabilities for enhanced security\n      - ALL\n    cap_add:        # Re-add necessary capabilities\n      - NET_RAW\n      - NET_ADMIN\n      - NET_BIND_SERVICE\n    volumes:\n      - ${APP_FOLDER}/netalertx/config:/data/config\n      - ${APP_FOLDER}/netalertx/db:/data/db\n      # to sync with system time\n      - /etc/localtime:/etc/localtime:ro\n    tmpfs:\n      # All writable runtime state resides under /tmp; comment out to persist logs between restarts\n      - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n    environment:\n      - PORT=${PORT}\n      - APP_CONF_OVERRIDE=${APP_CONF_OVERRIDE}\n
"},{"location":"DOCKER_PORTAINER/#4-configure-environment-variables","title":"4. Configure Environment Variables","text":"

In the Environment variables section of Portainer, add the following:

  • APP_FOLDER=/local_data_dir (or wherever you created the directories in step 1)
  • PORT=22022 (or another port if needed)
  • APP_CONF_OVERRIDE={\"GRAPHQL_PORT\":\"22023\"} (optional advanced settings, otherwise the backend API server PORT defaults to 20212)
"},{"location":"DOCKER_PORTAINER/#5-ensure-permissions","title":"5. Ensure permissions","text":"

Tip

If you are facing permissions issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the /local_data_dir/db and /local_data_dir/config folders (replace local_data_dir with the location where your /db and /config folders are located).

sudo chown -R 20211:20211 /local_data_dir

sudo chmod -R a+rwx /local_data_dir

"},{"location":"DOCKER_PORTAINER/#6-deploy-the-stack","title":"6. Deploy the Stack","text":"
  1. Scroll down and click Deploy the stack.
  2. Portainer will pull the image and start NetAlertX.
  3. Once running, access the app at:
http://<your-docker-host-ip>:22022\n
"},{"location":"DOCKER_PORTAINER/#7-verify-and-troubleshoot","title":"7. Verify and Troubleshoot","text":"
  • Check logs via Portainer \u2192 Containers \u2192 netalertx \u2192 Logs.
  • Logs are stored under ${APP_FOLDER}/netalertx/log if you enabled that volume.

Once the application is running, configure it by reading the initial setup guide, or troubleshoot common issues.

"},{"location":"DOCKER_SWARM/","title":"Docker Swarm Deployment Guide (IPvlan)","text":"

This guide describes how to deploy NetAlertX in a Docker Swarm environment using an ipvlan network. This enables the container to receive a LAN IP address directly, which is ideal for network monitoring.

"},{"location":"DOCKER_SWARM/#step-1-create-an-ipvlan-config-only-network-on-all-nodes","title":"\u2699\ufe0f Step 1: Create an IPvlan Config-Only Network on All Nodes","text":"

Run this command on each node in the Swarm.

docker network create -d ipvlan \\\n  --subnet=192.168.1.0/24 \\              # \ud83d\udd27 Replace with your LAN subnet\n  --gateway=192.168.1.1 \\                # \ud83d\udd27 Replace with your LAN gateway\n  -o ipvlan_mode=l2 \\\n  -o parent=eno1 \\                       # \ud83d\udd27 Replace with your network interface (e.g., eth0, eno1)\n  --config-only \\\n  ipvlan-swarm-config\n
"},{"location":"DOCKER_SWARM/#step-2-create-the-swarm-scoped-ipvlan-network-one-time-setup","title":"\ud83d\udda5\ufe0f Step 2: Create the Swarm-Scoped IPvlan Network (One-Time Setup)","text":"

Run this on one Swarm manager node only.

docker network create -d ipvlan \\\n  --scope swarm \\\n  --config-from ipvlan-swarm-config \\\n  swarm-ipvlan\n
"},{"location":"DOCKER_SWARM/#step-3-deploy-netalertx-with-docker-compose","title":"\ud83e\uddfe Step 3: Deploy NetAlertX with Docker Compose","text":"

Use the following Compose snippet to deploy NetAlertX with a static LAN IP assigned via the swarm-ipvlan network.

services:\n  netalertx:\n    image: ghcr.io/jokob-sk/netalertx:latest\n...\n    networks:\n      swarm-ipvlan:\n        ipv4_address: 192.168.1.240     # \u26a0\ufe0f Choose a free IP from your LAN\n    deploy:\n      mode: replicated\n      replicas: 1\n      restart_policy:\n        condition: on-failure\n      placement:\n        constraints:\n          - node.role == manager        # \ud83d\udd04 Or use: node.labels.netalertx == true\n\nnetworks:\n  swarm-ipvlan:\n    external: true\n
"},{"location":"DOCKER_SWARM/#notes","title":"\u2705 Notes","text":"
  • The ipvlan setup allows NetAlertX to have a direct IP on your LAN.
  • Replace eno1 with your interface, IP addresses, and volume paths to match your environment.
  • Make sure the assigned IP (192.168.1.240 above) is not in use or managed by DHCP.
  • You may also use a node label constraint instead of node.role == manager for more control.
"},{"location":"FILE_PERMISSIONS/","title":"Managing File Permissions for NetAlertX on a Read-Only Container","text":"

Sometimes, permission issues arise if your existing host directories were created by a previous container running as root or another UID. The container will fail to start with \"Permission Denied\" errors.

Tip

NetAlertX runs in a secure, read-only Alpine-based container under a dedicated netalertx user (UID 20211, GID 20211). All writable paths are either mounted as persistent volumes or tmpfs filesystems. This ensures consistent file ownership and prevents privilege escalation.

Try starting the container with all data in non-persistent volumes. If this works, the issue is likely related to the permissions of your persistent data mount locations on your server.

docker run --rm --network=host \\\n  -v /etc/localtime:/etc/localtime:ro \\\n  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \\\n  -e PORT=20211 \\\n  ghcr.io/jokob-sk/netalertx:latest\n

Warning

The above should be only used as a test - once the container restarts, all data is lost.

"},{"location":"FILE_PERMISSIONS/#writable-paths","title":"Writable Paths","text":"

NetAlertX requires certain paths to be writable at runtime. These paths should be mounted either as host volumes or tmpfs in your docker-compose.yml or docker run command:

  • /data/config: Application configuration. Persistent volume recommended.
  • /data/db: Database files. Persistent volume recommended.
  • /tmp/log: Logs. Lives under /tmp; optional host bind to retain logs.
  • /tmp/api: API cache. Subdirectory of /tmp.
  • /tmp/nginx/active-config: Active nginx configuration override. Mount /tmp (or override the specific file).
  • /tmp/run: Runtime directories for nginx & PHP. Subdirectory of /tmp.
  • /tmp: PHP session save directory. Backed by tmpfs for runtime writes.

Mounting /tmp as tmpfs automatically covers all of its subdirectories (log, api, run, nginx/active-config, etc.).

All these paths will have UID 20211 / GID 20211 inside the container. Files on the host will appear owned by 20211:20211.
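
To confirm the numeric ownership on the host (example path, adjust to your mount location):

ls -ln /local_data_dir

Entries should show 20211 as both the owner and group ID.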

"},{"location":"FILE_PERMISSIONS/#solution","title":"Solution","text":"
  1. Run the container once as root (--user \"0\") to allow it to correct permissions automatically:
docker run -it --rm --name netalertx --user \"0\" \\\n  -v /local_data_dir:/data \\\n  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \\\n  ghcr.io/jokob-sk/netalertx:latest\n
  2. Wait for logs showing permissions being fixed. The container will then hang intentionally.
  3. Press Ctrl+C to stop the container.
  4. Start the container normally with your docker-compose.yml or docker run command.

The container startup script detects root and runs chown -R 20211:20211 on all volumes, fixing ownership for the secure netalertx user.

Tip

If you are facing permissions issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the /local_data_dir/db and /local_data_dir/config folders (replace local_data_dir with the location where your /db and /config folders are located).

sudo chown -R 20211:20211 /local_data_dir

sudo chmod -R a+rwx /local_data_dir

"},{"location":"FILE_PERMISSIONS/#example-docker-composeyml-with-tmpfs","title":"Example: docker-compose.yml with tmpfs","text":"
services:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/jokob-sk/netalertx\"\n    network_mode: \"host\"\n    cap_drop:                                       # Drop all capabilities for enhanced security\n      - ALL\n    cap_add:                                        # Add only the necessary capabilities\n      - NET_ADMIN                                   # Required for ARP scanning\n      - NET_RAW                                     # Required for raw socket operations\n      - NET_BIND_SERVICE                            # Required to bind to privileged ports (nbtscan)\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir:/data\n      - /etc/localtime:/etc/localtime\n    environment:\n      - PORT=20211\n    tmpfs:\n      - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n

This setup ensures all writable paths are either in tmpfs or host-mounted, and the container never writes outside of controlled volumes.

"},{"location":"FIX_OFFLINE_DETECTION/","title":"Troubleshooting: Devices Show Offline When They Are Online","text":"

In some network setups, certain devices may intermittently appear as offline in NetAlertX, even though they are connected and responsive. This issue is often more noticeable with devices that have higher IP addresses within the subnet.

Note

Network presence graph showing increased dropouts before enabling additional ICMP scans, and continuous online presence after following this guide. The sudden spike in dropouts was probably caused by a device software update.

"},{"location":"FIX_OFFLINE_DETECTION/#symptoms","title":"Symptoms","text":"
  • Devices sporadically show as offline in the presence timeline.
  • This behavior often affects devices with higher IPs (e.g., 192.168.1.240+).
  • Presence data appears inconsistent or unreliable despite the device being online.
"},{"location":"FIX_OFFLINE_DETECTION/#cause","title":"Cause","text":"

This issue is typically related to scanning limitations:

  • ARP scan timeouts may prevent full subnet coverage.
  • Sole reliance on ARP can result in missed detections:

  • Some devices (like iPhones) suppress or reject frequent ARP requests.

  • ARP responses may be blocked or delayed due to power-saving features or OS behavior.

  • Scanning frequency conflicts, where devices ignore repeated scans within a short period.

"},{"location":"FIX_OFFLINE_DETECTION/#recommended-fixes","title":"Recommended Fixes","text":"

To improve presence accuracy and reduce false offline states:

"},{"location":"FIX_OFFLINE_DETECTION/#increase-arp-scan-timeout","title":"\u2705 Increase ARP Scan Timeout","text":"

Extend the ARP scanner timeout and DURATION to ensure full subnet coverage:

ARPSCAN_RUN_TIMEOUT=360\nARPSCAN_DURATION=30\n

Adjust based on your network size and device count.

"},{"location":"FIX_OFFLINE_DETECTION/#add-icmp-ping-scanning","title":"\u2705 Add ICMP (Ping) Scanning","text":"

Enable the ICMP scan plugin to complement ARP detection. ICMP is often more reliable for detecting active hosts, especially when ARP fails.

"},{"location":"FIX_OFFLINE_DETECTION/#use-multiple-detection-methods","title":"\u2705 Use Multiple Detection Methods","text":"

A combined approach greatly improves detection robustness:

  • ARPSCAN (default)
  • ICMP (ping)
  • NMAPDEV (nmap)

This hybrid strategy increases reliability, especially for down detection and alerting. See other plugins that might be compatible with your setup. See benefits and drawbacks of individual scan methods in their respective docs.

"},{"location":"FIX_OFFLINE_DETECTION/#results","title":"Results","text":"

After increasing the ARP timeout and adding ICMP scanning (on select IP ranges), users typically report:

  • More consistent presence graphs
  • Fewer false offline events
  • Better coverage across all IP ranges
"},{"location":"FIX_OFFLINE_DETECTION/#summary","title":"Summary","text":"Setting Recommendation ARPSCAN_RUN_TIMEOUT Increase to ensure scans reach all IPs ICMP Scan Enable to detect devices ARP might miss Multi-method Scanning Use a mix of ARP, ICMP, and NMAP-based methods

Tip: Each environment is unique. Consider fine-tuning scan settings based on your network size, device behavior, and desired detection accuracy.

Let us know in the NetAlertX Discussions if you have further feedback or edge cases.

See also Remote Networks for more advanced setups.

"},{"location":"FRONTEND_DEVELOPMENT/","title":"Frontend development","text":"

This page contains tips for frontend development when extending NetAlertX. Guiding principles are:

  1. Maintainability
  2. Extendability
  3. Reusability
  4. Placing more functionality into Plugins and enhancing core Plugins functionality

That means that, when writing code, you should focus on reusing what's available instead of writing quick fixes, and on creating reusable functions instead of bespoke functionality.

"},{"location":"FRONTEND_DEVELOPMENT/#examples","title":"\ud83d\udd0d Examples","text":"

Some examples of how to apply the above:

Example 1

I want to implement a scan function. Options would be:

  1. To add a manual scan functionality to the deviceDetails.php page.
  2. To create a separate page that handles the execution of the scan.
  3. To create a configurable Plugin.

From the above, number 3 would be the most appropriate solution, followed by number 2. Number 1 would be approved only in special circumstances.

Example 2

I want to change the behavior of the application. Options to implement this could be:

  1. Hard-code the changes in the code.
  2. Implement the changes and add settings to influence the behavior in the initialize.py file so the user can adjust these.
  3. Implement the changes and add settings via a setting-only plugin.
  4. Implement the changes in a way so the behavior can be toggled on each plugin so the core capabilities of Plugins get extended.

From the above, number 4 would be the most appropriate solution, followed by number 3. Numbers 1 and 2 would be approved only in special circumstances.

"},{"location":"FRONTEND_DEVELOPMENT/#frontend-tips","title":"\ud83d\udca1 Frontend tips","text":"

Some useful frontend JavaScript functions:

  • getDevDataByMac(macAddress, devicesColumn) - method to retrieve any device data (database column) based on MAC address in the frontend
  • getString(string stringKey) - method to retrieve translated strings in the frontend
  • getSetting(string stringKey) - method to retrieve settings in the frontend

Check the common.js file for more frontend functions.

"},{"location":"HELPER_SCRIPTS/","title":"Community Helper Scripts Overview","text":"

This page provides an overview of community-contributed scripts for NetAlertX. These scripts are not actively maintained and are provided as-is.

"},{"location":"HELPER_SCRIPTS/#community-scripts","title":"Community Scripts","text":"

You can find all scripts in this scripts GitHub folder.

  • New Devices Checkmk Script: Checks for new devices in NetAlertX and reports status to Checkmk. Author: N/A. Version 1.0, released 08-Jan-2025.
  • DB Cleanup Script: Queries and removes old device-related entries from the database. Author: laxduke. Version 1.0, released 23-Dec-2024.
  • OPNsense DHCP Lease Converter: Retrieves DHCP lease data from OPNsense and converts it to dnsmasq format. Author: im-redactd. Version 1.0, released 24-Feb-2025.
"},{"location":"HELPER_SCRIPTS/#important-notes","title":"Important Notes","text":"

Note

These scripts are community-supplied and not actively maintained. Use at your own discretion.

For detailed usage instructions, refer to each script's documentation in each scripts GitHub folder.

"},{"location":"HOME_ASSISTANT/","title":"Home Assistant integration overview","text":"

NetAlertX comes with MQTT support, allowing you to show all detected devices as devices in Home Assistant. It also supplies a collection of stats, such as the number of online devices.

Tip

You can also install NetAlertX as a Home Assistant addon via the alexbelgium/hassio-addons repository. This is only possible if you run a supervised instance of Home Assistant. If not, you can still run NetAlertX in a separate Docker container and follow this guide to configure MQTT.

"},{"location":"HOME_ASSISTANT/#note","title":"\u26a0 Note","text":"
  • Please note that discovery takes roughly 10s per device.
  • Deleting of devices is not handled automatically. Please use MQTT Explorer to delete devices in the broker (Home Assistant), if needed.
  • For optimization reasons, the devices are not always fully synchronized. You can delete Plugin objects as described in the MQTT plugin docs to force a full synchronization.
"},{"location":"HOME_ASSISTANT/#guide","title":"\ud83e\udded Guide","text":"

\ud83d\udca1 This guide was tested only with the Mosquitto MQTT broker

  1. Enable Mosquitto MQTT in Home Assistant by following the documentation

  2. Configure a user name and password on your broker.

  3. Note down the following details that you will need to configure NetAlertX:

    • MQTT host url (usually your Home Assistant IP)
    • MQTT broker port
    • User
    • Password
  4. Open the NetAlertX > Settings > MQTT settings group

    • Enable MQTT
    • Fill in the details from above
    • Fill in remaining settings as per description
    • set MQTT_RUN to schedule or on_notification depending on requirements

"},{"location":"HOME_ASSISTANT/#screenshots","title":"\ud83d\udcf7 Screenshots","text":""},{"location":"HOME_ASSISTANT/#troubleshooting","title":"Troubleshooting","text":"

If you can't see all devices detected, run sudo arp-scan --interface=eth0 192.168.1.0/24 (change these based on your setup, read Subnets docs for details). This command has to be executed in the NetAlertX container, not in the Home Assistant container.

You can access the NetAlertX container via Portainer on your host or via ssh. The container name will be something like addon_db21ed7f_netalertx (you can copy the db21ed7f_netalertx part from the browser when accessing the UI of NetAlertX).

"},{"location":"HOME_ASSISTANT/#accessing-the-netalertx-container-via-ssh","title":"Accessing the NetAlertX container via SSH","text":"
  1. Log into your Home Assistant host via SSH
local@local:~ $ ssh pi@192.168.1.9\n
  2. Find the NetAlertX container name, in this case addon_db21ed7f_netalertx
pi@raspberrypi:~ $ sudo docker container ls | grep netalertx\n06c540d97f67   ghcr.io/alexbelgium/netalertx-armv7:25.3.1                   \"/init\"               6 days ago      Up 6 days (healthy)    addon_db21ed7f_netalertx\n
  3. SSH into the NetAlertX container
pi@raspberrypi:~ $ sudo docker exec -it addon_db21ed7f_netalertx  /bin/sh\n/ #\n
  4. Execute a test arp-scan scan
/ # sudo arp-scan --ignoredups --retry=6 192.168.1.0/24 --interface=eth0\nInterface: eth0, type: EN10MB, MAC: dc:a6:32:73:8a:b1, IPv4: 192.168.1.9\nStarting arp-scan 1.10.0 with 256 hosts (https://github.com/royhills/arp-scan)\n192.168.1.1     74:ac:b9:54:09:fb       Ubiquiti Networks Inc.\n192.168.1.21    74:ac:b9:ad:c3:30       Ubiquiti Networks Inc.\n192.168.1.58    1c:69:7a:a2:34:7b       EliteGroup Computer Systems Co., LTD\n192.168.1.57    f4:92:bf:a3:f3:56       Ubiquiti Networks Inc.\n...\n

If your result doesn't contain results similar to the above, double check your subnet, interface and if you are dealing with an inaccessible network segment, read the Remote networks documentation.

"},{"location":"HW_INSTALL/","title":"How to install NetAlertX on the server hardware","text":"

To download and install NetAlertX on the hardware/server directly use the curl or wget commands at the bottom of this page.

Note

This is an Experimental feature \ud83e\uddea and it relies on community support.

\ud83d\ude4f Looking for maintainers for this installation method \ud83d\ude42 Current community volunteers: - slammingprogramming - ingoratsdorf

There is no guarantee that the install script or any other script will gracefully handle other installed software. Data loss is a possibility; it is recommended to install NetAlertX using the supplied Docker image.

Warning

A warning to the installation method below: Piping to bash is controversial and may be dangerous, as you cannot see the code that's about to be executed on your system.

If you trust this repo, you can download the install script via one of the methods (curl/wget) below and it will do its best to install NetAlertX on your system.

Alternatively you can download the installation script from the repository and check the code yourself.

NetAlertX will be installed in /app and run on port number 20211.

Some facts about what will be changed or installed, and where, by the hardware install setup (this list may not contain everything):

  • dependencies will be installed from the respective system repos
  • required python modules will be installed
  • /app directory will be deleted and newly created
  • /app will contain the whole repository (downloaded by the install script)
  • The default NGINX site /etc/nginx/sites-enabled/default will be disabled (sym-link deleted or backed up to sites-available)
  • /var/www/html/netalertx directory will be deleted and newly created
  • /etc/nginx/conf.d/netalertx.conf will be sym-linked to the appropriate installer location (depending on your system installer script)
  • Some files (IEEE device vendors info, ...) will be created in the directory where the installation script is executed
"},{"location":"HW_INSTALL/#limitations","title":"Limitations","text":"
  • No system service is provided. NetAlertX must be started using /app/install/<system>/start.<system>.sh.
  • No checks for other running software are done.
  • Only tested to work on the system listed in the install directory.
  • EXPERIMENTAL and not the recommended way to install NetAlertX.

Tip

If the below fails try grabbing and installing one of the previous releases and run the installation from the zip package.

These commands will download the install.debian12.sh script from the GitHub repository, make it executable with chmod, and then run it using ./install.debian12.sh.

Make sure you have the necessary permissions to execute the script.

"},{"location":"HW_INSTALL/#debian-12-bookworm","title":"\ud83d\udce5 Debian 12 (Bookworm)","text":""},{"location":"HW_INSTALL/#installation-via-curl","title":"Installation via curl","text":"
curl -o install.debian12.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/debian12/install.debian12.sh && sudo chmod +x install.debian12.sh && sudo ./install.debian12.sh\n
"},{"location":"HW_INSTALL/#installation-via-wget","title":"Installation via wget","text":"
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/debian12/install.debian12.sh -O install.debian12.sh && sudo chmod +x install.debian12.sh && sudo ./install.debian12.sh\n
"},{"location":"HW_INSTALL/#ubuntu-24-noble-numbat","title":"\ud83d\udce5 Ubuntu 24 (Noble Numbat)","text":"

Note

Maintained by ingoratsdorf

"},{"location":"HW_INSTALL/#installation-via-curl_1","title":"Installation via curl","text":"
curl -o install.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.sh && sudo chmod +x install.sh && sudo ./install.sh\n
"},{"location":"HW_INSTALL/#installation-via-wget_1","title":"Installation via wget","text":"
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.sh -O install.sh && sudo chmod +x install.sh && sudo ./install.sh\n
"},{"location":"HW_INSTALL/#bare-metal-proxmox","title":"\ud83d\udce5 Bare Metal - Proxmox","text":"

Note

Use this on a clean LXC/VM for Debian 13 or Ubuntu 24. The script will detect the OS and build accordingly. Maintained by JVKeller

"},{"location":"HW_INSTALL/#installation-via-wget_2","title":"Installation via wget","text":"
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/proxmox/proxmox-install-netalertx.sh -O proxmox-install-netalertx.sh && chmod +x proxmox-install-netalertx.sh && ./proxmox-install-netalertx.sh\n
"},{"location":"ICONS/","title":"Icons","text":""},{"location":"ICONS/#icons-overview","title":"Icons overview","text":"

Icons are used to visually distinguish devices in the app in most of the device listing tables and the network tree.

"},{"location":"ICONS/#icons-support","title":"Icons Support","text":"

Two types of icons are supported:

  • Free Font Awesome icons (up to v6.4.0)
  • SVG icons (for example from iconify.design)

You can assign icons individually on each device in the Details tab.

"},{"location":"ICONS/#adding-new-icons","title":"Adding new icons","text":"
  1. Get an SVG or Font Awesome HTML code

Copying the SVG (for example from iconify.design):

Copying the HTML code from Font Awesome.

  2. Navigate to the device you want to use the icon on and click the \"+\" icon:

  3. Paste in the copied HTML or SVG code and click \"OK\":

  1. \"Save\" the device

Note

If you want to mass-apply an icon to all devices of the same device type (Field: Type), you can click the mass-copy button (next to the \"+\" button). A confirmation prompt is displayed. If you proceed, the icons of all devices set to the same device type as the current device will be overwritten with the current device's icon.

  • The dropdown contains all icons already used in the app for device icons. You might need to navigate away or refresh the page once you add a new icon.
"},{"location":"ICONS/#font-awesome-pro-icons","title":"Font Awesome Pro icons","text":"

If you own the premium package of Font Awesome icons you can mount it in your Docker container the following way:

/font-awesome:/app/front/lib/font-awesome:ro\n

You can use the full range of Font Awesome icons afterwards.

"},{"location":"INITIAL_SETUP/","title":"\u26a1 Quick Start Guide","text":"

Get NetAlertX up and running in a few simple steps.

"},{"location":"INITIAL_SETUP/#1-configure-scanner-plugins","title":"1. Configure Scanner Plugin(s)","text":"

Tip

Enable additional plugins under Settings \u2192 LOADED_PLUGINS. Make sure to save your changes and reload the page to activate them.

Initial configuration: ARPSCAN, INTRNT

Note

ARPSCAN and INTRNT scan the current network. You can complement them with other \ud83d\udd0d dev scanner plugins like NMAPDEV, or import devices using \ud83d\udce5 importer plugins. See the Subnet & VLAN Setup Guide and Remote Networks for advanced configurations.

"},{"location":"INITIAL_SETUP/#2-choose-a-publisher-plugin","title":"2. Choose a Publisher Plugin","text":"

Initial configuration: SMTP

Note

Configure your SMTP settings or enable additional \u25b6\ufe0f publisher plugins to send alerts. For more flexibility, try \ud83d\udcda _publisher_apprise, which supports over 80 notification services.

"},{"location":"INITIAL_SETUP/#3-set-up-a-network-topology-diagram","title":"3. Set Up a Network Topology Diagram","text":"

Initial configuration: The app auto-selects a root node (MAC internet) and attempts to identify other network devices by vendor or name.

Note

Visualize and manage your network using the Network Guide. Some plugins (e.g., UNFIMP) build the topology automatically, or you can use Custom Workflows to generate it based on your own rules.

"},{"location":"INITIAL_SETUP/#4-configure-notifications","title":"4. Configure Notifications","text":"

Initial configuration: Notifies on new_devices, down_devices, and events as defined in NTFPRCS_INCLUDED_SECTIONS.

Note

Notification settings support global, plugin-specific, and per-device rules. For fine-tuning, refer to the Notification Guide.

"},{"location":"INITIAL_SETUP/#5-set-up-workflows","title":"5. Set Up Workflows","text":"

Initial configuration: N/A

Note

Automate responses to device status changes, group management, topology updates, and more. See the Workflows Guide to simplify your network operations.

"},{"location":"INITIAL_SETUP/#6-backup-your-configuration","title":"6. Backup Your Configuration","text":"

Initial configuration: The CSVBCKP plugin creates a daily backup to /config/devices.csv.

Note

For a complete backup strategy, follow the Backup Guide.

"},{"location":"INITIAL_SETUP/#7-optional-create-custom-plugins","title":"7. (Optional) Create Custom Plugins","text":"

Initial configuration: N/A

Note

Build your own scanner, importer, or publisher plugin. See the Plugin Development Guide and included video tutorials.

"},{"location":"INITIAL_SETUP/#recommended-guides","title":"\ud83d\udcc1 Recommended Guides","text":"
  • \ud83d\udcd8 PiHole Setup Guide
  • \ud83d\udcd8 CSV Import Method
  • \ud83d\udcd8 Community Guides (Chinese, Korean, German, French)
"},{"location":"INITIAL_SETUP/#troubleshooting-help","title":"\ud83d\udee0\ufe0f Troubleshooting & Help","text":"

Before opening a new issue:

  • \ud83d\udcd8 Common Issues
  • \ud83e\uddf0 Debugging Tips
  • \u2705 Browse resolved GitHub issues


"},{"location":"INSTALLATION/","title":"Installation","text":""},{"location":"INSTALLATION/#installation-options","title":"Installation options","text":"

NetAlertX can be installed in several ways. The best supported option is Docker, followed by a supervised Home Assistant instance, the Unraid app, and lastly bare metal.

  • [Installation] Docker (recommended)
  • [Installation] Home Assistant
  • [Installation] Unraid App
  • [Installation] Bare metal (experimental - looking for maintainers)
"},{"location":"INSTALLATION/#help","title":"Help","text":"

If you are facing issues, please spend a few minutes searching.

  • Check common issues
  • Have a look at Community guides
  • Search closed or open issues or discussions
  • Check Discord

Note

If you can't find a solution anywhere, ask in Discord if you think it's a quick question, otherwise open a new issue. Please fill in as much as possible to speed up the help process.

"},{"location":"LOGGING/","title":"Logging","text":"

NetAlertX comes with several logs that help identify application issues, including nginx, app, and plugin logs. For plugin-specific log debugging, please read the Debug Plugins guide.

Note

When debugging any issue, increase the LOG_LEVEL Setting as per the Debug tips documentation.

"},{"location":"LOGGING/#main-logs","title":"Main logs","text":"

You can find most of the logs exposed in the UI under Maintenance -> Logs.

If the UI is inaccessible, you can access them under /tmp/log.

In Maintenance -> Logs you can Purge logs, download the full log file, or Filter the lines by a substring to narrow down your search.
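If you prefer the command line, the same files can be inspected from the host; this is a quick sketch assuming a Docker install with the container named netalertx (the app.log file name is the main application log referenced elsewhere in these docs, and its exact location under /tmp/log is an assumption):

# List the log files exposed under /tmp/log inside the container
docker exec netalertx ls -l /tmp/log

# Follow the main application log
docker exec netalertx tail -f /tmp/log/app.log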

"},{"location":"LOGGING/#plugin-logging","title":"Plugin logging","text":"

If a Plugin supplies data to the main app, it does so either via a SQL query or via a script that updates the last_result.log file in the plugin log folder (app/log/plugins/). These files are processed at the end of the scan and deleted on successful processing.

In most cases the data is then displayed in the application under Integrations -> Plugins (or Device -> Plugins if the plugin supplies device-specific data).

"},{"location":"LOGGING/#viewing-logs-on-the-file-system","title":"Viewing Logs on the File System","text":"

By default, you will not find persistent log files on the filesystem. The container is read-only and writes logs to a temporary in-memory filesystem (tmpfs) for security and performance. The application follows container best practices by writing all logs to the standard output (stdout) and standard error (stderr) streams. Docker's logging driver (set in docker-compose.yml) captures these streams automatically, allowing you to access them with the docker logs <container_name> command.

  • To see all logs since the last restart:

bash docker logs netalertx

  • To watch the logs live (live feed):

bash docker logs -f netalertx
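Standard shell tools can be piped onto this output to narrow it down; the container name netalertx is assumed:

# Show only lines mentioning errors, case-insensitively
docker logs netalertx 2>&1 | grep -i error

# Show everything logged in the last 30 minutes
docker logs --since 30m netalertx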

"},{"location":"LOGGING/#enabling-persistent-file-based-logs","title":"Enabling Persistent File-Based Logs","text":"

The default logs are erased every time the container restarts because they are stored in temporary in-memory storage (tmpfs). If you need to keep a persistent, file-based log history, follow the steps below.

Note

This might lead to performance degradation so this approach is only suggested when actively debugging issues. See the Performance optimization documentation for details.

  1. Stop the container:

bash docker-compose down

  2. Edit your docker-compose.yml file:

  • Comment out the /tmp/log line under the tmpfs: section.

  • Uncomment the \"Retain logs\" line under the volumes: section and set your desired host path.

yaml ...\ntmpfs:\n  # - \"/tmp/log:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n...\nvolumes:\n  ...\n  # Retain logs - comment out tmpfs /tmp/log if you want to retain logs between container restarts\n  - /home/adam/netalertx_logs:/tmp/log\n...

  3. Restart the container:

bash docker-compose up -d

This change stops Docker from mounting a temporary in-memory volume at /tmp/log. Instead, it bind mounts a persistent folder from your host computer (e.g., /home/adam/netalertx_logs) to that same location inside the container.
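Once the container is back up, you can confirm the bind mount is working by checking the host folder you chose (the path below matches the snippet above; replace it with yours):

# Log files written by the container should now appear here and survive restarts
ls -l /home/adam/netalertx_logs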

"},{"location":"MIGRATION/","title":"Migration","text":"

When upgrading from older versions of NetAlertX (or PiAlert by jokob-sk), follow the migration steps below to ensure your data and configuration are properly transferred.

Tip

It's always important to have a backup strategy in place.

"},{"location":"MIGRATION/#migration-scenarios","title":"Migration scenarios","text":"
  • You are running PiAlert (by jokob-sk) \u2192 Read the 1.1 Migration from PiAlert to NetAlertX v25.5.24

  • You are running NetAlertX (by jokob-sk) 25.5.24 or older \u2192 Read the 1.2 Migration from NetAlertX v25.5.24

  • You are running NetAlertX (by jokob-sk) (v25.6.7 to v25.10.1) \u2192 Read the 1.3 Migration from NetAlertX v25.10.1

"},{"location":"MIGRATION/#10-manual-migration","title":"1.0 Manual Migration","text":"

You can migrate data manually, for example by exporting and importing devices using the CSV import method.

"},{"location":"MIGRATION/#11-migration-from-pialert-to-netalertx-v25524","title":"1.1 Migration from PiAlert to NetAlertX v25.5.24","text":""},{"location":"MIGRATION/#steps","title":"STEPS:","text":"

The application will automatically migrate the database, configuration, and all device information. A banner message will appear at the top of the web UI reminding you to update your Docker mount points.

  1. Stop the container
  2. Back up your setup
  3. Update the Docker file mount locations in your docker-compose.yml or docker run command (See below New Docker mount locations).
  4. Rename the DB and conf files to app.db and app.conf and place them in the appropriate location.
  5. Start the container

Tip

If you have trouble accessing past backups, config or database files you can copy them into the newly mapped directories, for example by running this command in the container: cp -r /data/config /home/pi/pialert/config/old_backup_files. This should create a folder in the config directory called old_backup_files containing all the files in that location. Another approach is to map the old location and the new one at the same time to copy things over.
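If you would rather not open a shell inside the container, the same copy can be run from the host with docker exec; this sketch assumes the container is named netalertx:

# Run the copy command from the Tip above without entering the container
docker exec netalertx cp -r /data/config /home/pi/pialert/config/old_backup_files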

"},{"location":"MIGRATION/#new-docker-mount-locations","title":"New Docker mount locations","text":"

The internal application path in the container has changed from /home/pi/pialert to /app. Update your volume mounts as follows:

| Old mount point | New mount point |
|---|---|
| /home/pi/pialert/config | /data/config |
| /home/pi/pialert/db | /data/db |

If you were mounting files directly, please note the file names have changed:

| Old file name | New file name |
|---|---|
| pialert.conf | app.conf |
| pialert.db | app.db |
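If you map folders (see Example 1 below), the renames from step 4 can be done on the host before starting the new container; /local_data_dir is the placeholder host path used in the examples:

# Rename the old PiAlert files to the new NetAlertX names
mv /local_data_dir/config/pialert.conf /local_data_dir/config/app.conf
mv /local_data_dir/db/pialert.db /local_data_dir/db/app.db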

Note

The application automatically creates symlinks from the old database and config locations to the new ones, so data loss should not occur. Read the backup strategies guide to backup your setup.

"},{"location":"MIGRATION/#examples","title":"Examples","text":"

Examples of docker files with the new mount points.

"},{"location":"MIGRATION/#example-1-mapping-folders","title":"Example 1: Mapping folders","text":""},{"location":"MIGRATION/#old-docker-composeyml","title":"Old docker-compose.yml","text":"
services:\n  pialert:\n    container_name: pialert\n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\"\n    image: \"jokobsk/pialert:latest\"\n    network_mode: \"host\"\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir/config:/home/pi/pialert/config\n      - /local_data_dir/db:/home/pi/pialert/db\n      # (optional) useful for debugging if you have issues setting up the container\n      - /local_data_dir/logs:/home/pi/pialert/front/log\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n
"},{"location":"MIGRATION/#new-docker-composeyml","title":"New docker-compose.yml","text":"
services:\n  netalertx:                                  # \ud83c\udd95 This has changed\n    container_name: netalertx                 # \ud83c\udd95 This has changed\n    image: \"ghcr.io/jokob-sk/netalertx:25.5.24\"         # \ud83c\udd95 This has changed\n    network_mode: \"host\"\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir/config:/data/config         # \ud83c\udd95 This has changed\n      - /local_data_dir/db:/data/db                 # \ud83c\udd95 This has changed\n      # (optional) useful for debugging if you have issues setting up the container\n      - /local_data_dir/logs:/tmp/log        # \ud83c\udd95 This has changed\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n
"},{"location":"MIGRATION/#example-2-mapping-files","title":"Example 2: Mapping files","text":"

Note

The recommendation is to map folders as in Example 1; map files directly only when needed.

"},{"location":"MIGRATION/#old-docker-composeyml_1","title":"Old docker-compose.yml","text":"
services:\n  pialert:\n    container_name: pialert\n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\"\n    image: \"jokobsk/pialert:latest\"\n    network_mode: \"host\"\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir/config/pialert.conf:/home/pi/pialert/config/pialert.conf\n      - /local_data_dir/db/pialert.db:/home/pi/pialert/db/pialert.db\n      # (optional) useful for debugging if you have issues setting up the container\n      - /local_data_dir/logs:/home/pi/pialert/front/log\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n
"},{"location":"MIGRATION/#new-docker-composeyml_1","title":"New docker-compose.yml","text":"
services:\n  netalertx:                                  # \ud83c\udd95 This has changed\n    container_name: netalertx                 # \ud83c\udd95 This has changed\n    image: \"ghcr.io/jokob-sk/netalertx:25.5.24\"         # \ud83c\udd95 This has changed\n    network_mode: \"host\"\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir/config/app.conf:/data/config/app.conf # \ud83c\udd95 This has changed\n      - /local_data_dir/db/app.db:/data/db/app.db             # \ud83c\udd95 This has changed\n      # (optional) useful for debugging if you have issues setting up the container\n      - /local_data_dir/logs:/tmp/log                  # \ud83c\udd95 This has changed\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n
"},{"location":"MIGRATION/#12-migration-from-netalertx-v25524","title":"1.2 Migration from NetAlertX v25.5.24","text":"

Versions before v25.10.1 require an intermediate migration through v25.5.24 to ensure database compatibility. Skipping this step may cause compatibility issues due to database schema changes introduced after v25.5.24.

"},{"location":"MIGRATION/#steps_1","title":"STEPS:","text":"
  1. Stop the container
  2. Back up your setup
  3. Upgrade to v25.5.24 by pinning the release version (See Examples below)
  4. Start the container and verify everything works as expected.
  5. Stop the container
  6. Upgrade to v25.10.1 by pinning the release version (See Examples below)
  7. Start the container and verify everything works as expected.
"},{"location":"MIGRATION/#examples_1","title":"Examples","text":"

Examples of docker files with the tagged version.

"},{"location":"MIGRATION/#example-1-mapping-folders_1","title":"Example 1: Mapping folders","text":""},{"location":"MIGRATION/#docker-composeyml-changes","title":"docker-compose.yml changes","text":"
services:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/jokob-sk/netalertx:25.5.24\"         # \ud83c\udd95 This is important\n    network_mode: \"host\"\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir/config:/data/config\n      - /local_data_dir/db:/data/db\n      # (optional) useful for debugging if you have issues setting up the container\n      - /local_data_dir/logs:/tmp/log\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n
services:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/jokob-sk/netalertx:25.10.1\"         # \ud83c\udd95 This is important\n    network_mode: \"host\"\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir/config:/data/config\n      - /local_data_dir/db:/data/db\n      # (optional) useful for debugging if you have issues setting up the container\n      - /local_data_dir/logs:/tmp/log\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n
"},{"location":"MIGRATION/#13-migration-from-netalertx-v25101","title":"1.3 Migration from NetAlertX v25.10.1","text":"

Starting from v25.10.1, the container uses a more secure, read-only runtime environment, which requires all writable paths (e.g., logs, API cache, temporary data) to be mounted as tmpfs or permanent writable volumes, with sufficient access permissions. The data location has also changed from /app/db and /app/config to /data/db and /data/config. See detailed steps below.

"},{"location":"MIGRATION/#steps_2","title":"STEPS:","text":"
  1. Stop the container
  2. Back up your setup
  3. Upgrade to v25.10.1 by pinning the release version (See the example below)
services:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/jokob-sk/netalertx:25.10.1\"         # \ud83c\udd95 This is important\n    network_mode: \"host\"\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir/config:/app/config\n      - /local_data_dir/db:/app/db\n      # (optional) useful for debugging if you have issues setting up the container\n      - /local_data_dir/logs:/tmp/log\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n
  4. Start the container and verify everything works as expected.
  5. Stop the container.
  6. Update the docker-compose.yml as per example below.
services:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/jokob-sk/netalertx\"  # \ud83c\udd95 This has changed\n    network_mode: \"host\"\n    cap_drop:                # \ud83c\udd95 New line\n      - ALL                  # \ud83c\udd95 New line\n    cap_add:                 # \ud83c\udd95 New line\n      - NET_RAW              # \ud83c\udd95 New line\n      - NET_ADMIN            # \ud83c\udd95 New line\n      - NET_BIND_SERVICE     # \ud83c\udd95 New line\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir:/data  # \ud83c\udd95 This folder contains your /db and /config directories and the parent changed from /app to /data\n      # Ensuring the timezone is the same as on the server - make sure also the TIMEZONE setting is configured\n      - /etc/localtime:/etc/localtime:ro    # \ud83c\udd95 New line\n    environment:\n      - PORT=20211\n    # \ud83c\udd95 New \"tmpfs\" section START \ud83d\udd3d\n    tmpfs:\n      # All writable runtime state resides under /tmp; comment out to persist logs between restarts\n      - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n    # \ud83c\udd95 New \"tmpfs\" section END  \ud83d\udd3c\n
  7. Perform a one-off migration to the latest netalertx image and 20211 user.

Note

The examples below assume your /config and /db folders are stored in local_data_dir. Replace this path with your actual configuration directory. netalertx is the container name, which might differ in your setup.

Automated approach:

Run the container with the --user \"0\" parameter. Please note that some systems will require the manual approach below.

docker run -it --rm --name netalertx --user \"0\" \\\n  -v /local_data_dir/config:/app/config \\\n  -v /local_data_dir/db:/app/db \\\n  -v /local_data_dir:/data \\\n  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \\\n  ghcr.io/jokob-sk/netalertx:latest\n

Stop the container and run it as you would normally.

Manual approach:

Use the manual approach if the Automated approach fails. Execute the below commands:

sudo chown -R 20211:20211 /local_data_dir\nsudo chmod -R a+rwx /local_data_dir\n
  8. Start the container and verify everything works as expected.
"},{"location":"NAME_RESOLUTION/","title":"Device Name Resolution","text":"

Name resolution in NetAlertX relies on multiple plugins to resolve device names from IP addresses. If you are seeing (name not found) as device names, follow these steps to diagnose and fix the issue.

Tip

Before proceeding, make sure Reverse DNS is enabled on your network. You can control how names are handled and cleaned using the NEWDEV_NAME_CLEANUP_REGEX setting. To auto-update Fully Qualified Domain Names (FQDN), enable the REFRESH_FQDN setting.

"},{"location":"NAME_RESOLUTION/#required-plugins","title":"Required Plugins","text":"

For best results, ensure the following name resolution plugins are enabled:

  • AVAHISCAN \u2013 Uses mDNS/Avahi to resolve local network names.
  • NBTSCAN \u2013 Queries NetBIOS to find device names.
  • NSLOOKUP \u2013 Performs standard DNS lookups.
  • DIGSCAN \u2013 Performs Name Resolution with the Dig utility (DNS).

You can check which plugins are active in your Settings section and enable any that are missing.

There are other plugins that can supply device names as well, but they rely on bespoke hardware and services. See Plugins overview for details and look for plugins with name discovery (\ud83c\udd8e) features.

"},{"location":"NAME_RESOLUTION/#checking-logs","title":"Checking Logs","text":"

If names are not resolving, check the logs for errors or timeouts.

See how to explore logs in the Logging guide.

Logs will show which plugins attempted resolution and any failures encountered.

"},{"location":"NAME_RESOLUTION/#adjusting-timeout-settings","title":"Adjusting Timeout Settings","text":"

If resolution is slow or failing due to timeouts, increase the timeout settings in your configuration, for example:

NSLOOKUP_RUN_TIMEOUT = 30

Raising the timeout may help if your network has high latency or slow DNS responses.
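To rule out the network itself, you can test reverse DNS manually from the host; 192.168.1.10 is just a placeholder IP of a device on your network:

# Both lookups should return the device's host name if reverse DNS is configured
nslookup 192.168.1.10
dig -x 192.168.1.10 +short

If these return nothing, the name resolution plugins have nothing to work with, and the fix lies with your DNS/DHCP server rather than with plugin timeouts.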

"},{"location":"NAME_RESOLUTION/#checking-plugin-objects","title":"Checking Plugin Objects","text":"

Each plugin stores results in its respective object. You can inspect these objects to see if they contain valid name resolution data.

See Logging guide and Debug plugins guides for details.

If the object contains no results, the issue may be with DNS settings or network access.

"},{"location":"NAME_RESOLUTION/#improving-name-resolution","title":"Improving name resolution","text":"

For more details on how to improve name resolution, refer to the Reverse DNS Documentation.

"},{"location":"NETWORK_TREE/","title":"Network Topology","text":""},{"location":"NETWORK_TREE/#how-to-set-up-your-network-page","title":"How to Set Up Your Network Page","text":"

The Network page lets you map how devices connect \u2014 visually and logically. It\u2019s especially useful for planning infrastructure, assigning parent-child relationships, and spotting gaps.

To get started, you\u2019ll need to define at least one root node and mark certain devices as network nodes (like Switches or Routers).

Start by creating a root device with the MAC address Internet, if the application didn\u2019t create one already. This special MAC address (Internet) is required for the root network node \u2014 no other value is currently supported. Set its Type to a valid network type \u2014 such as Router or Gateway.

Tip

If you don\u2019t have one, use the Create new device button on the Devices page to add a root device.

"},{"location":"NETWORK_TREE/#quick-setup","title":"\u26a1 Quick Setup","text":"
  1. Open the device you want to use as a network node (e.g. a Switch).
  2. Set its Type to one of the following: AP, Firewall, Gateway, PLC, Powerline, Router, Switch, USB LAN Adapter, USB WIFI Adapter, WLAN (Or add custom types under Settings \u2192 General \u2192 NETWORK_DEVICE_TYPES.)
  3. Save the device.
  4. Go to the Network page \u2014 supported device types will appear as tabs.
  5. Use the Assign button to connect unassigned devices to a network node.
  6. If the Port is 0 or empty, a Wi-Fi icon is shown. Otherwise, an Ethernet icon appears.

Note

Use bulk editing with CSV Export to fix Internet root assignments or update many devices at once.

"},{"location":"NETWORK_TREE/#example-setting-up-a-raspberrypi-as-a-switch","title":"Example: Setting up a raspberrypi as a Switch","text":"

Let\u2019s walk through setting up a device named raspberrypi to act as a network Switch that other devices connect through.

"},{"location":"NETWORK_TREE/#1-set-device-type-and-parent","title":"1. Set Device Type and Parent","text":"
  • Go to the Devices page
  • Open the device detail view for raspberrypi
  • In the Type dropdown, select Switch
  • Optionally assign a Parent Node (where this device connects to) and the Relationship type of the connection. The nic relationship type can affect parent notifications \u2014 see the setting description and Notifications documentation for more.
  • A device\u2019s parent MAC will be overwritten by plugins if its current value is any of the following: \"null\", \"(unknown)\", \"(Unknown)\".
  • If you want plugins to be able to overwrite the parent value (for example, when mixing plugins that do not provide parent MACs like ARPSCAN with those that do, like UNIFIAPI), you must set the setting NEWDEV_devParentMAC to None.

Note

Only certain device types can act as network nodes: AP, Firewall, Gateway, Hypervisor, PLC, Powerline, Router, Switch, USB LAN Adapter, USB WIFI Adapter, WLAN. You can add custom types via the NETWORK_DEVICE_TYPES setting.

  • Click Save
"},{"location":"NETWORK_TREE/#2-confirm-the-device-appears-as-a-network-node","title":"2. Confirm The Device Appears as a Network Node","text":"

You can confirm that raspberrypi now acts as a network device in two places:

  • Navigate to a different device and verify that raspberrypi now appears as an option for a Parent Node:

  • Go to the Network page \u2014 you'll now see a raspberrypi tab, meaning it's recognized as a network node (Switch):

  • You can now assign other devices to it.
"},{"location":"NETWORK_TREE/#3-assign-connected-devices","title":"3. Assign Connected Devices","text":"
  • Use the Assign button to link other devices (e.g. PCs) to raspberrypi.
  • After assigning, connected devices will appear beneath the raspberrypi switch node.
  • Relationship lines may vary in color based on the selected Relationship type. These are editable on the device details page where you can also assign a parent node.

Hovering over devices in the tree reveals connection details and tooltips for quick inspection.

Note

Selecting certain relationship types hides the device in the default device views. You can change this behavior by adjusting the UI_hide_rel_types setting, which by default is set to [\"nic\",\"virtual\"]. This means devices with devParentRelType set to nic or virtual will not be shown. All devices, regardless of relationship type, are always accessible in the All devices view.

"},{"location":"NETWORK_TREE/#summary","title":"\u2705 Summary","text":"

To configure devices on the Network page:

  • Ensure a device with MAC Internet is set up as the root
  • Assign valid Type values to switches, routers, and other supported nodes that represent network devices
  • Use the Assign button to connect devices logically to their parent node

Need to reset or undo changes? Use backups or bulk editing to manage devices at scale. You can also automate device assignment with Workflows.

"},{"location":"NOTIFICATIONS/","title":"Notifications \ud83d\udce7","text":"

There are 4 ways to influence notifications:

  1. On the device itself
  2. On the settings of the plugin
  3. Globally
  4. Ignoring devices

Note

It's recommended to use the same schedule interval for all plugins responsible for scanning devices, otherwise false positives might be reported if different devices are discovered by different plugins. Check the Settings > Enabled settings section for a warning:

"},{"location":"NOTIFICATIONS/#device-settings","title":"Device settings \ud83d\udcbb","text":"

The following device properties influence notifications. You can:

  1. Alert Events - Enables alerts of connections, disconnections, IP changes (down and down reconnected notifications are still sent even if this is disabled).
  2. Alert Down - Alerts when a device goes down. This setting overrides a disabled Alert Events setting, so you will get a notification of a device going down even if you don't have Alert Events ticked. Disabling this will disable down and down reconnected notifications on the device.
  3. Skip repeated notifications, if for example you know there is a temporary issue and want to pause the same notification for this device for a given time.
  4. Require NICs Online - Indicates whether this device should be considered online only if all associated NICs (devices with the nic relationship type) are online. If disabled, the device is considered online if any NIC is online. If a NIC is online, it sets the parent (this) device's status to online irrespective of the detected device's status. The Relationship type is set on the child device.

Note

Please read through the NTFPRCS plugin documentation to understand how device and global settings influence the notification processing.

"},{"location":"NOTIFICATIONS/#plugin-settings","title":"Plugin settings \ud83d\udd0c","text":"

On almost all plugins there are 2 core settings, <plugin>_WATCH and <plugin>_REPORT_ON.

  1. <plugin>_WATCH specifies the columns which the app should watch. If watched columns change the device state is considered changed. This changed status is then used to decide to send out notifications based on the <plugin>_REPORT_ON setting.
  2. <plugin>_REPORT_ON lets you specify on which events the app should notify you. This is related to the <plugin>_WATCH setting. So if you select watched-changed and in <plugin>_WATCH you only select Watched_Value1, then a notification is triggered if Watched_Value1 is changed from the previous value, but no notification is sent if Watched_Value2 changes.

Click the Read more in the docs link at the top of each plugin to get more details on how the given plugin works.

"},{"location":"NOTIFICATIONS/#global-settings","title":"Global settings \u2699","text":"

In Notification Processing settings, you can specify blanket rules. These allow you to specify exceptions to the Plugin and Device settings and will override those.

  1. Notify on (NTFPRCS_INCLUDED_SECTIONS) allows you to specify which events trigger notifications. Usual setups will have new_devices, down_devices, and possibly down_reconnected set. Including plugin (dependent on the Plugin <plugin>_WATCH and <plugin>_REPORT_ON settings) and events (dependent on the on-device Alert Events setting) might be too noisy for most setups. More info in the NTFPRCS plugin on what events these selections include.
  2. Alert down after (NTFPRCS_alert_down_time) is useful if you want to wait for some time before the system sends out a down notification for a device. This is related to the on-device Alert down setting and only devices with this checked will trigger a down notification.

You can filter out unwanted notifications globally. This could be because of a misbehaving device (GoogleNest/GoogleHub (See also ARPSCAN docs and the --exclude-broadcast flag)) which flips between IP addresses, or because you want to ignore new device notifications of a certain pattern.

  1. Events Filter (NTFPRCS_event_condition) - Filter out Events from notifications.
  2. New Devices Filter (NTFPRCS_new_dev_condition) - Filter out New Devices from notifications, but log and keep a new device in the system.
"},{"location":"NOTIFICATIONS/#ignoring-devices","title":"Ignoring devices \ud83d\udcbb","text":"

You can completely ignore detected devices globally. This could be because your instance detects docker containers, because you want to ignore devices from a specific manufacturer via MAC rules, or because you want to ignore devices on a specific IP range.

  1. Ignored MACs (NEWDEV_ignored_MACs) - List of MACs to ignore.
  2. Ignored IPs (NEWDEV_ignored_IPs) - List of IPs to ignore.
"},{"location":"PERFORMANCE/","title":"Performance Optimization Guide","text":"

There are several ways to improve the application's performance. The application has been tested on a range of devices, from Raspberry Pi 4 units to NAS and NUC systems. If you are running the application on a lower-end device, fine-tuning the performance settings can significantly improve the user experience.

"},{"location":"PERFORMANCE/#common-causes-of-slowness","title":"Common Causes of Slowness","text":"

Performance issues are usually caused by:

  • Incorrect settings \u2013 The app may restart unexpectedly. Check app.log under Maintenance \u2192 Logs for details.
  • Too many background processes \u2013 Disable unnecessary scanners.
  • Long scan durations \u2013 Limit the number of scanned devices.
  • Excessive disk operations \u2013 Optimize scanning and logging settings.
  • Maintenance plugin failures \u2013 If cleanup tasks fail, performance can degrade over time.

The application performs regular maintenance and database cleanup. If these tasks are failing, you will see slowdowns.

"},{"location":"PERFORMANCE/#database-and-log-file-size","title":"Database and Log File Size","text":"

A large database or oversized log files can impact performance. You can check database and table sizes on the Maintenance page.

Note

  • For ~100 devices, the database should be around 50 MB.
  • No table should exceed 10,000 rows in a healthy system.
  • Actual values vary based on network activity and plugin settings.
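You can sanity-check these numbers from the command line as well; this sketch assumes the database is mapped to /local_data_dir/db/app.db on the host and that the sqlite3 CLI is installed:

# Database file size
ls -lh /local_data_dir/db/app.db

# Row counts of two tables that tend to grow (table names referenced elsewhere in these docs)
sqlite3 /local_data_dir/db/app.db "SELECT COUNT(*) FROM Devices;"
sqlite3 /local_data_dir/db/app.db "SELECT COUNT(*) FROM Plugins_Objects;"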
"},{"location":"PERFORMANCE/#maintenance-plugins","title":"Maintenance Plugins","text":"

Two plugins help maintain the system\u2019s performance:

"},{"location":"PERFORMANCE/#1-database-cleanup-dbclnp","title":"1. Database Cleanup (DBCLNP)","text":"
  • Handles database maintenance and cleanup.
  • See the DB Cleanup Plugin Docs.
  • Ensure it\u2019s not failing by checking logs.
  • Adjust the schedule (DBCLNP_RUN_SCHD) and timeout (DBCLNP_RUN_TIMEOUT) if necessary.
"},{"location":"PERFORMANCE/#2-maintenance-maint","title":"2. Maintenance (MAINT)","text":"
  • Cleans logs and performs general maintenance tasks.
  • See the Maintenance Plugin Docs.
  • Verify proper operation via logs.
  • Adjust the schedule (MAINT_RUN_SCHD) and timeout (MAINT_RUN_TIMEOUT) if needed.
"},{"location":"PERFORMANCE/#scan-frequency-and-coverage","title":"Scan Frequency and Coverage","text":"

Frequent scans increase resource usage, network traffic, and database read/write cycles.

"},{"location":"PERFORMANCE/#optimizations","title":"Optimizations","text":"
  • Increase scan intervals (<PLUGIN>_RUN_SCHD) on busy networks or low-end hardware.
  • Increase timeouts (<PLUGIN>_RUN_TIMEOUT) to avoid plugin failures.
  • Reduce subnet size \u2013 e.g., use /24 instead of /16 to reduce scan load.

Some plugins also include options to limit which devices are scanned. If certain plugins consistently run long, consider narrowing their scope.

For example, the ICMP plugin allows scanning only IPs that match a specific regular expression.

"},{"location":"PERFORMANCE/#storing-temporary-files-in-memory","title":"Storing Temporary Files in Memory","text":"

On devices with slower I/O, you can improve performance by storing temporary files (and optionally the database) in memory using tmpfs.

Warning

Storing the database in tmpfs is generally discouraged. Use this only if device data and historical records are not required to persist. If needed, you can pair this setup with the SYNC plugin to store important persistent data on another node. See the Plugins docs for details.

Using tmpfs reduces disk writes and speeds up I/O, but all data stored in memory will be lost on restart.

Below is an optimized docker-compose.yml snippet using non-persistent logs, API data, and DB:

services:\n  netalertx:\n    container_name: netalertx\n    # Use this line for the stable release\n    image: \"ghcr.io/jokob-sk/netalertx:latest\"\n    # Or use this line for the latest development build\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\"\n    network_mode: \"host\"\n    restart: unless-stopped\n\n    cap_drop:       # Drop all capabilities for enhanced security\n      - ALL\n    cap_add:        # Re-add necessary capabilities\n      - NET_RAW\n      - NET_ADMIN\n      - NET_BIND_SERVICE\n\n    volumes:\n      - ${APP_FOLDER}/netalertx/config:/data/config\n      - /etc/localtime:/etc/localtime:ro\n\n    tmpfs:\n      # All writable runtime state resides under /tmp; comment out to persist logs between restarts\n      - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n      - \"/data/db:uid=20211,gid=20211,mode=1700\"  # \u26a0 You will lose historical data on restart\n\n    environment:\n      - PORT=${PORT}\n      - APP_CONF_OVERRIDE=${APP_CONF_OVERRIDE}\n
"},{"location":"PIHOLE_GUIDE/","title":"Integration with PiHole","text":"

NetAlertX comes with 3 plugins suitable for integrating with your existing PiHole instance. The first plugin uses the v6 API, the second uses a direct SQLite DB connection, and the third leverages the dhcp.leases file generated by PiHole. You can combine multiple approaches and also supplement scans with other plugins.

"},{"location":"PIHOLE_GUIDE/#approach-1-piholeapi-plugin-import-devices-directly-from-pihole-v6-api","title":"Approach 1: PIHOLEAPI Plugin - Import devices directly from PiHole v6 API","text":"

To use this approach make sure the Web UI password in Pi-hole is set.

| Setting | Description | Recommended value |
|---|---|---|
| PIHOLEAPI_URL | Your Pi-hole base URL including port. | http://192.168.1.82:9880/ |
| PIHOLEAPI_RUN_SCHD | If you run multiple device scanner plugins, align the schedules of all plugins to the same value. | */5 * * * * |
| PIHOLEAPI_PASSWORD | The Web UI base64 encoded (en-/decoding handled by the app) admin password. | passw0rd |
| PIHOLEAPI_SSL_VERIFY | Whether to verify HTTPS certificates. Disable only for self-signed certificates. | False |
| PIHOLEAPI_API_MAXCLIENTS | Maximum number of devices to request from Pi-hole. Defaults are usually fine. | 500 |
| PIHOLEAPI_FAKE_MAC | Generate FAKE MAC from IP. | False |

Check the PiHole API plugin readme for details and troubleshooting.

"},{"location":"PIHOLE_GUIDE/#docker-compose-changes","title":"docker-compose changes","text":"

No changes needed

"},{"location":"PIHOLE_GUIDE/#approach-2-dhcplss-plugin-import-devices-from-the-pihole-dhcp-leases-file","title":"Approach 2: DHCPLSS Plugin - Import devices from the PiHole DHCP leases file","text":""},{"location":"PIHOLE_GUIDE/#settings","title":"Settings","text":"Setting Description Recommended value DHCPLSS_RUN When the plugin should run. schedule DHCPLSS_RUN_SCHD If you run multiple device scanner plugins, align the schedules of all plugins to the same value. */5 * * * * DHCPLSS_paths_to_check You need to map the value in this setting in the docker-compose.yml file. The in-container path must contain pihole so it's parsed correctly. ['/etc/pihole/dhcp.leases']

Check the DHCPLSS plugin readme for details

"},{"location":"PIHOLE_GUIDE/#docker-compose-changes_1","title":"docker-compose changes","text":"Path Description :/etc/pihole/dhcp.leases PiHole's dhcp.leases file. Required if you want to use PiHole dhcp.leases file. This has to be matched with a corresponding DHCPLSS_paths_to_check setting entry (the path in the container must contain pihole)"},{"location":"PIHOLE_GUIDE/#approach-3-pihole-plugin-import-devices-directly-from-the-pihole-database","title":"Approach 3: PIHOLE Plugin - Import devices directly from the PiHole database","text":"Setting Description Recommended value PIHOLE_RUN When the plugin should run. schedule PIHOLE_RUN_SCHD If you run multiple device scanner plugins, align the schedules of all plugins to the same value. */5 * * * * PIHOLE_DB_PATH You need to map the value in this setting in the docker-compose.yml file. /etc/pihole/pihole-FTL.db

Check the PiHole plugin readme for details

"},{"location":"PIHOLE_GUIDE/#docker-compose-changes_2","title":"docker-compose changes","text":"Path Description :/etc/pihole/pihole-FTL.db PiHole's pihole-FTL.db database file.

Check out other plugins that can help you discover more about your network or check how to scan Remote networks.

"},{"location":"PLUGINS/","title":"\ud83d\udd0c Plugins","text":"

NetAlertX supports additional plugins to extend its functionality, each with its own settings and options. Plugins can be loaded via the General -> LOADED_PLUGINS setting. For custom plugin development, refer to the Plugin development guide.

Note

Please check this Plugins debugging guide and the corresponding Plugin documentation in the below table if you are facing issues.

"},{"location":"PLUGINS/#quick-start","title":"\u26a1 Quick start","text":"

Tip

You can load additional Plugins via the General -> LOADED_PLUGINS setting. You need to save the settings for the new plugins to load (cache/page reload may be necessary).

  1. Pick your \ud83d\udd0d dev scanner plugin (e.g. ARPSCAN or NMAPDEV), or import devices into the application with an \ud83d\udce5 importer plugin. (See Enabling plugins below)
  2. Pick a \u25b6\ufe0f publisher plugin, if you want to send notifications. If you don't see a publisher you'd like to use, look at the \ud83d\udcda_publisher_apprise plugin which is a proxy for over 80 notification services.
  3. Setup your Network topology diagram
  4. Fine-tune Notifications
  5. Setup Workflows
  6. Backup your setup
  7. Contribute and Create custom plugins
"},{"location":"PLUGINS/#plugin-types","title":"Plugin types","text":"Plugin type Icon Description When to run Required Data source ? publisher \u25b6\ufe0f Sending notifications to services. on_notification \u2716 Script dev scanner \ud83d\udd0d Create devices in the app, manages online/offline device status. schedule \u2716 Script / SQLite DB name discovery \ud83c\udd8e Discovers names of devices via various protocols. before_name_updates, schedule \u2716 Script importer \ud83d\udce5 Importing devices from another service. schedule \u2716 Script / SQLite DB system \u2699 Providing core system functionality. schedule / always on \u2716/\u2714 Script / Template other \u267b Other plugins misc \u2716 Script / Template"},{"location":"PLUGINS/#features","title":"Features","text":"Icon Description \ud83d\udda7 Auto-imports the network topology diagram \ud83d\udd04 Has the option to sync some data back into the plugin source"},{"location":"PLUGINS/#available-plugins","title":"Available Plugins","text":"

Device-detecting plugins insert values into the CurrentScan database table. The plugins that are not required are safe to ignore, however, it makes sense to have at least some device-detecting plugins enabled, such as ARPSCAN or NMAPDEV.

ID Plugin docs Type Description Features Required APPRISE _publisher_apprise \u25b6\ufe0f Apprise notification proxy ARPSCAN arp_scan \ud83d\udd0d ARP-scan on current network AVAHISCAN avahi_scan \ud83c\udd8e Avahi (mDNS-based) name resolution ASUSWRT asuswrt_import \ud83d\udd0d Import connected devices from AsusWRT CSVBCKP csv_backup \u2699 CSV devices backup CUSTPROP custom_props \u2699 Managing custom device properties values Yes DBCLNP db_cleanup \u2699 Database cleanup Yes* DDNS ddns_update \u2699 DDNS update DHCPLSS dhcp_leases \ud83d\udd0d/\ud83d\udce5/\ud83c\udd8e Import devices from DHCP leases DHCPSRVS dhcp_servers \u267b DHCP servers DIGSCAN dig_scan \ud83c\udd8e Dig (DNS) Name resolution FREEBOX freebox \ud83d\udd0d/\u267b/\ud83c\udd8e Pull data and names from Freebox/Iliadbox ICMP icmp_scan \u267b ICMP (ping) status checker INTRNT internet_ip \ud83d\udd0d Internet IP scanner INTRSPD internet_speedtest \u267b Internet speed test IPNEIGH ipneigh \ud83d\udd0d Scan ARP (IPv4) and NDP (IPv6) tables LUCIRPC luci_import \ud83d\udd0d Import connected devices from OpenWRT MAINT maintenance \u2699 Maintenance of logs, etc. MQTT _publisher_mqtt \u25b6\ufe0f MQTT for synching to Home Assistant MTSCAN mikrotik_scan \ud83d\udd0d Mikrotik device import & sync NBTSCAN nbtscan_scan \ud83c\udd8e Nbtscan (NetBIOS-based) name resolution NEWDEV newdev_template \u2699 New device template Yes NMAP nmap_scan \u267b Nmap port scanning & discovery NMAPDEV nmap_dev_scan \ud83d\udd0d Nmap dev scan on current network NSLOOKUP nslookup_scan \ud83c\udd8e NSLookup (DNS-based) name resolution NTFPRCS notification_processing \u2699 Notification processing Yes NTFY _publisher_ntfy \u25b6\ufe0f NTFY notifications OMDSDN omada_sdn_imp \ud83d\udce5/\ud83c\udd8e \u274c UNMAINTAINED use OMDSDNOPENAPI \ud83d\udda7 \ud83d\udd04 OMDSDNOPENAPI omada_sdn_openapi \ud83d\udce5/\ud83c\udd8e OMADA TP-Link import via OpenAPI \ud83d\udda7 PIHOLE pihole_scan \ud83d\udd0d/\ud83c\udd8e/\ud83d\udce5 Pi-hole device import & sync PIHOLEAPI pihole_api_scan \ud83d\udd0d/\ud83c\udd8e/\ud83d\udce5 Pi-hole device import & sync via API v6+ PUSHSAFER _publisher_pushsafer \u25b6\ufe0f Pushsafer notifications PUSHOVER _publisher_pushover \u25b6\ufe0f Pushover notifications SETPWD set_password \u2699 Set password Yes SMTP _publisher_email \u25b6\ufe0f Email notifications SNMPDSC snmp_discovery \ud83d\udd0d/\ud83d\udce5 SNMP device import & sync SYNC sync \ud83d\udd0d/\u2699/\ud83d\udce5 Sync & import from NetAlertX instances \ud83d\udda7 \ud83d\udd04 Yes TELEGRAM _publisher_telegram \u25b6\ufe0f Telegram notifications UI ui_settings \u267b UI specific settings Yes UNFIMP unifi_import \ud83d\udd0d/\ud83d\udce5/\ud83c\udd8e UniFi device import & sync \ud83d\udda7 UNIFIAPI unifi_api_import \ud83d\udd0d/\ud83d\udce5/\ud83c\udd8e UniFi device import (SM API, multi-site) VNDRPDT vendor_update \u2699 Vendor database update WEBHOOK _publisher_webhook \u25b6\ufe0f Webhook notifications WEBMON website_monitor \u267b Website down monitoring WOL wake_on_lan \u267b Automatic wake-on-lan

* The database cleanup plugin (DBCLNP) is not required, but the app will become unusable after a while if not executed.

\u274c marked for removal/unmaintained - looking for help

\u231a It's recommended to use the same schedule interval for all plugins responsible for discovering new devices.

"},{"location":"PLUGINS/#enabling-plugins","title":"Enabling plugins","text":"

Plugins can be enabled via Settings, and can be disabled as needed.

  1. Research which plugin you'd like to use, enable DISCOVER_PLUGINS and load the required plugins in Settings via the LOADED_PLUGINS setting.
  2. Save the changes and review the Settings of the newly loaded plugins.
  3. Change the <prefix>_RUN Setting to the recommended or custom value as per the documentation of the given setting
    • If using schedule on a \ud83d\udd0d dev scanner plugin, make sure the schedules are the same across all \ud83d\udd0d dev scanner plugins
"},{"location":"PLUGINS/#disabling-unloading-and-ignoring-plugins","title":"Disabling, Unloading and Ignoring plugins","text":"
  1. Change the <prefix>_RUN Setting to disabled if you want to disable the plugin, but keep the settings
  2. If you want to speed up the application, you can unload the plugin by unselecting it in the LOADED_PLUGINS setting.
    • Careful: once you save the Settings, unloaded plugin settings will be lost (old app.conf files are kept in the /config folder).
  3. You can completely ignore plugins by placing an ignore_plugin file into the plugin directory (see the sketch below). Ignored plugins won't show up in the LOADED_PLUGINS setting.
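A quick sketch of the ignore_plugin approach, assuming plugins live under /app/front/plugins inside a container named netalertx (adjust the path to your install and replace <plugin_code_name> with the plugin's code_name):

# Create an empty marker file so the plugin no longer appears in LOADED_PLUGINS
docker exec netalertx touch /app/front/plugins/<plugin_code_name>/ignore_plugin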
"},{"location":"PLUGINS/#developing-new-custom-plugins","title":"\ud83c\udd95 Developing new custom plugins","text":"

If you want to develop a custom plugin, please read this Plugin development guide.

"},{"location":"PLUGINS_DEV/","title":"Creating a custom plugin","text":"

NetAlertX comes with a plugin system to feed events from third-party scripts into the UI and then send notifications, if desired. The core functionality this plugin system supports is:

  • dynamic creation of a simple UI to interact with the discovered objects,
  • filtering of displayed values in the Devices UI
  • surface settings of plugins in the UI,
  • different column types for reported values to e.g. link back to a device
  • import objects into existing NetAlertX database tables

(Currently, update/overwriting of existing objects is only supported for devices via the CurrentScan table.)

Note

For a high-level overview of how the config.json is used and its lifecycle, check the config.json Lifecycle in NetAlertX Guide.

"},{"location":"PLUGINS_DEV/#watch-the-video","title":"\ud83c\udfa5 Watch the video:","text":"

Tip

Read this guide Development environment setup guide to set up your local environment for development. \ud83d\udc69\u200d\ud83d\udcbb

"},{"location":"PLUGINS_DEV/#screenshots","title":"\ud83d\udcf8 Screenshots","text":""},{"location":"PLUGINS_DEV/#use-cases","title":"Use cases","text":"

Example use cases for plugins could be:

  • Monitor a web service and alert me if it's down
  • Import devices from dhcp.leases files instead/complementary to using PiHole or arp-scans
  • Creating ad-hoc UI tables from existing data in the NetAlertX database, e.g. to show all open ports on devices, to list devices that disconnected in the last hour, etc.
  • Using other device discovery methods on the network and importing the data as new devices
  • Creating a script to create FAKE devices based on user input via custom settings
  • ...at this point the limitation is mostly the creativity rather than the capability (there might be edge cases and a need to support more form controls for user input of custom settings, but you probably get the idea)

If you wish to develop a plugin, please check the existing plugin structure. Once the settings are saved by the user they need to be removed from the app.conf file manually if you want to re-initialize them from the config.json of the plugin.

"},{"location":"PLUGINS_DEV/#disclaimer","title":"\u26a0 Disclaimer","text":"

Please read the below carefully if you'd like to contribute with a plugin yourself. This documentation file might be outdated, so double-check the sample plugins as well.

"},{"location":"PLUGINS_DEV/#plugin-file-structure-overview","title":"Plugin file structure overview","text":"

\u26a0\ufe0f The folder name must be the same as the code name value in \"code_name\": \"<value>\". The unique prefix needs to be unique compared to the other settings prefixes, e.g. the prefix APPRISE is already in use.

| File | Required (plugin type) | Description |
|---|---|---|
| config.json | yes | Contains the plugin configuration (manifest) including the settings available to the user. |
| script.py | no | The Python script itself. You may call any valid linux command. |
| last_result.<prefix>.log | no | The file used to interface between NetAlertX and the plugin. Required for a script plugin if you want to feed data into the app. Stored in the /api/log/plugins/ |
| script.log | no | Logging output (recommended) |
| README.md | yes | Any setup considerations or overview |

More on specifics below.

"},{"location":"PLUGINS_DEV/#column-order-and-values-plugins-interface-contract","title":"Column order and values (plugins interface contract)","text":"

Important

Spend some time reading and trying to understand the below table. This is the interface between the Plugins and the core application. The application expects 9 or 13 values. The first 9 values are mandatory. The next 4 values (HelpVal1 to HelpVal4) are optional. However, if you use any of these optional values (e.g., HelpVal1), you need to supply all optional values (e.g., HelpVal2, HelpVal3, and HelpVal4). If a value is not used, it should be padded with null.

| Order | Represented Column Value | Required | Description |
|---|---|---|---|
| 0 | Object_PrimaryID | yes | The primary ID used to group Events under. |
| 1 | Object_SecondaryID | no | Optional secondary ID to create a relationship between other entities, such as a MAC address |
| 2 | DateTime | yes | When the event occurred, in the format 2023-01-02 15:56:30 |
| 3 | Watched_Value1 | yes | A value that is watched and users can receive notifications if it changed compared to the previously saved entry. For example IP address |
| 4 | Watched_Value2 | no | As above |
| 5 | Watched_Value3 | no | As above |
| 6 | Watched_Value4 | no | As above |
| 7 | Extra | no | Any other data you want to pass and display in NetAlertX and the notifications |
| 8 | ForeignKey | no | A foreign key that can be used to link to the parent object (usually a MAC address) |
| 9 | HelpVal1 | no (optional) | A helper value |
| 10 | HelpVal2 | no (optional) | A helper value |
| 11 | HelpVal3 | no (optional) | A helper value |
| 12 | HelpVal4 | no (optional) | A helper value |

Note

De-duplication is run once an hour on the Plugins_Objects database table and duplicate entries with the same value in columns Object_PrimaryID, Object_SecondaryID, Plugin (auto-filled based on unique_prefix of the plugin), UserData (can be populated with the \"type\": \"textbox_save\" column type) are removed.

"},{"location":"PLUGINS_DEV/#configjson-structure","title":"config.json structure","text":"

The config.json file is the manifest of the plugin. It contains mainly settings definitions and the mapping of Plugin objects to NetAlertX objects.

"},{"location":"PLUGINS_DEV/#execution-order","title":"Execution order","text":"

The execution order is used to specify when a plugin is executed. This is useful if a plugin has access to and surfaces more information than others. If a device is detected by 2 plugins and inserted into the CurrentScan table, the plugin with the higher priority (e.g.: Layer_0 is a higher priority than Layer_1) will insert its values first. These values (devices) will then be prioritized over any values inserted later.

{\n    \"execution_order\" : \"Layer_0\"\n}\n
"},{"location":"PLUGINS_DEV/#supported-data-sources","title":"Supported data sources","text":"

Currently, these data sources are supported (valid data_source value).

| Name | data_source value | Needs to return a \"table\"* | Overview (more details on this page below) |
|---|---|---|---|
| Script | script | no | Executes any linux command in the CMD setting. |
| NetAlertX DB query | app-db-query | yes | Executes a SQL query on the NetAlertX database in the CMD setting. |
| Template | template | no | Used to generate internal settings, such as default values. |
| External SQLite DB query | sqlite-db-query | yes | Executes a SQL query from the CMD setting on an external SQLite database mapped in the DB_PATH setting. |
| Plugin type | plugin_type | no | Specifies the type of the plugin and in which section the Plugin settings are displayed (one of general/system/scanner/other/publisher). |
  • \"Needs to return a \"table\" means that the application expects a last_result.<prefix>.log file with some results. It's not a blocker, however warnings in the app.log might be logged.

\ud83d\udd0eExample json \"data_source\": \"app-db-query\" If you want to display plugin objects or import devices into the app, data sources have to return a \"table\" of the exact structure as outlined above.

You can show or hide the UI on the \"Plugins\" page and \"Plugins\" tab for a plugin on devices via the show_ui property:

\ud83d\udd0eExample json \"show_ui\": true,

"},{"location":"PLUGINS_DEV/#data_source-script","title":"\"data_source\": \"script\"","text":"

If the data_source is set to script the CMD setting (that you specify in the settings array section in the config.json) contains an executable Linux command, that usually generates a last_result.<prefix>.log file (not required if you don't import any data into the app). The last_result.<prefix>.log file needs to be saved in /api/log/plugins.

Important

A lot of the work is taken care of by the plugin_helper.py library. You don't need to manage the last_result.<prefix>.log file if using the helper objects. Check other script.py of other plugins for details.

The content of the last_result.<prefix>.log file needs to contain the columns as defined in the \"Column order and values\" section above. The order of columns can't be changed. After every scan it should contain only the results from the latest scan/execution.

  • The format of the last_result.<prefix>.log is a csv-like file with the pipe | as a separator.
  • 9 (nine) values need to be supplied, so every line needs to contain 8 pipe separators. Empty values are represented by null.
  • Don't render \"headers\" for these \"columns\". Every scan result/event entry needs to be on a new line.
  • You can find which \"columns\" need to be present, and if the value is required or optional, in the \"Column order and values\" section.
  • The order of these \"columns\" can't be changed.
"},{"location":"PLUGINS_DEV/#last_resultprefixlog-examples","title":"\ud83d\udd0e last_result.prefix.log examples","text":"

Valid CSV:

\nhttps://www.google.com|null|2023-01-02 15:56:30|200|0.7898|null|null|null|null\nhttps://www.duckduckgo.com|192.168.0.1|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine|ff:ee:ff:11:ff:11\n\n

Invalid CSV with different errors on each line:

\nhttps://www.google.com|null|2023-01-02 15:56:30|200|0.7898||null|null|null\nhttps://www.duckduckgo.com|null|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine|\n|https://www.duckduckgo.com|null|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine|null\nnull|192.168.1.1|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine\nhttps://www.duckduckgo.com|192.168.1.1|2023-01-02 15:56:30|null|0.9898|null|null|Best search engine\nhttps://www.google.com|null|2023-01-02 15:56:30|200|0.7898|||\nhttps://www.google.com|null|2023-01-02 15:56:30|200|0.7898|\n\n
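Putting the format rules together, here is a minimal shell sketch of how a script-type plugin could write one well-formed row; the MYPLUGIN prefix and all values are purely illustrative, and the destination folder is the /api/log/plugins/ location mentioned above:

# One row = 9 pipe-separated values (8 pipes); unused values are the literal string null
RESULT_FILE=/api/log/plugins/last_result.MYPLUGIN.log

# Use > for the first row of a scan (the file must only hold the latest results), >> for further rows
printf '%s|%s|%s|%s|%s|%s|%s|%s|%s\n' \
  "https://www.example.com" "null" "$(date '+%Y-%m-%d %H:%M:%S')" \
  "200" "0.42" "null" "null" "null" "null" > "$RESULT_FILE"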
"},{"location":"PLUGINS_DEV/#data_source-app-db-query","title":"\"data_source\": \"app-db-query\"","text":"

If the data_source is set to app-db-query, the CMD setting needs to contain a SQL query rendering the columns as defined in the \"Column order and values\" section above. The order of columns is important.

This SQL query is executed on the app.db SQLite database file.

\ud83d\udd0eExample

SQL query example:

SQL SELECT dv.devName as Object_PrimaryID, cast(dv.devLastIP as VARCHAR(100)) || ':' || cast( SUBSTR(ns.Port ,0, INSTR(ns.Port , '/')) as VARCHAR(100)) as Object_SecondaryID, datetime() as DateTime, ns.Service as Watched_Value1, ns.State as Watched_Value2, 'null' as Watched_Value3, 'null' as Watched_Value4, ns.Extra as Extra, dv.devMac as ForeignKey FROM (SELECT * FROM Nmap_Scan) ns LEFT JOIN (SELECT devName, devMac, devLastIP FROM Devices) dv ON ns.MAC = dv.devMac

Required CMD setting example with the above query (you can set \"type\": \"label\" if you want to make it uneditable in the UI):

json { \"function\": \"CMD\", \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"input\", \"elementOptions\" : [] ,\"transformers\": []}]}, \"default_value\":\"SELECT dv.devName as Object_PrimaryID, cast(dv.devLastIP as VARCHAR(100)) || ':' || cast( SUBSTR(ns.Port ,0, INSTR(ns.Port , '/')) as VARCHAR(100)) as Object_SecondaryID, datetime() as DateTime, ns.Service as Watched_Value1, ns.State as Watched_Value2, 'null' as Watched_Value3, 'null' as Watched_Value4, ns.Extra as Extra FROM (SELECT * FROM Nmap_Scan) ns LEFT JOIN (SELECT devName, devMac, devLastIP FROM Devices) dv ON ns.MAC = dv.devMac\", \"options\": [], \"localized\": [\"name\", \"description\"], \"name\" : [{ \"language_code\":\"en_us\", \"string\" : \"SQL to run\" }], \"description\": [{ \"language_code\":\"en_us\", \"string\" : \"This SQL query is used to populate the coresponding UI tables under the Plugins section.\" }] }

"},{"location":"PLUGINS_DEV/#data_source-template","title":"\"data_source\": \"template\"","text":"

In most cases, it is used to initialize settings. Check the newdev_template plugin for details.

"},{"location":"PLUGINS_DEV/#data_source-sqlite-db-query","title":"\"data_source\": \"sqlite-db-query\"","text":"

You can execute a SQL query on an external database connected to the current NetAlertX database via a temporary EXTERNAL_<unique prefix>. prefix.

For example for PIHOLE (\"unique_prefix\": \"PIHOLE\") it is EXTERNAL_PIHOLE.. The external SQLite database file has to be mapped in the container to the path specified in the DB_PATH setting:

\ud83d\udd0eExample

json ... { \"function\": \"DB_PATH\", \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"input\", \"elementOptions\" : [{\"readonly\": \"true\"}] ,\"transformers\": []}]}, \"default_value\":\"/etc/pihole/pihole-FTL.db\", \"options\": [], \"localized\": [\"name\", \"description\"], \"name\" : [{ \"language_code\":\"en_us\", \"string\" : \"DB Path\" }], \"description\": [{ \"language_code\":\"en_us\", \"string\" : \"Required setting for the <code>sqlite-db-query</code> plugin type. Is used to mount an external SQLite database and execute the SQL query stored in the <code>CMD</code> setting.\" }] } ...

The actual SQL query you want to execute is then stored as a CMD setting, similar to a Plugin of the app-db-query plugin type. The format has to adhere to the format outlined in the \"Column order and values\" section above.

\ud83d\udd0eExample

Notice the EXTERNAL_PIHOLE. prefix.

json { \"function\": \"CMD\", \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"input\", \"elementOptions\" : [] ,\"transformers\": []}]}, \"default_value\":\"SELECT hwaddr as Object_PrimaryID, cast('http://' || (SELECT ip FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1) as VARCHAR(100)) || ':' || cast( SUBSTR((SELECT name FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1), 0, INSTR((SELECT name FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1), '/')) as VARCHAR(100)) as Object_SecondaryID, datetime() as DateTime, macVendor as Watched_Value1, lastQuery as Watched_Value2, (SELECT name FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1) as Watched_Value3, 'null' as Watched_Value4, '' as Extra, hwaddr as ForeignKey FROM EXTERNAL_PIHOLE.network WHERE hwaddr NOT LIKE 'ip-%' AND hwaddr <> '00:00:00:00:00:00'; \", \"options\": [], \"localized\": [\"name\", \"description\"], \"name\" : [{ \"language_code\":\"en_us\", \"string\" : \"SQL to run\" }], \"description\": [{ \"language_code\":\"en_us\", \"string\" : \"This SQL query is used to populate the coresponding UI tables under the Plugins section. This particular one selects data from a mapped PiHole SQLite database and maps it to the corresponding Plugin columns.\" }] }
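Conceptually, the EXTERNAL_<unique prefix>. alias behaves like SQLite's ATTACH DATABASE: the external file is attached under a schema name and its tables are queried through that prefix. The sketch below only illustrates that idea (paths and alias are assumptions); it is not the app's actual implementation:

```python
import sqlite3

# Illustrative paths only (assumptions)
APP_DB = "/app/db/app.db"
PIHOLE_DB = "/etc/pihole/pihole-FTL.db"

conn = sqlite3.connect(APP_DB)
# Attach the external database under an alias matching the plugin prefix
conn.execute("ATTACH DATABASE ? AS EXTERNAL_PIHOLE", (PIHOLE_DB,))

# Tables of the attached database are now reachable via the EXTERNAL_PIHOLE. prefix
for row in conn.execute("SELECT hwaddr FROM EXTERNAL_PIHOLE.network LIMIT 5"):
    print(row[0])

conn.close()
```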

"},{"location":"PLUGINS_DEV/#filters","title":"\ud83d\udd73 Filters","text":"

Plugin entries can be filtered in the UI based on values entered into filter fields. The txtMacFilter textbox/field contains the MAC address of the currently viewed device, or simply a MAC address that is available in the mac query string (<url>?mac=aa:22:aa:22:aa:22).

Property Required Description compare_column yes Plugin column name whose value is used for comparison (left side of the equation) compare_operator yes JavaScript comparison operator compare_field_id yes The id of an input text field whose value is used for comparison (right side of the equation) compare_js_template yes JavaScript code used to convert the left and right side of the equation. {value} is replaced with input values. compare_use_quotes yes If true, then the end result of the compare_js_template is wrapped in \" quotes. Use this to compare strings.

Filters are only applied if a filter is specified, and the txtMacFilter is not undefined, or empty (--).

\ud83d\udd0eExample:

json \"data_filters\": [ { \"compare_column\" : \"Object_PrimaryID\", \"compare_operator\" : \"==\", \"compare_field_id\": \"txtMacFilter\", \"compare_js_template\": \"'{value}'.toString()\", \"compare_use_quotes\": true } ],

  1. On the pluginsCore.php page there is an input field with the id txtMacFilter:

html <input class=\"form-control\" id=\"txtMacFilter\" type=\"text\" value=\"--\">

  2. This input field is initialized via the &mac= query string.

  3. The app then proceeds to use the MAC value from this field and compares it to the value of the Object_PrimaryID database field. The compare_operator is ==.

  4. Both values, from the database field Object_PrimaryID and from the txtMacFilter, are wrapped and evaluated with the compare_js_template, that is '{value}'.toString().

  5. compare_use_quotes is set to true so '{value}'.toString() is wrapped into \" quotes.

  6. This results, for example, in this code:

javascript // left part of the expression coming from compare_column and right from the input field // notice the added quotes (\") around the left and right part of the expression \"eval('ac:82:ac:82:ac:82'.toString())\" == \"eval('ac:82:ac:82:ac:82'.toString())\"

"},{"location":"PLUGINS_DEV/#mapping-the-plugin-results-into-a-database-table","title":"\ud83d\uddfa Mapping the plugin results into a database table","text":"

Plugin results are always inserted into the standard Plugin_Objects database table. Optionally, NetAlertX can take the results of the plugin execution and insert them into an additional database table. This is enabled with the property \"mapped_to_table\" in the config.json file. The mapping of the columns is defined in the database_column_definitions array.

Note

If results are mapped to the CurrentScan table, the data is then included in the regular scan loop, so, for example, notifications for devices are sent out.

\ud83d\udd0d Example:

For example, this approach is used to implement the DHCPLSS plugin. The script parses all supplied \"dhcp.leases\" files, gets the results in the generic table format outlined in the \"Column order and values\" section above, takes individual values, and inserts them into the CurrentScan database table in the NetAlertX database. All this is achieved by:

  1. Specifying the database table into which the results are inserted by defining \"mapped_to_table\": \"CurrentScan\" in the root of the config.json file as shown below:

json { \"code_name\": \"dhcp_leases\", \"unique_prefix\": \"DHCPLSS\", ... \"data_source\": \"script\", \"localized\": [\"display_name\", \"description\", \"icon\"], \"mapped_to_table\": \"CurrentScan\", ... } 2. Defining the target column with the mapped_to_column property for individual columns in the database_column_definitions array of the config.json file. For example in the DHCPLSS plugin, I needed to map the value of the Object_PrimaryID column returned by the plugin, to the cur_MAC column in the NetAlertX database table CurrentScan. Notice the \"mapped_to_column\": \"cur_MAC\" key-value pair in the sample below.

json { \"column\": \"Object_PrimaryID\", \"mapped_to_column\": \"cur_MAC\", \"css_classes\": \"col-sm-2\", \"show\": true, \"type\": \"device_mac\", \"default_value\":\"\", \"options\": [], \"localized\": [\"name\"], \"name\":[{ \"language_code\":\"en_us\", \"string\" : \"MAC address\" }] }

  3. That's it. The app takes care of the rest. It loops through the objects discovered by the plugin, takes the results line-by-line, and inserts them into the database table specified in \"mapped_to_table\". The columns are translated from the generic plugin columns to the target table columns via the \"mapped_to_column\" property in the column definitions.

Note

You can create a column mapping with a default value via the mapped_to_column_data property. This means that the value of the given column will always be this value. That also means that the \"column\": \"NameDoesntMatter\" is not important as there is no database source column.

\ud83d\udd0d Example:

json { \"column\": \"NameDoesntMatter\", \"mapped_to_column\": \"cur_ScanMethod\", \"mapped_to_column_data\": { \"value\": \"DHCPLSS\" }, \"css_classes\": \"col-sm-2\", \"show\": true, \"type\": \"device_mac\", \"default_value\":\"\", \"options\": [], \"localized\": [\"name\"], \"name\":[{ \"language_code\":\"en_us\", \"string\" : \"MAC address\" }] }

"},{"location":"PLUGINS_DEV/#params","title":"params","text":"

Important

An easier way to access settings in scripts is the get_setting_value method:

```python
from helper import get_setting_value

# ...
NTFY_TOPIC = get_setting_value('NTFY_TOPIC')
# ...
```

The params array in the config.json is used to enable the user to change the parameters of the executed script. For example, the user wants to monitor a specific URL.

\ud83d\udd0e Example: Passing user-defined settings to a command. Let's say, you want to have a script, that is called with a user-defined parameter called urls:

bash root@server# python3 /app/front/plugins/website_monitor/script.py urls=https://google.com,https://duck.com

  • You can allow the user to add URLs to a setting with the function property set to a custom name, such as urls_to_check (this is not a reserved name from the section \"Supported settings function values\" below).
  • You specify the parameter urls in the params section of the config.json the following way (WEBMON_ is the plugin prefix automatically added to all the settings):
{\n    \"params\" : [\n        {\n            \"name\"  : \"urls\",\n            \"type\"  : \"setting\",\n            \"value\" : \"WEBMON_urls_to_check\"\n        }]\n}\n
  • Then you use this setting as an input parameter for your command in the CMD setting. Notice urls={urls} in the below json:
 {\n            \"function\": \"CMD\",\n            \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"input\", \"elementOptions\" : [] ,\"transformers\": []}]},\n            \"default_value\":\"python3 /app/front/plugins/website_monitor/script.py urls={urls}\",\n            \"options\": [],\n            \"localized\": [\"name\", \"description\"],\n            \"name\" : [{\n                \"language_code\":\"en_us\",\n                \"string\" : \"Command\"\n            }],\n            \"description\": [{\n                \"language_code\":\"en_us\",\n                \"string\" : \"Command to run\"\n            }]\n        }\n

During script execution, the app will take the command \"python3 /app/front/plugins/website_monitor/script.py urls={urls}\", take the {urls} wildcard and replace it with the value from the WEBMON_urls_to_check setting. This is because:

  1. The app checks the params entries
  2. It finds \"name\" : \"urls\"
  3. Checks the type of the urls params and finds \"type\" : \"setting\"
  4. Gets the setting name from \"value\" : \"WEBMON_urls_to_check\"
  5. IMPORTANT: in the config.json this setting is identified by \"function\":\"urls_to_check\", not \"function\":\"WEBMON_urls_to_check\"
  6. You can also use a global setting, or a setting from a different plugin
  7. The app gets the user defined value from the setting with the code name WEBMON_urls_to_check
  8. let's say the setting with the code name WEBMON_urls_to_check contains 2 values entered by the user:
  9. WEBMON_urls_to_check=['https://google.com','https://duck.com']
  10. The app takes the value from WEBMON_urls_to_check and replaces the {urls} wildcard in the setting where \"function\":\"CMD\", so you go from:
  11. python3 /app/front/plugins/website_monitor/script.py urls={urls}
  12. to
  13. python3 /app/front/plugins/website_monitor/script.py urls=https://google.com,https://duck.com
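On the script side, the plugin then only needs to read the already-substituted argument. A minimal sketch of how a script such as script.py might pick up the urls= parameter (the argument handling shown here is an assumption; real plugins may use argparse or shared helper utilities):

```python
import sys

# e.g. called as: python3 script.py urls=https://google.com,https://duck.com
urls = []
for arg in sys.argv[1:]:
    if arg.startswith("urls="):
        # strip the key and split the comma-separated values supplied by the app
        urls = arg.split("=", 1)[1].split(",")

for url in urls:
    print(f"Checking {url}")
```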

Below are some general additional notes, when defining params:

  • \"name\":\"name_value\" - used as a wildcard replacement in the CMD setting value via curly brackets {name_value}. The wildcard is replaced by the result of the \"value\" : \"param_value\" and \"type\":\"type_value\" combo configuration below.
  • \"type\":\"<sql|setting>\" - specifies the type of the param; currently only two are supported (sql, setting).
  • \"type\":\"sql\" - executes the SQL query specified in the value property. The SQL query needs to return only one column. The column is flattened and separated by commas (,), e.g.: SELECT devMac from DEVICES -> Internet,74:ac:74:ac:74:ac,44:44:74:ac:74:ac. This is then used to replace the wildcards in the CMD setting.
  • \"type\":\"setting\" - The setting code name. A combination of the value from unique_prefix + _ + function value, or otherwise the code name you can find in the Settings page under the Setting display name, e.g. PIHOLE_RUN.
  • \"value\": \"param_value\" - Needs to contain a setting code name or SQL query without wildcards.
  • \"timeoutMultiplier\" : true - indicates that the maximum timeout for the whole script run should be multiplied by the number of values in the given parameter.
  • \"base64\": true - use base64 encoding to pass the value to the script (e.g. if there are spaces)

\ud83d\udd0eExample:

json { \"params\" : [{ \"name\" : \"ips\", \"type\" : \"sql\", \"value\" : \"SELECT devLastIP from DEVICES\", \"timeoutMultiplier\" : true }, { \"name\" : \"macs\", \"type\" : \"sql\", \"value\" : \"SELECT devMac from DEVICES\" }, { \"name\" : \"timeout\", \"type\" : \"setting\", \"value\" : \"NMAP_RUN_TIMEOUT\" }, { \"name\" : \"args\", \"type\" : \"setting\", \"value\" : \"NMAP_ARGS\", \"base64\" : true }] }
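When \"base64\": true is used, the value arrives encoded and the script has to decode it before use. A hedged sketch of the script side (the args parameter name mirrors the example above; the exact argument handling is an assumption):

```python
import base64
import sys

for arg in sys.argv[1:]:
    if arg.startswith("args="):
        encoded = arg.split("=", 1)[1]
        # the app passed the value base64-encoded, e.g. so spaces survive the command line
        nmap_args = base64.b64decode(encoded).decode("utf-8")
        print(f"Decoded arguments: {nmap_args}")
```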

"},{"location":"PLUGINS_DEV/#setting-object-structure","title":"\u2699 Setting object structure","text":"

Note

The settings flow and when Plugin specific settings are applied is described under the Settings system.

Required attributes are:

Property Description \"function\" Specifies the function the setting drives or a simple unique code name. See Supported settings function values for options. \"type\" Specifies the form control used for the setting displayed in the Settings page and what values are accepted. Supported options include: - {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"input\", \"elementOptions\" : [{\"type\":\"password\"}] ,\"transformers\": [\"sha256\"]}]} \"localized\" A list of properties on the current JSON level that need to be localized. \"name\" Displayed on the Settings page. An array of localized strings. See Localized strings below. \"description\" Displayed on the Settings page. An array of localized strings. See Localized strings below. (optional) \"events\" Specifies whether to generate an execution button next to the input field of the setting. Supported values: - \"test\" - For notification plugins testing - \"run\" - Regular plugins testing (optional) \"override_value\" Used to determine a user-defined override for the setting. Useful for template-based plugins, where you can choose to leave the current value or override it with the value defined in the setting. (Work in progress) (optional) \"events\" Used to trigger the plugin. Usually used on the RUN setting. Not fully tested in all scenarios. Will show a play button next to the setting. After clicking, an event is generated for the backend in the Parameters database table to process the front-end event on the next run."},{"location":"PLUGINS_DEV/#ui-component-types-documentation","title":"UI Component Types Documentation","text":"

This section outlines the structure and types of UI components, primarily used to build HTML forms or interactive elements dynamically. Each UI component has a \"type\" which defines its structure, behavior, and rendering options.

"},{"location":"PLUGINS_DEV/#ui-component-json-structure","title":"UI Component JSON Structure","text":"

The UI component is defined as a JSON object containing a list of elements. Each element specifies how it should behave, with properties like elementType, elementOptions, and any associated transformers to modify the data. The example below demonstrates how a component with two elements (span and select) is structured:

{\n      \"function\": \"devIcon\",\n      \"type\": {\n        \"dataType\": \"string\",\n        \"elements\": [\n          {\n            \"elementType\": \"span\",\n            \"elementOptions\": [\n              { \"cssClasses\": \"input-group-addon iconPreview\" },\n              { \"getStringKey\": \"Gen_SelectToPreview\" },\n              { \"customId\": \"NEWDEV_devIcon_preview\" }\n            ],\n            \"transformers\": []\n          },\n          {\n            \"elementType\": \"select\",\n            \"elementHasInputValue\": 1,\n            \"elementOptions\": [\n              { \"cssClasses\": \"col-xs-12\" },\n              {\n                \"onChange\": \"updateIconPreview(this)\"\n              },\n              { \"customParams\": \"NEWDEV_devIcon,NEWDEV_devIcon_preview\" }\n            ],\n            \"transformers\": []\n          }          \n        ]\n      }\n}\n\n
"},{"location":"PLUGINS_DEV/#rendering-logic","title":"Rendering Logic","text":"

The code snippet provided demonstrates how the elements are iterated over to generate their corresponding HTML. Depending on the elementType, different HTML tags (like <select>, <input>, <textarea>, <button>, etc.) are created with the respective attributes such as onChange, my-data-type, and class based on the provided elementOptions. Events can also be attached to elements like buttons or select inputs.
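The actual rendering happens in the app's front-end JavaScript, but the idea can be sketched in a few lines: walk the elements array and emit one tag per elementType, turning elementOptions into attributes. The Python below is purely conceptual (names and attribute handling are assumptions), not the app's code:

```python
def render_element(element):
    """Conceptual only: build an HTML tag from one element definition."""
    tag = element.get("elementType", "input")
    attrs = []
    for option in element.get("elementOptions", []):
        for key, value in option.items():
            if key == "cssClasses":
                attrs.append(f'class="{value}"')
            elif key == "onChange":
                attrs.append(f'onchange="{value}"')
    return f"<{tag} {' '.join(attrs)}></{tag}>"

element = {
    "elementType": "select",
    "elementOptions": [{"cssClasses": "col-xs-12"}, {"onChange": "updateIconPreview(this)"}],
}
print(render_element(element))  # <select class="col-xs-12" onchange="updateIconPreview(this)"></select>
```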

"},{"location":"PLUGINS_DEV/#key-element-types","title":"Key Element Types","text":"
  • select: Renders a dropdown list. Additional options like isMultiSelect and event handlers (e.g., onChange) can be attached.
  • input: Handles various types of input fields, including checkboxes, text, and others, with customizable attributes like readOnly, placeholder, etc.
  • button: Generates clickable buttons with custom event handlers (onClick), icons, or labels.
  • textarea: Creates a multi-line input box for text input.
  • span: Used for inline text or content with customizable classes and data attributes.

Each element may also have associated events (e.g., running a scan or triggering a notification) defined under Events.

"},{"location":"PLUGINS_DEV/#supported-settings-function-values","title":"Supported settings function values","text":"

You can use any custom name for \"function\" (e.g. \"my_custom_name\"); however, the ones listed below have specific functionality attached to them.

Setting Description RUN (required) Specifies when the service is executed. Supported Options: - \"disabled\" - do not run - \"once\" - run on app start or on settings saved - \"schedule\" - if included, then a RUN_SCHD setting needs to be specified to determine the schedule - \"always_after_scan\" - run always after a scan is finished - \"before_name_updates\" - run before device names are updated (for name discovery plugins) - \"on_new_device\" - run when a new device is detected - \"before_config_save\" - run before the config is marked as saved. Useful if your plugin needs to modify the app.conf file. RUN_SCHD (required if you include \"schedule\" in the above RUN function) Cron-like scheduling is used if the RUN setting is set to schedule. CMD (required) Specifies the command that should be executed. API_SQL (not implemented) Generates a table_ + code_name + .json file as per API docs. RUN_TIMEOUT (optional) Specifies the maximum execution time of the script. If not specified, a default value of 10 seconds is used to prevent hanging. WATCH (optional) Specifies which database columns are watched for changes for this particular plugin. If not specified, no notifications are sent. REPORT_ON (optional) Specifies when to send a notification. Supported options are: - new means a new unique (unique combination of PrimaryId and SecondaryId) object was discovered. - watched-changed - means that selected Watched_ValueN columns changed - watched-not-changed - reports even on events where selected Watched_ValueN did not change - missing-in-last-scan - if the object is missing compared to previous scans

\ud83d\udd0e Example:

json { \"function\": \"RUN\", \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"select\", \"elementOptions\" : [] ,\"transformers\": []}]}, \"default_value\":\"disabled\", \"options\": [\"disabled\", \"once\", \"schedule\", \"always_after_scan\", \"on_new_device\"], \"localized\": [\"name\", \"description\"], \"name\" :[{ \"language_code\":\"en_us\", \"string\" : \"When to run\" }], \"description\": [{ \"language_code\":\"en_us\", \"string\" : \"Enable a regular scan of your services. If you select <code>schedule</code> the scheduling settings from below are applied. If you select <code>once</code> the scan is run only once on start of the application (container) for the time specified in <a href=\\\"#WEBMON_RUN_TIMEOUT\\\"><code>WEBMON_RUN_TIMEOUT</code> setting</a>.\" }] }

"},{"location":"PLUGINS_DEV/#localized-strings","title":"\ud83c\udf0dLocalized strings","text":"
  • \"language_code\":\"<en_us|es_es|de_de>\" - code name of the language string. Only these three are currently supported. At least the \"language_code\":\"en_us\" variant has to be defined.
  • \"string\" - The string to be displayed in the given language.

\ud83d\udd0e Example:

```json

{\n    \"language_code\":\"en_us\",\n    \"string\" : \"When to run\"\n}\n

```

"},{"location":"PLUGINS_DEV/#ui-settings-in-database_column_definitions","title":"UI settings in database_column_definitions","text":"

The UI adjusts how columns are displayed based on the resolvers defined in the database_column_definitions object. These are the supported form controls and related functionality:

  • Only columns with \"show\": true and also with at least an English translation will be shown in the UI.
Supported Types Description label Displays a column only. textarea_readonly Generates a read only text area and cleans up the text to display it somewhat formatted with new lines preserved. See below for information on threshold, replace. options Property Used in conjunction with types like threshold, replace, regex. options_params Property Used in conjunction with a \"options\": \"[{value}]\" template and text.select/list.select. Can specify SQL query (needs to return 2 columns SELECT devName as name, devMac as id) or Setting (not tested) to populate the dropdown. Check example below or have a look at the NEWDEV plugin config.json file. threshold The options array contains objects ordered from the lowest maximum to the highest. The corresponding hexColor is used for the value background color if it's less than the specified maximum but more than the previous one in the options array. replace The options array contains objects with an equals property, which is compared to the \"value.\" If the values are the same, the string in replacement is displayed in the UI instead of the actual \"value\". regex Applies a regex to the value. The options array contains objects with an type (must be set to regex) and param (must contain the regex itself) property. Type Definitions device_mac The value is considered to be a MAC address, and a link pointing to the device with the given MAC address is generated. device_ip The value is considered to be an IP address. A link pointing to the device with the given IP is generated. The IP is checked against the last detected IP address and translated into a MAC address, which is then used for the link itself. device_name_mac The value is considered to be a MAC address, and a link pointing to the device with the given MAC is generated. The link label is resolved as the target device name. url The value is considered to be a URL, so a link is generated. textbox_save Generates an editable and saveable text box that saves values in the database. Primarily intended for the UserData database column in the Plugins_Objects table. url_http_https Generates two links with the https and http prefix as lock icons. eval Evaluates as JavaScript. Use the variable value to use the given column value as input (e.g. '<b>${value}<b>' (replace ' with ` in your code) )

Note

Supports chaining. You can chain multiple resolvers with a dot (.), for example regex.url_http_https. This applies the regex resolver first and then the url_http_https resolver.

        \"function\": \"devType\",\n        \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"select\", \"elementOptions\" : [] ,\"transformers\": []}]},\n        \"maxLength\": 30,\n        \"default_value\": \"\",\n        \"options\": [\"{value}\"],\n        \"options_params\" : [\n            {\n                \"name\"  : \"value\",\n                \"type\"  : \"sql\",\n                \"value\" : \"SELECT '' as id, '' as name UNION SELECT devType as id, devType as name FROM (SELECT devType FROM Devices UNION SELECT 'Smartphone' UNION SELECT 'Tablet' UNION SELECT 'Laptop' UNION SELECT 'PC' UNION SELECT 'Printer' UNION SELECT 'Server' UNION SELECT 'NAS' UNION SELECT 'Domotic' UNION SELECT 'Game Console' UNION SELECT 'SmartTV' UNION SELECT 'Clock' UNION SELECT 'House Appliance' UNION SELECT 'Phone' UNION SELECT 'AP' UNION SELECT 'Gateway' UNION SELECT 'Firewall' UNION SELECT 'Switch' UNION SELECT 'WLAN' UNION SELECT 'Router' UNION SELECT 'Other') AS all_devices ORDER BY id;\"\n            },\n            {\n                \"name\"  : \"uilang\",\n                \"type\"  : \"setting\",\n                \"value\" : \"UI_LANG\"\n            }\n        ]\n
{\n            \"column\": \"Watched_Value1\",\n            \"css_classes\": \"col-sm-2\",\n            \"show\": true,\n            \"type\": \"threshold\",            \n            \"default_value\":\"\",\n            \"options\": [\n                {\n                    \"maximum\": 199,\n                    \"hexColor\": \"#792D86\"                \n                },\n                {\n                    \"maximum\": 299,\n                    \"hexColor\": \"#5B862D\"\n                },\n                {\n                    \"maximum\": 399,\n                    \"hexColor\": \"#7D862D\"\n                },\n                {\n                    \"maximum\": 499,\n                    \"hexColor\": \"#BF6440\"\n                },\n                {\n                    \"maximum\": 599,\n                    \"hexColor\": \"#D33115\"\n                }\n            ],\n            \"localized\": [\"name\"],\n            \"name\":[{\n                \"language_code\":\"en_us\",\n                \"string\" : \"Status code\"\n                }]\n        },        \n        {\n            \"column\": \"Status\",\n            \"show\": true,\n            \"type\": \"replace\",            \n            \"default_value\":\"\",\n            \"options\": [\n                {\n                    \"equals\": \"watched-not-changed\",\n                    \"replacement\": \"<i class='fa-solid fa-square-check'></i>\"\n                },\n                {\n                    \"equals\": \"watched-changed\",\n                    \"replacement\": \"<i class='fa-solid fa-triangle-exclamation'></i>\"\n                },\n                {\n                    \"equals\": \"new\",\n                    \"replacement\": \"<i class='fa-solid fa-circle-plus'></i>\"\n                }\n            ],\n            \"localized\": [\"name\"],\n            \"name\":[{\n                \"language_code\":\"en_us\",\n                \"string\" : \"Status\"\n                }]\n        },\n        {\n            \"column\": \"Watched_Value3\",\n            \"css_classes\": \"col-sm-1\",\n            \"show\": true,\n            \"type\": \"regex.url_http_https\",            \n            \"default_value\":\"\",\n            \"options\": [\n                {\n                    \"type\": \"regex\",\n                    \"param\": \"([\\\\d.:]+)\"\n                }          \n            ],\n            \"localized\": [\"name\"],\n            \"name\":[{\n                \"language_code\":\"en_us\",\n                \"string\" : \"HTTP/s links\"\n                },\n                {\n                \"language_code\":\"es_es\",\n                \"string\" : \"N/A\"\n                }]\n        }\n
"},{"location":"PLUGINS_DEV_CONFIG/","title":"Plugins Implementation Details","text":"

Plugins provide data to the NetAlertX core, which processes it to detect changes, discover new devices, raise alerts, and apply heuristics.

"},{"location":"PLUGINS_DEV_CONFIG/#overview-plugin-data-flow","title":"Overview: Plugin Data Flow","text":"
  1. Each plugin runs on a defined schedule.
  2. Aligning all plugin schedules is recommended so they execute in the same loop.
  3. During execution, all plugins write their collected data into the CurrentScan table.
  4. After all plugins complete, the CurrentScan table is evaluated to detect new devices, changes, and triggers.

Although plugins run independently, they contribute to the shared CurrentScan table. To inspect its contents, set LOG_LEVEL=trace and check for the log section:

================ CurrentScan table content ================\n
"},{"location":"PLUGINS_DEV_CONFIG/#configjson-lifecycle","title":"config.json Lifecycle","text":"

This section outlines how each plugin\u2019s config.json manifest is read, validated, and used by the core and plugins. It also describes plugin output expectations and the main plugin categories.

Tip

For detailed schema and examples, see the Plugin Development Guide.

"},{"location":"PLUGINS_DEV_CONFIG/#1-loading","title":"1. Loading","text":"
  • On startup, the core loads config.json for each plugin.
  • The file acts as a plugin manifest, defining metadata, runtime configuration, and database mappings.
"},{"location":"PLUGINS_DEV_CONFIG/#2-validation","title":"2. Validation","text":"
  • The core validates required keys (for example, RUN).
  • Missing or invalid entries may be replaced with defaults or cause the plugin to be disabled.
"},{"location":"PLUGINS_DEV_CONFIG/#3-preparation","title":"3. Preparation","text":"
  • Plugin parameters (paths, commands, and options) are prepared for execution.
  • Database mappings (mapped_to_table, database_column_definitions) are parsed to define how data integrates with the main app.
"},{"location":"PLUGINS_DEV_CONFIG/#4-execution","title":"4. Execution","text":"
  • Plugins may run:

  • On a fixed schedule.

  • Once at startup.
  • After a notification or other trigger.
  • The scheduler executes plugins according to their interval.
"},{"location":"PLUGINS_DEV_CONFIG/#5-parsing","title":"5. Parsing","text":"
  • Plugin output must be pipe-delimited (|).
  • The core parses each output line following the Plugin Interface Contract, splitting and mapping fields accordingly.
"},{"location":"PLUGINS_DEV_CONFIG/#6-mapping","title":"6. Mapping","text":"
  • Parsed fields are inserted into the plugin\u2019s Plugins_* table.
  • Data can be mapped into other tables (e.g., Devices, CurrentScan) as defined by:

  • database_column_definitions

  • mapped_to_table

Example: Object_PrimaryID \u2192 devMAC

"},{"location":"PLUGINS_DEV_CONFIG/#6a-plugin-output-contract","title":"6a. Plugin Output Contract","text":"

All plugins must follow the Plugin Interface Contract defined in PLUGINS_DEV.md. Output values are pipe-delimited in a fixed order.
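For reference, a single result line can be assembled by joining the values in the documented column order. The sketch below is a simplified illustration of the format only (real plugins typically use the bundled helper classes to collect and write results):

```python
from datetime import datetime

# Column order: Object_PrimaryID | Object_SecondaryID | DateTime |
# Watched_Value1-4 | Extra | ForeignKey
columns = [
    "https://www.duckduckgo.com",                   # Object_PrimaryID
    "null",                                         # Object_SecondaryID
    datetime.now().strftime("%Y-%m-%d %H:%M:%S"),   # DateTime
    "200",                                          # Watched_Value1 (e.g. status code)
    "0.98",                                         # Watched_Value2 (e.g. response time)
    "null",                                         # Watched_Value3
    "null",                                         # Watched_Value4
    "Best search engine",                           # Extra
    "null",                                         # ForeignKey
]
print("|".join(columns))
```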

"},{"location":"PLUGINS_DEV_CONFIG/#identifiers","title":"Identifiers","text":"
  • Object_PrimaryID and Object_SecondaryID uniquely identify records (for example, MAC|IP).
"},{"location":"PLUGINS_DEV_CONFIG/#watched-values-watched_value14","title":"Watched Values (Watched_Value1\u20134)","text":"
  • Used by the core to detect changes between runs.
  • Changes in these fields can trigger notifications.
"},{"location":"PLUGINS_DEV_CONFIG/#extra-field-extra","title":"Extra Field (Extra)","text":"
  • Optional additional value.
  • Stored in the database but not used for alerts.
"},{"location":"PLUGINS_DEV_CONFIG/#helper-values-helper_value13","title":"Helper Values (Helper_Value1\u20133)","text":"
  • Optional auxiliary data (for display or plugin logic).
  • Stored but not alert-triggering.
"},{"location":"PLUGINS_DEV_CONFIG/#mapping","title":"Mapping","text":"
  • While the output format is flexible, the plugin\u2019s manifest determines the destination and type of each field.
"},{"location":"PLUGINS_DEV_CONFIG/#7-persistence","title":"7. Persistence","text":"
  • Parsed data is upserted into the database.
  • Conflicts are resolved using the combined key: Object_PrimaryID + Object_SecondaryID.
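Conceptually this persistence step is an upsert keyed on the identifier pair. A minimal SQLite sketch of the same idea (table and column handling are simplified assumptions, not the app's actual schema code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE plugin_objects (
           Object_PrimaryID   TEXT,
           Object_SecondaryID TEXT,
           Watched_Value1     TEXT,
           PRIMARY KEY (Object_PrimaryID, Object_SecondaryID)
       )"""
)

def upsert(primary_id, secondary_id, watched1):
    # New (PrimaryID, SecondaryID) pairs are inserted; existing ones are updated
    conn.execute(
        """INSERT INTO plugin_objects VALUES (?, ?, ?)
           ON CONFLICT(Object_PrimaryID, Object_SecondaryID)
           DO UPDATE SET Watched_Value1 = excluded.Watched_Value1""",
        (primary_id, secondary_id, watched1),
    )

upsert("ac:82:ac:82:ac:82", "192.168.1.10", "open")
upsert("ac:82:ac:82:ac:82", "192.168.1.10", "closed")  # updates the existing row
print(conn.execute("SELECT * FROM plugin_objects").fetchall())
```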
"},{"location":"PLUGINS_DEV_CONFIG/#plugin-categories","title":"Plugin Categories","text":"

Plugins fall into several functional categories depending on their purpose and expected outputs.

"},{"location":"PLUGINS_DEV_CONFIG/#1-device-discovery-plugins","title":"1. Device Discovery Plugins","text":"
  • Inputs: None, subnet, or discovery API.
  • Outputs: MAC and IP for new or updated device records in Devices.
  • Mapping: Required \u2013 usually into CurrentScan.
  • Examples: ARPSCAN, NMAPDEV.
"},{"location":"PLUGINS_DEV_CONFIG/#2-device-data-enrichment-plugins","title":"2. Device Data Enrichment Plugins","text":"
  • Inputs: Device identifiers (MAC, IP).
  • Outputs: Additional metadata (for example, open ports or sensors).
  • Mapping: Controlled via manifest definitions.
  • Examples: NMAP, MQTT.
"},{"location":"PLUGINS_DEV_CONFIG/#3-name-resolver-plugins","title":"3. Name Resolver Plugins","text":"
  • Inputs: Device identifiers (MAC, IP, hostname).
  • Outputs: Updated devName and devFQDN.
  • Mapping: Typically none.
  • Note: Adding new resolvers currently requires a core change.
  • Examples: NBTSCAN, NSLOOKUP.
"},{"location":"PLUGINS_DEV_CONFIG/#4-generic-plugins","title":"4. Generic Plugins","text":"
  • Inputs: Custom, based on the plugin logic or script.
  • Outputs: Data displayed under Integrations \u2192 Plugins only.
  • Mapping: Not required.
  • Examples: INTRSPD, custom monitoring scripts.
"},{"location":"PLUGINS_DEV_CONFIG/#5-configuration-only-plugins","title":"5. Configuration-Only Plugins","text":"
  • Inputs/Outputs: None at runtime.
  • Purpose: Used for configuration or maintenance tasks.
  • Examples: MAINT, CSVBCKP.
"},{"location":"PLUGINS_DEV_CONFIG/#post-processing","title":"Post-Processing","text":"

After persistence:

  • The core generates notifications for any watched value changes.
  • The UI updates with new or modified data.
  • Plugins with UI-enabled data display under Integrations \u2192 Plugins.
"},{"location":"PLUGINS_DEV_CONFIG/#summary","title":"Summary","text":"

The lifecycle of a plugin configuration is:

Load \u2192 Validate \u2192 Prepare \u2192 Execute \u2192 Parse \u2192 Map \u2192 Persist \u2192 Post-process

Each plugin must:

  • Follow the output contract.
  • Declare its type and expected output structure.
  • Define mappings and watched values clearly in config.json.
"},{"location":"RANDOM_MAC/","title":"Privacy & Random MAC's","text":"

Some operating systems randomize MAC addresses to improve privacy.

This functionality hides the real MAC of the device and assigns a random MAC when connecting to Wi-Fi networks.

This behavior is especially useful when connecting to unknown Wi-Fi networks, but it provides little benefit on your own or other known networks.

I recommend disabling this on-device functionality when connecting your devices to your own Wi-Fi networks. This way, NetAlertX can identify the device and will not report it as a new device every time iOS or Android randomizes the MAC.

Random MACs are recognized by the characters \"2\", \"6\", \"A\", or \"E\" as the second character in the MAC address. You can exclude specific prefixes from being detected as random MAC addresses via the UI_NOT_RANDOM_MAC setting.
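The check itself is simple: look at the second hex character of the address. A small sketch mirroring the rule above (illustrative only, not the app's exact implementation):

```python
def is_random_mac(mac: str) -> bool:
    # Locally administered (randomized) MACs have 2, 6, A or E
    # as the second character of the first octet
    return mac.strip()[1].upper() in ("2", "6", "A", "E")

print(is_random_mac("26:f1:a2:33:44:55"))  # True - likely randomized
print(is_random_mac("a0:ce:c8:11:22:33"))  # False
```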

"},{"location":"RANDOM_MAC/#windows","title":"Windows","text":"
  • How to Disable MAC Randomization on Windows
"},{"location":"RANDOM_MAC/#ios","title":"IOS","text":"
  • Use private Wi-Fi addresses in iOS 14
"},{"location":"RANDOM_MAC/#android","title":"Android","text":"
  • How to Disable MAC Randomization in Android 10
  • How do I disable random Wi-Fi MAC address on Android 10
"},{"location":"REMOTE_NETWORKS/","title":"Scanning Remote or Inaccessible Networks","text":"

By design, local network scanners such as arp-scan use ARP (Address Resolution Protocol) to map IP addresses to MAC addresses on the local network. Since ARP operates at Layer 2 (Data Link Layer), it typically works only within a single broadcast domain, usually limited to a single router or network segment.

Note

Ping and ARPSCAN use different protocols, so even if you can ping a device, it doesn't mean ARPSCAN can detect it.

To scan multiple locally accessible network segments, add them as subnets according to the subnets documentation. If ARPSCAN is not suitable for your setup, read on.

"},{"location":"REMOTE_NETWORKS/#complex-use-cases","title":"Complex Use Cases","text":"

The following network setups might make some devices undetectable with ARPSCAN. Check the specific setup to understand the cause and find potential workarounds to report on these devices.

"},{"location":"REMOTE_NETWORKS/#wi-fi-extenders","title":"Wi-Fi Extenders","text":"

Wi-Fi extenders typically create a separate network or subnet, which can prevent network scanning tools like arp-scan from detecting devices behind the extender.

Possible workaround: Scan the specific subnet that the extender uses, if it is separate from the main network.

"},{"location":"REMOTE_NETWORKS/#vpns","title":"VPNs","text":"

ARP operates at Layer 2 (Data Link Layer) and works only within a local area network (LAN). VPNs, which operate at Layer 3 (Network Layer), route traffic between networks, preventing ARP requests from discovering devices outside the local network.

VPNs use virtual interfaces (e.g., tun0, tap0) to encapsulate traffic, bypassing ARP-based discovery. Additionally, many VPNs use NAT, which masks individual devices behind a shared IP address.

Possible workaround: Configure the VPN to bridge networks instead of routing to enable ARP, though this depends on the VPN setup and security requirements.

"},{"location":"REMOTE_NETWORKS/#other-workarounds","title":"Other Workarounds","text":"

The following workarounds should work for most complex network setups.

"},{"location":"REMOTE_NETWORKS/#supplementing-plugins","title":"Supplementing Plugins","text":"

You can use supplementary plugins that employ alternate methods. Protocols used by the SNMPDSC or DHCPLSS plugins are widely supported on different routers and can be effective as workarounds. Check the plugins list to find a plugin that works with your router and network setup.

"},{"location":"REMOTE_NETWORKS/#multiple-netalertx-instances","title":"Multiple NetAlertX Instances","text":"

If you have servers in different networks, you can set up separate NetAlertX instances on those subnets and synchronize the results into one instance using the SYNC plugin.

"},{"location":"REMOTE_NETWORKS/#manual-entry","title":"Manual Entry","text":"

If you don't need to discover new devices and only need to report on their status (online, offline, down), you can manually enter devices and check their status using the ICMP plugin, which uses the ping command internally.

For more information on how to add devices manually (or dummy devices), refer to the Device Management documentation.

To create truly dummy devices, you can use a loopback IP address (e.g., 0.0.0.0 or 127.0.0.1) so they appear online.

"},{"location":"REMOTE_NETWORKS/#nmap-and-fake-mac-addresses","title":"NMAP and Fake MAC Addresses","text":"

Scanning remote networks with NMAP is possible (via the NMAPDEV plugin), but since it cannot retrieve the MAC address, you need to enable the NMAPDEV_FAKE_MAC setting. This will generate a fake MAC address based on the IP address, allowing you to track devices. However, this can lead to inconsistencies, especially if the IP address changes or a previously logged device is rediscovered. If this setting is disabled, only the IP address will be discovered, and devices with missing MAC addresses will be skipped.
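One way to picture such a deterministic "fake MAC" is hashing the IP into six octets, so the same IP always yields the same address. The sketch below is purely illustrative and is not necessarily the algorithm the NMAPDEV plugin uses:

```python
import hashlib

def fake_mac_from_ip(ip: str) -> str:
    # Hash the IP and use the first six bytes as MAC octets,
    # so a given IP always maps to the same pseudo-MAC
    digest = hashlib.md5(ip.encode("utf-8")).digest()[:6]
    return ":".join(f"{b:02x}" for b in digest)

print(fake_mac_from_ip("192.168.1.50"))  # stable for a given IP
```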

Check the NMAPDEV plugin for details

"},{"location":"REVERSE_DNS/","title":"Reverse DNS","text":""},{"location":"REVERSE_DNS/#setting-up-better-name-discovery-with-reverse-dns","title":"Setting up better name discovery with Reverse DNS","text":"

If you are running a DNS server, such as AdGuard, set up Private reverse DNS servers for better name resolution on your network. Enabling this setting allows NetAlertX to execute dig and nslookup commands to automatically resolve device names based on their IP addresses.

Tip

Before proceeding, ensure that name resolution plugins are enabled. You can customize how names are cleaned using the NEWDEV_NAME_CLEANUP_REGEX setting. To auto-update Fully Qualified Domain Names (FQDN), enable the REFRESH_FQDN setting.

Example 1: Reverse DNS disabled

jokob@Synology-NAS:/$ nslookup 192.168.1.58 ** server can't find 58.1.168.192.in-addr.arpa: NXDOMAIN

Example 2: Reverse DNS enabled

jokob@Synology-NAS:/$ nslookup 192.168.1.58 58.1.168.192.in-addr.arpa name = jokob-NUC.localdomain.
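You can also verify a reverse lookup from any machine with a few lines of Python (the IP below is just an example):

```python
import socket

ip = "192.168.1.58"
try:
    name, _aliases, _addresses = socket.gethostbyaddr(ip)
    print(f"{ip} resolves to {name}")
except socket.herror:
    print(f"No reverse DNS record for {ip}")
```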

"},{"location":"REVERSE_DNS/#enabling-reverse-dns-in-adguard","title":"Enabling reverse DNS in AdGuard","text":"
  1. Navigate to Settings -> DNS Settings
  2. Locate Private reverse DNS servers
  3. Enter your router IP address, such as 192.168.1.1
  4. Make sure you have Use private reverse DNS resolvers ticked.
  5. Click Apply to save your settings.
"},{"location":"REVERSE_DNS/#specifying-the-dns-in-the-container","title":"Specifying the DNS in the container","text":"

You can specify the DNS server in the docker-compose to improve name resolution on your network.

services:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/jokob-sk/netalertx:latest\"\n...\n    dns:           # specifying the DNS servers used for the container\n      - 10.8.0.1\n      - 10.8.0.17\n
"},{"location":"REVERSE_DNS/#using-a-custom-resolvconf-file","title":"Using a custom resolv.conf file","text":"

You can configure a custom /etc/resolv.conf file in docker-compose.yml and set the nameserver to your LAN DNS server (e.g.: Pi-Hole). See the relevant resolv.conf man entry for details.

"},{"location":"REVERSE_DNS/#docker-composeyml","title":"docker-compose.yml:","text":"
version: \"3\"\nservices:\n  netalertx:\n    container_name: netalertx\n    volumes:\n...\n      - /local_data_dir/config/resolv.conf:/etc/resolv.conf                          # \u26a0 Mapping the /resolv.conf file for better name resolution\n...\n
"},{"location":"REVERSE_DNS/#local_data_dirconfigresolvconf","title":"/local_data_dir/config/resolv.conf:","text":"

The most important below is the nameserver entry (you can add multiple):

nameserver 192.168.178.11\noptions edns0 trust-ad\nsearch example.com\n
"},{"location":"REVERSE_PROXY/","title":"Reverse Proxy Configuration","text":"

Submitted by amazing cvc90 \ud83d\ude4f

Note

There are various NGINX config files for NetAlertX, some for the bare-metal install, currently Debian 12 and Ubuntu 24 (netalertx.conf), and one for the docker container (netalertx.template.conf).

The first one you can find in the respective bare metal installer folder /app/install/\\<system\\>/netalertx.conf. The docker one can be found in the install folder. Map, or use, the one appropriate for your setup.

"},{"location":"REVERSE_PROXY/#nginx-http-configuration-direct-path","title":"NGINX HTTP Configuration (Direct Path)","text":"
  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server {\n     listen 80;\n     server_name netalertx;\n     location / {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n     }\n    }\n
  3. Activate the new website by running the following command:

nginx -s reload or systemctl restart nginx

  4. Check your config with nginx -t. If there are any issues, it will tell you.

  5. Once NGINX restarts, you should be able to access the proxy website at http://netalertx/

"},{"location":"REVERSE_PROXY/#nginx-http-configuration-sub-path","title":"NGINX HTTP Configuration (Sub Path)","text":"
  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server {\n     listen 80;\n     server_name netalertx;\n     location ^~ /netalertx/ {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n          proxy_redirect ~^/(.*)$ /netalertx/$1;\n          rewrite ^/netalertx/?(.*)$ /$1 break;\n     }\n    }\n
  3. Check your config with nginx -t. If there are any issues, it will tell you.

  4. Activate the new website by running the following command:

nginx -s reload or systemctl restart nginx

  5. Once NGINX restarts, you should be able to access the proxy website at http://netalertx/netalertx/

"},{"location":"REVERSE_PROXY/#nginx-http-configuration-sub-path-with-module-ngx_http_sub_module","title":"NGINX HTTP Configuration (Sub Path) with module ngx_http_sub_module","text":"
  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server {\n     listen 80;\n     server_name netalertx;\n     location ^~ /netalertx/ {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n          proxy_redirect ~^/(.*)$ /netalertx/$1;\n          rewrite ^/netalertx/?(.*)$ /$1 break;\n      sub_filter_once off;\n      sub_filter_types *;\n      sub_filter 'href=\"/' 'href=\"/netalertx/';\n      sub_filter '(?>$host)/css' '/netalertx/css';\n      sub_filter '(?>$host)/js'  '/netalertx/js';\n      sub_filter '/img' '/netalertx/img';\n      sub_filter '/lib' '/netalertx/lib';\n      sub_filter '/php' '/netalertx/php';\n     }\n    }\n
  3. Check your config with nginx -t. If there are any issues, it will tell you.

  4. Activate the new website by running the following command:

nginx -s reload or systemctl restart nginx

  5. Once NGINX restarts, you should be able to access the proxy website at http://netalertx/netalertx/

NGINX HTTPS Configuration (Direct Path)

  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server {\n     listen 443 ssl;\n     server_name netalertx;\n     ssl_certificate /etc/ssl/certs/netalertx.pem;\n     ssl_certificate_key /etc/ssl/private/netalertx.key;\n     location / {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n     }\n    }\n
  3. Check your config with nginx -t. If there are any issues, it will tell you.

  4. Activate the new website by running the following command:

nginx -s reload or systemctl restart nginx

  5. Once NGINX restarts, you should be able to access the proxy website at https://netalertx/

NGINX HTTPS Configuration (Sub Path)

  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server {\n     listen 443 ssl;\n     server_name netalertx;\n     ssl_certificate /etc/ssl/certs/netalertx.pem;\n     ssl_certificate_key /etc/ssl/private/netalertx.key;\n     location ^~ /netalertx/ {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n          proxy_redirect ~^/(.*)$ /netalertx/$1;\n          rewrite ^/netalertx/?(.*)$ /$1 break;\n     }\n    }\n
  3. Check your config with nginx -t. If there are any issues, it will tell you.

  4. Activate the new website by running the following command:

nginx -s reload or systemctl restart nginx

  5. Once NGINX restarts, you should be able to access the proxy website at https://netalertx/netalertx/

"},{"location":"REVERSE_PROXY/#nginx-https-configuration-sub-path-with-module-ngx_http_sub_module","title":"NGINX HTTPS Configuration (Sub Path) with module ngx_http_sub_module","text":"
  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server {\n     listen 443 ssl;\n     server_name netalertx;\n     ssl_certificate /etc/ssl/certs/netalertx.pem;\n     ssl_certificate_key /etc/ssl/private/netalertx.key;\n     location ^~ /netalertx/ {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n          proxy_redirect ~^/(.*)$ /netalertx/$1;\n          rewrite ^/netalertx/?(.*)$ /$1 break;\n      sub_filter_once off;\n      sub_filter_types *;\n      sub_filter 'href=\"/' 'href=\"/netalertx/';\n      sub_filter '(?>$host)/css' '/netalertx/css';\n      sub_filter '(?>$host)/js'  '/netalertx/js';\n      sub_filter '/img' '/netalertx/img';\n      sub_filter '/lib' '/netalertx/lib';\n      sub_filter '/php' '/netalertx/php';\n     }\n    }\n
  3. Check your config with nginx -t. If there are any issues, it will tell you.

  4. Activate the new website by running the following command:

nginx -s reload or systemctl restart nginx

  5. Once NGINX restarts, you should be able to access the proxy website at https://netalertx/netalertx/

"},{"location":"REVERSE_PROXY/#apache-http-configuration-direct-path","title":"Apache HTTP Configuration (Direct Path)","text":"
  1. On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

  2. In this file, paste the following code:

    <VirtualHost *:80>\n         ServerName netalertx\n         ProxyPreserveHost On\n         ProxyPass / http://localhost:20211/\n         ProxyPassReverse / http://localhost:20211/\n    </VirtualHost>\n
  3. Check your config with httpd -t (or apache2ctl -t on Debian/Ubuntu). If there are any issues, it will tell you.

  4. Activate the new website by running the following command:

a2ensite netalertx or service apache2 reload

  5. Once Apache restarts, you should be able to access the proxy website at http://netalertx/

"},{"location":"REVERSE_PROXY/#apache-http-configuration-sub-path","title":"Apache HTTP Configuration (Sub Path)","text":"
  1. On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

  2. In this file, paste the following code:

    <VirtualHost *:80>\n         ServerName netalertx\n         ProxyPreserveHost On\n         <Location /netalertx/>\n               ProxyPass http://localhost:20211/\n               ProxyPassReverse http://localhost:20211/\n         </Location>\n    </VirtualHost>\n
  3. Check your config with httpd -t (or apache2ctl -t on Debian/Ubuntu). If there are any issues, it will tell you.

  4. Activate the new website by running the following command:

a2ensite netalertx or service apache2 reload

  5. Once Apache restarts, you should be able to access the proxy website at http://netalertx/netalertx/

"},{"location":"REVERSE_PROXY/#apache-https-configuration-direct-path","title":"Apache HTTPS Configuration (Direct Path)","text":"
  1. On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

  2. In this file, paste the following code:

    <VirtualHost *:443>\n         ServerName netalertx\n         SSLEngine On\n         SSLCertificateFile /etc/ssl/certs/netalertx.pem\n         SSLCertificateKeyFile /etc/ssl/private/netalertx.key\n         ProxyPreserveHost On\n         ProxyPass / http://localhost:20211/\n         ProxyPassReverse / http://localhost:20211/\n    </VirtualHost>\n
  3. Check your config with httpd -t (or apache2ctl -t on Debian/Ubuntu). If there are any issues, it will tell you.

  4. Activate the new website by running the following command:

    a2ensite netalertx or service apache2 reload

  5. Once Apache restarts, you should be able to access the proxy website at https://netalertx/

"},{"location":"REVERSE_PROXY/#apache-https-configuration-sub-path","title":"Apache HTTPS Configuration (Sub Path)","text":"
  1. On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

  2. In this file, paste the following code:

    <VirtualHost *:443>\n        ServerName netalertx\n        SSLEngine On\n        SSLCertificateFile /etc/ssl/certs/netalertx.pem\n        SSLCertificateKeyFile /etc/ssl/private/netalertx.key\n        ProxyPreserveHost On\n        <Location /netalertx/>\n              ProxyPass http://localhost:20211/\n              ProxyPassReverse http://localhost:20211/\n        </Location>\n    </VirtualHost>\n
  3. Check your config with httpd -t (or apache2ctl -t on Debian/Ubuntu). If there are any issues, it will tell you.

  4. Activate the new website by running the following command:

a2ensite netalertx or service apache2 reload

  5. Once Apache restarts, you should be able to access the proxy website at https://netalertx/netalertx/

"},{"location":"REVERSE_PROXY/#reverse-proxy-example-by-using-linuxservers-swag-container","title":"Reverse proxy example by using LinuxServer's SWAG container.","text":"

Submitted by s33d1ing. \ud83d\ude4f

"},{"location":"REVERSE_PROXY/#linuxserverswag","title":"linuxserver/swag","text":"

In the SWAG container create /config/nginx/proxy-confs/netalertx.subfolder.conf with the following contents:

## Version 2023/02/05\n# make sure that your netalertx container is named netalertx\n# netalertx does not require a base url setting\n\n# Since NetAlertX uses a Host network, you may need to use the IP address of the system running NetAlertX for $upstream_app.\n\nlocation /netalertx {\n    return 301 $scheme://$host/netalertx/;\n}\n\nlocation ^~ /netalertx/ {\n    # enable the next two lines for http auth\n    #auth_basic \"Restricted\";\n    #auth_basic_user_file /config/nginx/.htpasswd;\n\n    # enable for ldap auth (requires ldap-server.conf in the server block)\n    #include /config/nginx/ldap-location.conf;\n\n    # enable for Authelia (requires authelia-server.conf in the server block)\n    #include /config/nginx/authelia-location.conf;\n\n    # enable for Authentik (requires authentik-server.conf in the server block)\n    #include /config/nginx/authentik-location.conf;\n\n    include /config/nginx/proxy.conf;\n    include /config/nginx/resolver.conf;\n\n    set $upstream_app netalertx;\n    set $upstream_port 20211;\n    set $upstream_proto http;\n\n    proxy_pass $upstream_proto://$upstream_app:$upstream_port;\n    proxy_set_header Accept-Encoding \"\";\n\n    proxy_redirect ~^/(.*)$ /netalertx/$1;\n    rewrite ^/netalertx/?(.*)$ /$1 break;\n\n    sub_filter_once off;\n    sub_filter_types *;\n\n    sub_filter 'href=\"/' 'href=\"/netalertx/';\n\n    sub_filter '(?>$host)/css' '/netalertx/css';\n    sub_filter '(?>$host)/js'  '/netalertx/js';\n\n    sub_filter '/img' '/netalertx/img';\n    sub_filter '/lib' '/netalertx/lib';\n    sub_filter '/php' '/netalertx/php';\n}\n

"},{"location":"REVERSE_PROXY/#traefik","title":"Traefik","text":"

Submitted by Isegrimm \ud83d\ude4f (based on this discussion)

Assuming the user already has a working Traefik setup, this is what's needed to make NetAlertX work at a URL like www.domain.com/netalertx/.

Note: Everything in these configs assumes 'www.domain.com' as your domainname and 'section31' as an arbitrary name for your certificate setup. You will have to substitute these with your own.

Also, I use the prefix 'netalertx'. If you want to use another prefix, change it in these files: dynamic.toml and default.

Content of my yaml-file (this is the generic Traefik config, which defines which ports to listen on, redirect http to https and sets up the certificate process). It also contains Authelia, which I use for authentication. This part contains nothing specific to NetAlertX.

version: '3.8'\n\nservices:\n  traefik:\n    image: traefik\n    container_name: traefik\n    command:\n      - \"--api=true\"\n      - \"--api.insecure=true\"\n      - \"--api.dashboard=true\"\n      - \"--entrypoints.web.address=:80\"\n      - \"--entrypoints.web.http.redirections.entryPoint.to=websecure\"\n      - \"--entrypoints.web.http.redirections.entryPoint.scheme=https\"\n      - \"--entrypoints.websecure.address=:443\"\n      - \"--providers.file.filename=/traefik-config/dynamic.toml\"\n      - \"--providers.file.watch=true\"\n      - \"--log.level=ERROR\"\n      - \"--certificatesresolvers.section31.acme.email=postmaster@domain.com\"\n      - \"--certificatesresolvers.section31.acme.storage=/traefik-config/acme.json\"\n      - \"--certificatesresolvers.section31.acme.httpchallenge=true\"\n      - \"--certificatesresolvers.section31.acme.httpchallenge.entrypoint=web\"\n    ports:\n      - \"80:80\"\n      - \"443:443\"\n      - \"8080:8080\"\n    volumes:\n      - \"/var/run/docker.sock:/var/run/docker.sock:ro\"\n      - /appl/docker/traefik/config:/traefik-config\n    depends_on:\n      - authelia\n    restart: unless-stopped\n  authelia:\n    container_name: authelia\n    image: authelia/authelia:latest\n    ports:\n      - \"9091:9091\"\n    volumes:\n      - /appl/docker/authelia:/config\n    restart: unless-stopped\n

Snippet of the dynamic.toml file (referenced in the yml file above) that defines the config for NetAlertX. The following are self-defined keywords; everything else is Traefik keywords: netalertx-router, netalertx-service, auth, netalertx-stripprefix.

[http.routers]\n  [http.routers.netalertx-router]\n    entryPoints = [\"websecure\"]\n    rule = \"Host(`www.domain.com`) && PathPrefix(`/netalertx`)\"\n    service = \"netalertx-service\"\n    middlewares = \"auth,netalertx-stripprefix\"\n    [http.routers.netalertx-router.tls]\n       certResolver = \"section31\"\n       [[http.routers.netalertx-router.tls.domains]]\n         main = \"www.domain.com\"\n\n[http.services]\n  [http.services.netalertx-service]\n    [[http.services.netalertx-service.loadBalancer.servers]]\n      url = \"http://internal-ip-address:20211/\"\n\n[http.middlewares]\n  [http.middlewares.auth.forwardAuth]\n    address = \"http://authelia:9091/api/verify?rd=https://www.domain.com/authelia/\"\n    trustForwardHeader = true\n    authResponseHeaders = [\"Remote-User\", \"Remote-Groups\", \"Remote-Name\", \"Remote-Email\"]\n  [http.middlewares.netalertx-stripprefix.stripprefix]\n    prefixes = \"/netalertx\"\n    forceSlash = false\n\n

To make NetAlertX work with this setup, I modified the default file at /etc/nginx/sites-available/default in the Docker container: I copied it to my local filesystem, added the changes as specified by cvc90, and mounted the new file into the container, overwriting the original one. By mapping the file instead of changing it in place, the changes persist when an updated Docker image is pulled. The downside is that upstream changes to the default file are not picked up either, so I only use this as a temporary solution until the Docker image is updated with this change.

Default-file:

server {\n    listen 80 default_server;\n    root /var/www/html;\n    index index.php;\n    #rewrite /netalertx/(.*) / permanent;\n    add_header X-Forwarded-Prefix \"/netalertx\" always;\n    proxy_set_header X-Forwarded-Prefix \"/netalertx\";\n\n  location ~* \\.php$ {\n    fastcgi_pass unix:/run/php/php8.2-fpm.sock;\n    include         fastcgi_params;\n    fastcgi_param   SCRIPT_FILENAME    $document_root$fastcgi_script_name;\n    fastcgi_param   SCRIPT_NAME        $fastcgi_script_name;\n    fastcgi_connect_timeout 75;\n          fastcgi_send_timeout 600;\n          fastcgi_read_timeout 600;\n  }\n}\n

Mapping the updated file (on the local filesystem at /appl/docker/netalertx/default) into the docker container:

...\n  volumes:\n    - /appl/docker/netalertx/default:/etc/nginx/sites-available/default\n...\n
"},{"location":"SECURITY/","title":"Security Considerations","text":""},{"location":"SECURITY/#responsibility-disclaimer","title":"\ud83e\udded Responsibility Disclaimer","text":"

NetAlertX provides powerful tools for network scanning, presence detection, and automation. However, it is up to you\u2014the deployer\u2014to ensure that your instance is properly secured.

This includes (but is not limited to):
  • Controlling who has access to the UI and API
  • Following network and container security best practices
  • Running NetAlertX only on networks where you have legal authorization
  • Keeping your deployment up to date with the latest patches

NetAlertX is not responsible for misuse, misconfiguration, or insecure deployments. Always test and secure your setup before exposing it to the outside world.

"},{"location":"SECURITY/#securing-your-netalertx-instance","title":"\ud83d\udd10 Securing Your NetAlertX Instance","text":"

NetAlertX is a powerful network scanning and automation framework. With that power comes responsibility. It is your responsibility to secure your deployment, especially if you're running it outside a trusted local environment.

"},{"location":"SECURITY/#tldr-key-security-recommendations","title":"\u26a0\ufe0f TL;DR \u2013 Key Security Recommendations","text":"
  • \u2705 NEVER expose NetAlertX directly to the internet without protection
  • \u2705 Use a VPN or Tailscale to access remotely
  • \u2705 Enable password protection for the web UI
  • \u2705 Harden your container environment (e.g., no unnecessary privileges)
  • \u2705 Use firewalls and IP whitelisting
  • \u2705 Keep the software updated
  • \u2705 Limit the scope of plugins and API keys
"},{"location":"SECURITY/#access-control-with-vpn-or-tailscale","title":"\ud83d\udd17 Access Control with VPN (or Tailscale)","text":"

NetAlertX is designed to be run on private LANs, not the open internet.

Recommended: Use a VPN to access NetAlertX from remote locations.

"},{"location":"SECURITY/#tailscale-easy-vpn-alternative","title":"\u2705 Tailscale (Easy VPN Alternative)","text":"

Tailscale sets up a private mesh network between your devices. It's fast to configure and ideal for NetAlertX. \ud83d\udc49 Get started with Tailscale

"},{"location":"SECURITY/#web-ui-password-protection","title":"\ud83d\udd11 Web UI Password Protection","text":"

By default, NetAlertX does not require login. Before exposing the UI in any way:

  1. Enable password protection: SETPWD_enable_password=true SETPWD_password=your_secure_password

  2. Passwords are stored as SHA256 hashes

  3. Default password (if not changed): 123456 \u2014 change it ASAP!

To disable authenticated login, set SETPWD_enable_password=false in app.conf
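Since passwords are stored as SHA256 hashes (point 2 above), it can be handy to compute a hash yourself, for example to compare against what the app has stored. This is a generic shell illustration only (the password below is a placeholder), not a required configuration step:

echo -n 'your_secure_password' | sha256sum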

"},{"location":"SECURITY/#additional-security-measures","title":"\ud83d\udd25 Additional Security Measures","text":"
  • Firewall / Network Rules Restrict UI/API access to trusted IPs only.

  • Limit Docker Capabilities Avoid --privileged. Use --cap-add=NET_RAW and others only if required by your scan method.

  • Keep NetAlertX Updated Regular updates contain bug fixes and security patches.

  • Plugin Permissions Disable unused plugins. Only install from trusted sources.

  • Use Read-Only API Keys When integrating NetAlertX with other tools, scope keys tightly.
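As a concrete illustration of the firewall recommendation above, the rules below restrict access to the default web UI port to a single trusted subnet. This is only a sketch assuming ufw on the Docker host, the default port 20211, and a 192.168.1.0/24 LAN; adjust the subnet, port, and tooling to your environment (ufw evaluates rules in the order they are added, so add the allow rule first):

sudo ufw allow from 192.168.1.0/24 to any port 20211 proto tcp\nsudo ufw deny 20211/tcp\n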

"},{"location":"SECURITY/#docker-hardening-tips","title":"\ud83e\uddf1 Docker Hardening Tips","text":"
  • Use read-only mount options where possible (:ro)
  • Avoid running as root unless absolutely necessary
  • Consider using docker scan or other container image vulnerability scanners
  • Run with --network host only on trusted networks and only if needed for ARP-based scans
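A minimal docker-compose sketch combining the tips above is shown below. The host path /opt/netalertx is a placeholder and the exact capability list depends on your scan methods; treat this as an illustration rather than the canonical configuration:

services:\n  netalertx:\n    image: ghcr.io/jokob-sk/netalertx:latest\n    network_mode: host               # only on trusted networks, only if needed for ARP-based scans\n    read_only: true                  # keep the application filesystem immutable\n    cap_drop:\n      - ALL                          # drop every capability first ...\n    cap_add:\n      - NET_RAW                      # ... then re-add only what scanning requires\n      - NET_ADMIN\n      - NET_BIND_SERVICE\n    volumes:\n      - /opt/netalertx:/data\n      - /etc/localtime:/etc/localtime:ro   # read-only mount\n    restart: unless-stopped\n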
"},{"location":"SECURITY/#responsible-disclosure","title":"\ud83d\udce3 Responsible Disclosure","text":"

If you discover a vulnerability or security concern, please report it privately to:

\ud83d\udce7 jokob@duck.com

We take security seriously and will work to patch confirmed issues promptly. Your help in responsible disclosure is appreciated!

By following these recommendations, you can ensure your NetAlertX deployment is both powerful and secure.

"},{"location":"SECURITY_FEATURES/","title":"NetAlertX Security: A Layered Defense","text":"

Your network security monitor has the \"keys to the kingdom,\" making it a prime target for attackers. If it gets compromised, the game is over.

NetAlertX is engineered from the ground up to prevent this. It's not just an app; it's a purpose-built security appliance. Its core design is built on a zero-trust philosophy, which is a modern way of saying we assume a breach will happen and plan for it. This isn't a single \"lock on the door\"; it's a \"defense-in-depth\" strategy, more like a medieval castle with a moat, high walls, and guards at every door.

Here\u2019s a breakdown of the defensive layers you get, right out of the box using the default configuration.

"},{"location":"SECURITY_FEATURES/#feature-1-the-digital-concrete-filesystem","title":"Feature 1: The \"Digital Concrete\" Filesystem","text":"

Methodology: The core application and its system files are treated as immutable. Once built, the app's code is \"set in concrete,\" preventing attackers from modifying it or planting malware.

  • Immutable Filesystem: At runtime, the container's entire filesystem is set to read_only: true. The application code, system libraries, and all other files are literally frozen. This single control neutralizes a massive range of common attacks.

  • \"Ownership-as-a-Lock\" Pattern: During the build, all system files are assigned to a special readonly user. This user has no login shell and no power to write to any files, even its own. It\u2019s a clever, defense-in-depth locking mechanism.

  • Data Segregation: All user-specific data (like configurations and the device database) is stored completely outside the container in Docker volumes. The application is disposable; the data is persistent.

What's this mean to you: Even if an attacker gets in, they cannot modify the application code or plant malware. It's like the app is set in digital concrete.

"},{"location":"SECURITY_FEATURES/#feature-2-surgical-keycard-only-access","title":"Feature 2: Surgical, \"Keycard-Only\" Access","text":"

Methodology: The principle of least privilege is strictly enforced. Every process gets only the absolute minimum set of permissions needed for its specific job.

  • Non-Privileged Execution: The entire NetAlertX stack runs as a dedicated, low-power, non-root user (netalertx). No \"god mode\" privileges are available to the application.

  • Kernel-Level Capability Revocation: The container is launched with cap_drop: - ALL, which tells the Linux kernel to revoke all \"root-like\" special powers.

  • Binary-Specific Privileges (setcap): This is the \"keycard\" metaphor in action. After revoking all powers, the system uses setcap to grant specific, necessary permissions only to the binaries that absolutely require them (like nmap and arp-scan). This means that even if an attacker compromises the web server, they can't start scanning the network. The web server's \"keycard\" doesn't open the \"scanning\" door.
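As an illustration of this pattern, granting a file capability to a single binary and verifying it looks roughly like the commands below. The exact capability sets and binary paths used in the hardened image may differ, and in the real image this happens at build time rather than at runtime:

setcap cap_net_raw,cap_net_admin+eip /usr/bin/nmap    # grant only the network capabilities this binary needs\ngetcap /usr/bin/nmap                                  # verify which capabilities are attached to the file\n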

What's this mean to you: A security breach is firewalled. An attacker who gets into the web UI does not have the \"keycard\" to start scanning your network or take over the system. The breach is contained.

"},{"location":"SECURITY_FEATURES/#feature-3-attack-surface-amputation","title":"Feature 3: Attack Surface \"Amputation\"","text":"

Methodology: The potential attack surface is aggressively minimized by removing every non-essential tool an attacker would want to use.

  • Package Manager Removal: The hardened build stage explicitly deletes the Alpine package manager (apk del apk-tools). This makes it impossible for an attacker to simply apk add their malicious toolkit.

  • sudo Neutralization: All sudo configurations are removed, and the /usr/bin/sudo command is replaced with a non-functional shim. Any attempt to escalate privileges this way will fail.

  • Build Toolchain Elimination: The Dockerfile uses a multi-stage build. The initial \"builder\" stage, which contains all the powerful compilers (gcc) and development tools, is completely discarded. The final production image is lean and contains no build tools.

  • Minimal User & Group Files: The hardened stage scrubs the system's passwd and group files, removing all default system users to minimize potential avenues for privilege escalation.

What's this mean to you: An attacker who breaks in finds themselves in an empty room with no tools. They have no sudo to get more power, no package manager to download weapons, and no compilers to build new ones.

"},{"location":"SECURITY_FEATURES/#feature-4-self-cleaning-writable-areas","title":"Feature 4: \"Self-Cleaning\" Writable Areas","text":"

Methodology: All writable locations are treated as untrusted, temporary, and non-executable by default.

  • In-Memory Volatile Storage: The docker-compose.yml configuration maps all temporary directories (e.g., /tmp/log, /tmp/api, /tmp) to in-memory tmpfs filesystems. They do not exist on the host's disk.

  • Volatile Data: Because these locations exist only in RAM, their contents are instantly and irrevocably erased when the container is stopped. This provides a \"self-cleaning\" mechanism that purges any attacker-dropped files or payloads on every single restart.

  • Secure Mount Flags: These in-memory mounts are configured with the noexec flag. This is a critical security control: it prohibits the execution of any binary or script from a location that is writable.

What's this mean to you: Any malicious file an attacker does manage to drop is written in invisible, non-permanent ink. The file is written to RAM, not disk, so it vaporizes the instant you restart the container. Even worse for them, the noexec flag means they can't even run the file in the first place.
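The Synology guide later in these docs shows the corresponding docker-compose line; the relevant part is the in-memory, non-executable mount of /tmp:

tmpfs:\n  - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n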

"},{"location":"SECURITY_FEATURES/#feature-5-built-in-resource-guardrails","title":"Feature 5: Built-in Resource Guardrails","text":"

Methodology: The container is constrained by resource limits to function as a \"good citizen\" on the host system. This prevents a compromised or runaway process from consuming excessive resources, a common vector for Denial of Service (DoS) attacks.

  • Process Limiting: The docker-compose.yml defines a pids_limit: 512. This directly mitigates \"fork bomb\" attacks, where a process attempts to crash the host by recursively spawning thousands of new processes.

  • Memory & CPU Limits: The configuration file defines strict resource limits to prevent any single process from exhausting the host's available system resources.

What's this mean to you: NetAlertX is a \"good neighbor\" and can't be used to crash your host machine. Even if a process is compromised, it's in a digital straitjacket and cannot pull a \"denial of service\" attack by hogging all your CPU or memory.
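In docker-compose terms, such guardrails are typically expressed as shown below. The pids_limit value matches the one mentioned above; the memory and CPU figures are placeholders for illustration, not the project's shipped values:

services:\n  netalertx:\n    pids_limit: 512     # cap the number of processes (mitigates fork bombs)\n    mem_limit: 512m     # example memory ceiling\n    cpus: 1.0           # example CPU ceiling\n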

"},{"location":"SECURITY_FEATURES/#feature-6-the-pre-flight-self-check","title":"Feature 6: The \"Pre-Flight\" Self-Check","text":"

Methodology: Before any services start, NetAlertX runs a comprehensive \"pre-flight\" check to ensure its own security and configuration are sound. It's like a built-in auditor who verifies its own defenses.

  • Active Self-Diagnosis: On every single boot, NetAlertX runs a series of startup pre-checks\u2014and it's fast. The entire self-check process typically completes in less than a second, letting you get to the web UI in about three seconds from startup.

  • Validates Its Own Security: These checks actively inspect the other security features. For example, check-0-permissions.sh validates that all the \"Digital Concrete\" files are locked down and all the \"Self-Cleaning\" areas are writable, just as they should be. It also checks that the correct netalertx user is running the show, not root.

  • Catches Misconfigurations: This system acts as a \"safety inspector\" that catches misconfigurations before they can become security holes. If you've made a mistake in your configuration (like a bad folder permission or incorrect network mode), NetAlertX will tell you in the logs why it can't start, rather than just failing silently.

What's this mean to you: The system is self-aware and checks its own work. You get instant feedback if a setting is wrong, and you get peace of mind on every single boot knowing all these security layers are active and verified, all in about one second.

"},{"location":"SECURITY_FEATURES/#conclusion-security-by-default","title":"Conclusion: Security by Default","text":"

No single security control is a silver bullet. The robust security posture of NetAlertX is achieved through defense in depth, layering these methodologies.

An adversary must not only gain initial access but must also find a way to write a payload to a non-executable, in-memory location, without access to any standard system tools, sudo, or a package manager. And they must do this while operating as an unprivileged user in a resource-limited environment where the application code is immutable and actively checks its own integrity on every boot.

"},{"location":"SESSION_INFO/","title":"Sessions Section \u2013 Device View","text":"

The Sessions Section shows a device\u2019s connection history. All data is automatically detected and cannot be edited.

"},{"location":"SESSION_INFO/#key-fields","title":"Key Fields","text":"Field Description Editable? First Connection The first time the device was detected on the network. \u274c Auto-detected Last Connection The most recent time the device was online. \u274c Auto-detected"},{"location":"SESSION_INFO/#how-session-information-works","title":"How Session Information Works","text":""},{"location":"SESSION_INFO/#1-detecting-new-devices","title":"1. Detecting New Devices","text":"
  • New devices are automatically detected when they first appear on the network.
  • A New Device record is created, capturing the MAC, IP, vendor, and detection time.
"},{"location":"SESSION_INFO/#2-recording-connection-sessions","title":"2. Recording Connection Sessions","text":"
  • Every time a device connects, a session entry is created.
  • Captured details include:

  • Connection type (wired or wireless)

  • Connection time
  • Device details (MAC, IP, vendor)
"},{"location":"SESSION_INFO/#3-handling-missing-or-conflicting-data","title":"3. Handling Missing or Conflicting Data","text":"
  • Triggers: Devices are flagged when session data is incomplete, inconsistent, or conflicting. Examples include:

  • Missing first or last connection timestamps

  • Overlapping session records
  • Sessions showing a device as connected and disconnected at the same time

  • System response:

  • Automatically highlights affected devices in the Sessions Section.

  • Attempts to infer missing information from available data, such as:

    • Estimating first or last connection times from nearby session events
    • Correcting overlapping session periods
    • Reconciling conflicting connection statuses
  • User impact:

  • Users do not need to manually fix session data.

  • The system ensures the device\u2019s connection history remains as accurate as possible for monitoring and reporting.
"},{"location":"SESSION_INFO/#4-updating-sessions","title":"4. Updating Sessions","text":"
  • Reconnect: Updates session with the new connection timestamp.
  • Disconnect: Records disconnection time and marks the device as offline.

This session information feeds directly into Monitoring \u2192 Presence, providing a live view of which devices are currently online.

"},{"location":"SETTINGS_SYSTEM/","title":"Settings","text":""},{"location":"SETTINGS_SYSTEM/#setting-system","title":"\u2699 Setting system","text":"

This is an explanation of how settings are handled, intended for anyone thinking about writing their own plugin or contributing to the project.

If you are a user of the app, settings have a detailed description in the Settings section of the app. Open an issue if you'd like to clarify any of the settings.

"},{"location":"SETTINGS_SYSTEM/#data-storage","title":"\ud83d\udee2 Data storage","text":"

The source of truth for user-defined values is the app.conf file. Editing the file makes the App overwrite values in the Settings database table and in the table_settings.json file.

"},{"location":"SETTINGS_SYSTEM/#settings-database-table","title":"Settings database table","text":"

The Settings database table contains settings for App run purposes. The table is recreated every time the App restarts. The settings are loaded from the source-of-truth, that is the app.conf file. A high-level overview on the database structure can be found in the database documentation.

"},{"location":"SETTINGS_SYSTEM/#table_settingsjson","title":"table_settings.json","text":"

This is the API endpoint that reflects the state of the Settings database table. Settings can be accessed with the:

  • getSetting(key) JavaScript method

The JSON file is also cached in the browser's client-side local storage.

"},{"location":"SETTINGS_SYSTEM/#appconf","title":"app.conf","text":"

Note

This is the source of truth for settings. User-defined values in this file always override default values specified in the Plugin definition.

The App generates two app.conf entries for every setting (since version 23.8+). One entry is the setting value, the second is the __metadata associated with the setting. This __metadata entry contains the full setting definition in JSON format. It is currently unused, but is intended to be used in the future to extend the Settings system.

"},{"location":"SETTINGS_SYSTEM/#plugin-settings","title":"Plugin settings","text":"

Note

This is the preferred way of adding settings going forward. I'll likely be migrating all app settings into plugin-based settings.

Plugin settings are loaded dynamically from the config.json of individual plugins. If a setting isn't defined in the app.conf file, it is initialized via the default_value property of a setting from the config.json file. Check the Plugins documentation, section \u2699 Setting object structure for details on the structure of the setting.

"},{"location":"SETTINGS_SYSTEM/#settings-process-flow","title":"Settings Process flow","text":"

The process flow is mostly managed by the initialise.py file.

The script is responsible for reading user-defined values from a configuration file (app.conf), initializing settings, and importing them into a database. It also handles plugins and their configurations.

Here's a high-level description of the code:

  1. Function Definitions:
  • ccd: This function is used to handle user-defined settings and configurations. It takes several parameters related to the setting's name, default value, input type, options, group, and more. It saves the settings and their metadata in different lists (conf.mySettingsSQLsafe and conf.mySettings).
  • importConfigs: This function is the main entry point of the script. It imports user settings from a configuration file, processes them, and saves them to the database.
  • read_config_file: This function reads the configuration file (app.conf) and returns a dictionary containing the key-value pairs from the file.

  2. Importing Configuration and Initializing Settings:
  • The importConfigs function starts by checking the modification time of the configuration file to determine if it needs to be re-imported. If the file has not been modified since the last import, the function skips the import process.
  • The function reads the configuration file using the read_config_file function, which returns a dictionary of settings.
  • The script then initializes various user-defined settings using the ccd function, based on the values read from the configuration file. These settings are categorized into groups such as \"General,\" \"Email,\" \"Webhooks,\" \"Apprise,\" and more.

  3. Plugin Handling:
  • The script loads and handles plugins dynamically. It retrieves plugin configurations and iterates through each plugin.
  • For each plugin, it extracts the prefix and settings related to that plugin and processes them similarly to other user-defined settings.
  • It also handles scheduling for plugins with specific RUN_SCHD settings.

  4. Saving Settings to the Database:
  • The script clears the existing settings in the database and inserts the updated settings into the database using SQL queries.

  5. Updating the API and Performing Cleanup:
  • After importing the configurations, the script updates the API to reflect the changes in the settings.
  • It saves the current timestamp to determine the next import time.
  • Finally, it logs the successful import of the new configuration.
"},{"location":"SMTP/","title":"\ud83d\udce7 SMTP server guides","text":"

The SMTP plugin supports any SMTP server. Here are some commonly used services to help speed up your configuration.

Note

If you are using a self-hosted SMTP server, SSH into the container and verify (e.g. via ping) that your server is reachable from within the NetAlertX container. See also how to SSH into the container if you are running it as a Home Assistant addon.

"},{"location":"SMTP/#gmail","title":"Gmail","text":"
  1. Create an app password by following the instructions from Google; you need to enable 2FA for this to work. https://support.google.com/accounts/answer/185833

  2. Specify the following settings:

    SMTP_RUN='on_notification'\n    SMTP_SKIP_TLS=True\n    SMTP_FORCE_SSL=True \n    SMTP_PORT=465\n    SMTP_SERVER='smtp.gmail.com'\n    SMTP_PASS='16-digit passcode from google'\n    SMTP_REPORT_TO='some_target_email@gmail.com'\n
"},{"location":"SMTP/#brevo","title":"Brevo","text":"

Brevo allows 300 free emails per day as of the time of writing.

  1. Create an account on Brevo: https://www.brevo.com/free-smtp-server/
  2. Click your name -> SMTP & API
  3. Click Generate a new SMTP key
  4. Save the details and fill in the NetAlertX settings as below.
SMTP_SERVER='smtp-relay.brevo.com'\nSMTP_PORT=587\nSMTP_SKIP_LOGIN=False\nSMTP_USER='user@email.com'\nSMTP_PASS='xsmtpsib-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxx'\nSMTP_SKIP_TLS=False\nSMTP_FORCE_SSL=False\nSMTP_REPORT_TO='some_target_email@gmail.com'\nSMTP_REPORT_FROM='NetAlertX <user@email.com>'\n
"},{"location":"SMTP/#gmx","title":"GMX","text":"
  1. Go to your GMX account https://account.gmx.com
  2. Under Security Options enable 2FA (Two-factor authentication)
  3. Under Security Options generate an Application-specific password
  4. Home -> Email Settings -> POP3 & IMAP -> Enable access to this account via POP3 and IMAP
  5. In NetAlertX specify these settings:
    SMTP_RUN='on_notification'\n    SMTP_SERVER='mail.gmx.com'\n    SMTP_PORT=465\n    SMTP_USER='gmx_email@gmx.com'\n    SMTP_PASS='<your Application-specific password>'\n    SMTP_SKIP_TLS=True\n    SMTP_FORCE_SSL=True\n    SMTP_SKIP_LOGIN=False\n    SMTP_REPORT_FROM='gmx_email@gmx.com' # this has to be the same email as in SMTP_USER\n    SMTP_REPORT_TO='some_target_email@gmail.com'\n
"},{"location":"SUBNETS/","title":"Subnets Configuration","text":"

You need to specify the network interface and the network mask. You can also configure multiple subnets and specify VLANs (see VLAN exceptions below).

ARPSCAN can scan multiple networks if the network allows it. To scan networks directly, the subnets must be accessible from the network where NetAlertX is running. This means NetAlertX needs to have access to the interface attached to that subnet.

Warning

If you don't see all expected devices run the following command in the NetAlertX container (replace the interface and ip mask): sudo arp-scan --interface=eth0 192.168.1.0/24

If this command returns no results, the network is not accessible due to your network or firewall restrictions (Wi-Fi Extenders, VPNs and inaccessible networks). If direct scans are not possible, check the remote networks documentation for workarounds.

"},{"location":"SUBNETS/#example-values","title":"Example Values","text":"

Note

Please use the UI to configure settings as it ensures the config file is in the correct format. Edit app.conf directly only when really necessary.

  • Examples for one and two subnets:
  • One subnet: SCAN_SUBNETS = ['192.168.1.0/24 --interface=eth0']
  • Two subnets: SCAN_SUBNETS = ['192.168.1.0/24 --interface=eth0','192.168.1.0/24 --interface=eth1 --vlan=107']

Tip

When adding more subnets, you may need to increase both the scan interval (ARPSCAN_RUN_SCHD) and the timeout (ARPSCAN_RUN_TIMEOUT)\u2014as well as similar settings for related plugins.

If the timeout is too short, you may see timeout errors in the log. To prevent the application from hanging due to unresponsive plugins, scans are canceled when they exceed the timeout limit.

To fix this:
  • Reduce the subnet size (e.g., change /16 to /24).
  • Increase the timeout (e.g., set ARPSCAN_RUN_TIMEOUT to 300 for a 5-minute timeout).
  • Extend the scan interval (e.g., set ARPSCAN_RUN_SCHD to */10 * * * * to scan every 10 minutes).

For more troubleshooting tips, see Debugging Plugins.

"},{"location":"SUBNETS/#explanation","title":"Explanation","text":""},{"location":"SUBNETS/#network-mask","title":"Network Mask","text":"

Example value: 192.168.1.0/24

The arp-scan time itself depends on the number of IP addresses to check.

The number of IPs to check depends on the network mask you set in the SCAN_SUBNETS setting. For example, a /24 mask results in 256 IPs to check, whereas a /16 mask checks around 65,536 IPs. Each IP takes a couple of seconds, so an incorrect configuration could make arp-scan take hours instead of seconds.
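The address count follows directly from the prefix length: 2^(32 - prefix length). A quick shell sanity check:

echo $(( 2 ** (32 - 24) ))   # 256 addresses in a /24\necho $(( 2 ** (32 - 16) ))   # 65536 addresses in a /16\n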

Specify the network filter, which significantly speeds up the scan process. For example, the filter 192.168.1.0/24 covers IP ranges from 192.168.1.0 to 192.168.1.255.

"},{"location":"SUBNETS/#network-interface-adapter","title":"Network Interface (Adapter)","text":"

Example value: --interface=eth0

The adapter will probably be eth0 or eth1. (Check System Info > Network Hardware, or run iwconfig in the container to find your interface name(s)).

Tip

As an alternative to iwconfig, run ip -o link show | awk -F': ' '!/lo|vir|docker/ {print $2}' in your container to find your interface name(s) (e.g.: eth0, eth1). Example: Synology-NAS:/# ip -o link show | awk -F': ' '!/lo|vir|docker/ {print $2}' returns sit0@NONE eth1 eth0

"},{"location":"SUBNETS/#vlans","title":"VLANs","text":"

Example value: --vlan=107

  • Append --vlan=107 to the SCAN_SUBNETS field (e.g.: 192.168.1.0/24 --interface=vmbr0 --vlan=107) for multiple VLANs.
"},{"location":"SUBNETS/#vlans-on-a-hyper-v-setup","title":"VLANs on a Hyper-V Setup","text":"

Community-sourced content by mscreations from this discussion.

Tested Setup: Bare Metal \u2192 Hyper-V on Win Server 2019 \u2192 Ubuntu 22.04 VM \u2192 Docker \u2192 NetAlertX.

Approach 1 (may cause issues): Configure multiple network adapters in Hyper-V with distinct VLANs connected to each one using Hyper-V's network setup. However, this action can potentially lead to the Docker host's inability to handle network traffic correctly. This might interfere with other applications such as Authentik.

Approach 2 (working example):

Network connections to switches are configured as trunk and allow all VLANs access to the server.

By default, Hyper-V only allows untagged packets through to the VM interface, blocking VLAN-tagged packets. To fix this, follow these steps:

  1. Run the following command in PowerShell on the Hyper-V machine:

Set-VMNetworkAdapterVlan -VMName <Docker VM Name> -Trunk -NativeVlanId 0 -AllowedVlanIdList \"<comma separated list of vlans>\"

  2. Within the VM, set up sub-interfaces for each VLAN to enable scanning. On Ubuntu 22.04, Netplan can be used. In /etc/netplan/00-installer-config.yaml, add VLAN definitions:

network:\n  ethernets:\n    eth0:\n      dhcp4: yes\n  vlans:\n    eth0.2:\n      id: 2\n      link: eth0\n      addresses: [ \"192.168.2.2/24\" ]\n      routes:\n        - to: 192.168.2.0/24\n          via: 192.168.1.1\n

  3. Run sudo netplan apply to activate the interfaces for scanning in NetAlertX.

In this case, use 192.168.2.0/24 --interface=eth0.2 in NetAlertX.

"},{"location":"SUBNETS/#vlan-support-exceptions","title":"VLAN Support & Exceptions","text":"

Please note the limited accessibility of macvlans when configured on the same computer. This is general networking behavior, but feel free to clarify via a PR/issue.

  • NetAlertX does not detect the macvlan container when it is running on the same computer.
  • NetAlertX recognizes the macvlan container when it is running on a different computer.
"},{"location":"SYNOLOGY_GUIDE/","title":"Installation on a Synology NAS","text":"

There are different ways to install NetAlertX on a Synology, including SSH-ing into the machine and using the command line. For this guide, we will use the Project option in Container manager.

"},{"location":"SYNOLOGY_GUIDE/#create-the-folder-structure","title":"Create the folder structure","text":"

The folders you are creating below will contain the configuration and the database. Back them up regularly.

  1. Create a parent folder named netalertx
  2. Create a db sub-folder
  3. Create a config sub-folder
  4. Note down the folders' locations
  5. Open Container manager -> Project and click Create.
  6. Fill in the details:
  • Project name: netalertx
  • Path: /app_storage/netalertx (will differ from yours)
  • Paste in the following template:
version: \"3\"\nservices:\n  netalertx:\n    container_name: netalertx\n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\"\n    image: \"ghcr.io/jokob-sk/netalertx:latest\"\n    network_mode: \"host\"\n    restart: unless-stopped\n    cap_drop:       # Drop all capabilities for enhanced security\n      - ALL\n    cap_add:        # Re-add necessary capabilities\n      - NET_RAW\n      - NET_ADMIN\n      - NET_BIND_SERVICE\n    volumes:\n      - /app_storage/netalertx:/data\n      # to sync with system time\n      - /etc/localtime:/etc/localtime:ro\n    tmpfs:\n      # All writable runtime state resides under /tmp; comment out to persist logs between restarts\n      - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n    environment:\n      - PORT=20211\n

  7. Replace the paths to your volume and comment out unnecessary line(s):
  • This is only an example, your paths will differ.

 volumes:\n      - /volume1/app_storage/netalertx:/data\n

  8. (optional) Change the port number from 20211 to an unused port if this port is already used.
  9. Build the project.
  10. Navigate to <Synology URL>:20211 (or your custom port).
  11. Read the Subnets and Plugins docs to complete your setup.

Tip

If you are facing permissions issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files that are stored in the /local_data_dir/db and /local_data_dir/config folders (replace local_data_dir with the location where your /db and /config folders are located).

sudo chown -R 20211:20211 /local_data_dir

sudo chmod -R a+rwx /local_data_dir

"},{"location":"UPDATES/","title":"Docker Update Strategies to upgrade NetAlertX","text":"

Warning

For versions prior to v25.6.7 upgrade to version v25.5.24 first (docker pull ghcr.io/jokob-sk/netalertx:25.5.24) as later versions don't support a full upgrade. Alternatively, devices and settings can be migrated manually, e.g. via CSV import. See the Migration guide for details.

This guide outlines approaches for updating Docker containers, usually when upgrading to a newer version of NetAlertX. Each method offers different benefits depending on the situation. Here are the methods:

  • Manual: Direct commands to stop, remove, and rebuild containers.
  • Dockcheck: Semi-automated with more control, suited for bulk updates.
  • Watchtower: Fully automated, runs continuously to check and update containers.
  • Portainer: Manual with UI.

You can choose any approach that fits your workflow.

In the examples I assume that the container name is netalertx and the image name is netalertx as well.

Note

See also Backup strategies to be on the safe side.

"},{"location":"UPDATES/#1-manual-updates","title":"1. Manual Updates","text":"

Use this method when you need precise control over a single container or when dealing with a broken container that needs immediate attention.

Example commands:

To manually update the netalertx container, stop it, delete it, remove the old image, and start a fresh one with docker-compose.

# Stop the container\nsudo docker container stop netalertx\n\n# Remove the container\nsudo docker container rm netalertx\n\n# Remove the old image\nsudo docker image rm netalertx\n\n# Pull and start a new container\nsudo docker-compose up -d\n
"},{"location":"UPDATES/#alternative-force-pull-with-docker-compose","title":"Alternative: Force Pull with Docker Compose","text":"

You can also use --pull always to ensure Docker pulls the latest image before starting the container:

sudo docker-compose up --pull always -d\n
"},{"location":"UPDATES/#2-dockcheck-for-bulk-container-updates","title":"2. Dockcheck for Bulk Container Updates","text":"

Always check the Dockcheck docs if encountering issues with the guide below.

Dockcheck is a useful tool if you have multiple containers to update and some flexibility for handling potential issues that might arise during mass updates. Dockcheck allows you to inspect each container and decide when to update.

"},{"location":"UPDATES/#example-workflow-with-dockcheck","title":"Example Workflow with Dockcheck","text":"

You might use Dockcheck to:

  • Inspect container versions.
  • Pull the latest images in bulk.
  • Apply updates selectively.

Dockcheck can help streamline bulk updates, especially if you\u2019re managing multiple containers.

Below is a script I use to run an update of the Dockcheck script and start a check for new containers:

cd /path/to/Docker &&\nrm dockcheck.sh &&\nwget https://raw.githubusercontent.com/mag37/dockcheck/main/dockcheck.sh &&\nsudo chmod +x dockcheck.sh &&\nsudo ./dockcheck.sh\n
"},{"location":"UPDATES/#3-automated-updates-with-watchtower","title":"3. Automated Updates with Watchtower","text":"

Always check the watchtower docs if encountering issues with the guide below.

Watchtower monitors your Docker containers and automatically updates them when new images are available. This is ideal for ongoing updates without manual intervention.

"},{"location":"UPDATES/#setting-up-watchtower","title":"Setting Up Watchtower","text":""},{"location":"UPDATES/#1-pull-the-watchtower-image","title":"1. Pull the Watchtower Image:","text":"
docker pull containrrr/watchtower\n
"},{"location":"UPDATES/#2-run-watchtower-to-update-all-images","title":"2. Run Watchtower to update all images:","text":"
docker run -d \\\n  --name watchtower \\\n  -v /var/run/docker.sock:/var/run/docker.sock \\\n  containrrr/watchtower \\\n  --interval 300 # Check for updates every 5 minutes\n
"},{"location":"UPDATES/#3-run-watchtower-to-update-only-netalertx","title":"3. Run Watchtower to update only NetAlertX:","text":"

You can specify which containers to monitor by listing them. For example, to monitor netalertx only:

docker run -d \\\n  --name watchtower \\\n  -v /var/run/docker.sock:/var/run/docker.sock \\\n  containrrr/watchtower netalertx\n\n
"},{"location":"UPDATES/#4-portainer-controlled-image","title":"4. Portainer controlled image","text":"

This assumes you're using Portainer to manage Docker (or Docker Swarm) and want to pull the latest version of an image and redeploy the container.

Note

  • Portainer does not auto-update containers. For automation, use Watchtower or similar tools.
  • Make sure you have the persistent volumes mounted or backups ready before recreating.
"},{"location":"UPDATES/#41-steps-to-update-an-image-in-portainer-standalone-docker","title":"4.1 Steps to Update an Image in Portainer (Standalone Docker)","text":"
  1. Login to Portainer.
  2. Go to \"Containers\" in the left sidebar.
  3. Find the container you want to update, click its name.
  4. Click \"Recreate\" (top right).
  5. Tick: Pull latest image (this ensures Portainer fetches the newest version from Docker Hub or your registry).
  6. Click \"Recreate\" again.
  7. Wait for the container to be stopped, removed, and recreated with the updated image.
"},{"location":"UPDATES/#42-for-docker-swarm-services","title":"4.2 For Docker Swarm Services","text":"

If you're using Docker Swarm (under \"Stacks\" or \"Services\"):

  1. Go to \"Stacks\".
  2. Select the stack managing the container.
  3. Click \"Editor\" (or \"Update the Stack\").
  4. Add a version tag or use :latest if your image tag is latest (not recommended for production).
  5. Click \"Update the Stack\". \u26a0 Portainer will not pull the new image unless the tag changes OR the stack is forced to recreate.
  6. If the image tag hasn't changed, go to \"Services\", find the service, and click \"Force Update\".
"},{"location":"UPDATES/#summary","title":"Summary","text":"Method Type Pros Cons Manual CLI Full control, no dependencies Tedious for many containers Dockcheck CLI Script Great for batch updates Needs setup, semi-automated Watchtower Daemonized Fully automated updates Less control, version drift Portainer UI Easy via web interface No auto-updates

These approaches allow you to maintain flexibility in how you update Docker containers, depending on the urgency and scale of the update.

"},{"location":"VERSIONS/","title":"Versions","text":""},{"location":"VERSIONS/#am-i-running-the-latest-released-version","title":"Am I running the latest released version?","text":"

Since version 23.01.14 NetAlertX uses a simple timestamp-based version check to verify if a new version is available. You can check the current and past releases here, or have a look at what I'm currently working on.

If you are not on the latest version, the app will notify you that a newer released version is available in the following ways:

"},{"location":"VERSIONS/#via-email-on-a-notification-event","title":"\ud83d\udce7 Via email on a notification event","text":"

If any notification occurs and an email is sent, the email will contain a note that a new version is available. See the sample email below:

"},{"location":"VERSIONS/#in-the-ui","title":"\ud83c\udd95 In the UI","text":"

In the UI, via a notification icon and a custom message in the Maintenance section.

For comparison, this is how the UI looks if you are on the latest stable image:

"},{"location":"VERSIONS/#implementation-details","title":"Implementation details","text":"

During the build, a /app/front/buildtimestamp.txt file is created. The app then periodically checks GitHub's REST-based JSON endpoint for a release with a newer timestamp (check the def isNewVersion: method for details).
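As a rough illustration of the idea (the app's actual endpoint and comparison logic live in the isNewVersion method), you can query the latest release metadata manually, assuming the jokob-sk/NetAlertX GitHub repository:

curl -s https://api.github.com/repos/jokob-sk/NetAlertX/releases/latest | grep published_at\n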

"},{"location":"WEBHOOK_N8N/","title":"Webhooks (n8n)","text":""},{"location":"WEBHOOK_N8N/#create-a-simple-n8n-workflow","title":"Create a simple n8n workflow","text":"

Note

You need to enable the WEBHOOK plugin first in order to follow this guide. See the Plugins guide for details.

n8n can be used for more advanced conditional notification use cases. For example, you may only want to be notified if two out of a specified list of devices are down. Or you can use other plugins to process the notifications further. Below is a simple example of sending an email on a webhook.

"},{"location":"WEBHOOK_N8N/#specify-your-email-template","title":"Specify your email template","text":"

See sample JSON if you want to see the JSON paths used in the email template below

Events count: {{ $json[\"body\"][\"attachments\"][0][\"text\"][\"events\"].length }}\nNew devices count: {{ $json[\"body\"][\"attachments\"][0][\"text\"][\"new_devices\"].length }}\n
"},{"location":"WEBHOOK_N8N/#get-your-webhook-in-n8n","title":"Get your webhook in n8n","text":""},{"location":"WEBHOOK_N8N/#configure-netalertx-to-point-to-the-above-url","title":"Configure NetAlertX to point to the above URL","text":""},{"location":"WEBHOOK_SECRET/","title":"Webhook Secrets","text":"

Note

You need to enable the WEBHOOK plugin first in order to follow this guide. See the Plugins guide for details.

"},{"location":"WEBHOOK_SECRET/#how-does-the-signing-work","title":"How does the signing work?","text":"

NetAlertX will use the configured secret to create a hash signature of the request body. This SHA256-HMAC signature will appear in the X-Webhook-Signature header of each request to the webhook target URL. You can use the value of this header to validate the request was sent by NetAlertX.

"},{"location":"WEBHOOK_SECRET/#activating-webhook-signatures","title":"Activating webhook signatures","text":"

All you need to do in order to add a signature to the request headers is to set the WEBHOOK_SECRET config value to a non-empty string.

"},{"location":"WEBHOOK_SECRET/#validating-webhook-deliveries","title":"Validating webhook deliveries","text":"

There are a few things to keep in mind when validating the webhook delivery:

  • NetAlertX uses an HMAC hex digest to compute the hash
  • The signature in the X-Webhook-Signature header always starts with sha256=
  • The hash signature is generated using the configured WEBHOOK_SECRET and the request body.
  • Never use a plain == operator. Instead, consider using a method like secure_compare or crypto.timingSafeEqual, which performs a \"constant time\" string comparison to help mitigate certain timing attacks against regular equality operators, or regular loops in JIT-optimized languages.
"},{"location":"WEBHOOK_SECRET/#testing-the-webhook-payload-validation","title":"Testing the webhook payload validation","text":"

You can use the following secret and payload to verify that your implementation is working correctly.

secret: 'this is my secret'

payload: '{\"test\":\"this is a test body\"}'

If your implementation is correct, the signature you generated should match the following:

signature: bed21fcc34f98e94fd71c7edb75e51a544b4a3b38b069ebaaeb19bf4be8147e9

X-Webhook-Signature: sha256=bed21fcc34f98e94fd71c7edb75e51a544b4a3b38b069ebaaeb19bf4be8147e9
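Using the test values above, you can reproduce the signature from the command line; openssl computes the same HMAC-SHA256 hex digest over the raw request body:

printf '%s' '{\"test\":\"this is a test body\"}' | openssl dgst -sha256 -hmac 'this is my secret'\n

The hex digest printed should match the signature above; prepend sha256= when comparing against the X-Webhook-Signature header.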

"},{"location":"WEBHOOK_SECRET/#more-information","title":"More information","text":"

If you want to learn more about webhook security, take a look at GitHub's webhook documentation.

You can find examples for validating a webhook delivery here.

"},{"location":"WEB_UI_PORT_DEBUG/","title":"Debugging inaccessible UI","text":"

The application uses the following default ports:

  • Web UI: 20211
  • GraphQL API: 20212

The Web UI is served by an nginx server, while the API backend runs on a Flask (Python) server.

"},{"location":"WEB_UI_PORT_DEBUG/#changing-ports","title":"Changing Ports","text":"
  • To change the Web UI port, update the PORT environment variable in the docker-compose.yml file.
  • To change the GraphQL API port, use the GRAPHQL_PORT setting, either directly or via Docker: APP_CONF_OVERRIDE={\"GRAPHQL_PORT\":\"20212\"}
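For example, in docker-compose.yml both ports can be changed via environment variables (the port numbers below are placeholders):

environment:\n  - PORT=20311                                   # Web UI port\n  - APP_CONF_OVERRIDE={\"GRAPHQL_PORT\":\"20312\"}   # GraphQL API port\n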

For more information, check the Docker installation guide.

"},{"location":"WEB_UI_PORT_DEBUG/#possible-issues-and-troubleshooting","title":"Possible issues and troubleshooting","text":"

Follow all of the steps below to rule out potential causes and to troubleshoot these problems faster.

"},{"location":"WEB_UI_PORT_DEBUG/#1-port-conflicts","title":"1. Port conflicts","text":"

When opening an issue or debugging:

  1. Include a screenshot of what you see when accessing HTTP://<your_server>:20211 (or your custom port)
  2. Follow steps 1, 2, 3, 4 on this page
  3. Execute the following in the container to see the processes and their ports and submit a screenshot of the result:
  • sudo apk add lsof
  • sudo lsof -i
  4. Try running the nginx command in the container:
  • If you get nginx: [emerg] bind() to 0.0.0.0:20211 failed (98: Address in use), try using a different port number

"},{"location":"WEB_UI_PORT_DEBUG/#2-javascript-issues","title":"2. JavaScript issues","text":"

Check for browser console (F12 browser dev console) errors + check different browsers.

"},{"location":"WEB_UI_PORT_DEBUG/#3-clear-the-app-cache-and-cached-javascript-files","title":"3. Clear the app cache and cached JavaScript files","text":"

Refresh the browser cache (usually Shift + refresh), try a private window, or different browsers. Please also refresh the app cache by clicking the \ud83d\udd03 (reload) button in the header of the application.

"},{"location":"WEB_UI_PORT_DEBUG/#4-disable-proxies","title":"4. Disable proxies","text":"

If you have any reverse proxy or similar, try disabling it.

"},{"location":"WEB_UI_PORT_DEBUG/#5-disable-your-firewall","title":"5. Disable your firewall","text":"

If you are using a firewall, try temporarily disabling it.

"},{"location":"WEB_UI_PORT_DEBUG/#6-post-your-docker-start-details","title":"6. Post your docker start details","text":"

If you haven't, post your docker compose/run command.

"},{"location":"WEB_UI_PORT_DEBUG/#7-check-for-errors-in-your-phpnginx-error-logs","title":"7. Check for errors in your PHP/NGINX error logs","text":"

In the container execute and investigate:

cat /var/log/nginx/error.log

cat /tmp/log/app.php_errors.log

"},{"location":"WEB_UI_PORT_DEBUG/#8-make-sure-permissions-are-correct","title":"8. Make sure permissions are correct","text":"

Tip

You can try to start the container without mapping the /data/config and /data/db dirs and if the UI shows up then the issue is most likely related to your file system permissions or file ownership.

Please read the Permissions troubleshooting guide and provide a screenshot of the permissions and ownership in the /data/db and /data/config directories.

"},{"location":"WORKFLOWS/","title":"Workflows Overview","text":"

The workflows module allows you to automate repetitive tasks, making network management more efficient. Whether you need to assign newly discovered devices to a specific Network Node, auto-group devices from a given vendor, unarchive a device if detected online, or automatically delete devices, this module provides the flexibility to tailor the automations to your needs.

Below are a few examples that demonstrate how this module can be used to simplify network management tasks.

"},{"location":"WORKFLOWS/#updating-workflows","title":"Updating Workflows","text":"

Note

In order to apply a workflow change, you must first Save the changes and then reload the application by clicking Restart server.

"},{"location":"WORKFLOWS/#workflow-components","title":"Workflow components","text":""},{"location":"WORKFLOWS/#triggers","title":"Triggers","text":"

Triggers define the event that activates a workflow. They monitor changes to objects within the system, such as updates to devices or the insertion of new entries. When the specified event occurs, the workflow is executed.

Tip

Workflows not running? Check the Workflows debugging guide for how to troubleshoot triggers and conditions.

"},{"location":"WORKFLOWS/#example-trigger","title":"Example Trigger:","text":"
  • Object Type: Devices
  • Event Type: update

This trigger will activate when a Device object is updated.

"},{"location":"WORKFLOWS/#conditions","title":"Conditions","text":"

Conditions determine whether a workflow should proceed based on certain criteria. These criteria can be set for specific fields, such as whether a device is from a certain vendor, or whether it is new or archived. You can combine conditions using logical operators (AND, OR).

Tip

To better understand how to use specific Device fields, please read through the Database overview guide.

"},{"location":"WORKFLOWS/#example-condition","title":"Example Condition:","text":"
  • Logic: AND
  • Field: devVendor
  • Operator: contains (case-insensitive)
  • Value: Google

This condition checks if the device's vendor is Google. The workflow will only proceed if the condition is true.

"},{"location":"WORKFLOWS/#actions","title":"Actions","text":"

Actions define the tasks that the workflow will perform once the conditions are met. Actions can include updating fields or deleting devices.

You can include multiple actions that should execute once the conditions are met.

"},{"location":"WORKFLOWS/#example-action","title":"Example Action:","text":"
  • Action Type: update_field
  • Field: devIsNew
  • Value: 0

This action updates the devIsNew field to 0, marking the device as no longer new.

"},{"location":"WORKFLOWS/#examples","title":"Examples","text":"

You can find a couple of configuration examples in Workflow Examples.

Tip

Share your workflows in Discord or GitHub Discussions.

"},{"location":"WORKFLOWS_DEBUGGING/","title":"Workflows debugging and troubleshooting","text":"

Tip

Before troubleshooting, please ensure you have the right Debugging and LOG_LEVEL set.

Workflows are triggered by various events. These events are captured and listed in the Integrations -> App Events section of the application.

"},{"location":"WORKFLOWS_DEBUGGING/#troubleshooting-triggers","title":"Troubleshooting triggers","text":"

Note

Workflow events are processed once every 5 seconds. However, if a scan or other background tasks are running, this can cause a delay of up to a few minutes.

If an event doesn't trigger a workflow as expected, check the App Events section for the event. You can filter these by the ID of the device (devMAC or devGUID).

Once you find the Event Guid and Object GUID, use them to find relevant debug entries.

Navigate to Maintenance -> Logs, where you can filter the logs based on the Event or Object GUID.

Below you can find some example app.log entries that will help you understand why a Workflow was or was not triggered.

16:27:03 [WF] Checking if '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggers the workflow 'Sample Device Update Workflow'\n16:27:03 [WF] self.triggered 'False' for event '[[155], ['13f0ce26-1835-4c48-ae03-cdaf38f328fe'], [0], ['2025-04-02 05:26:56'], ['Devices'], ['050b6980-7af6-4409-950d-08e9786b7b33'], ['DEVICES'], ['00:11:32:ef:a5:6c'], ['192.168.1.82'], ['050b6980-7af6-4409-950d-08e9786b7b33'], [None], [0], [0], ['devPresentLastScan'], ['online'], ['update'], [None], [None], [None], [None]] and trigger {\"object_type\": \"Devices\", \"event_type\": \"insert\"}'\n16:27:03 [WF] Checking if '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggers the workflow 'Location Change'\n16:27:03 [WF] self.triggered 'True' for event '[[155], ['13f0ce26-1835-4c48-ae03-cdaf38f328fe'], [0], ['2025-04-02 05:26:56'], ['Devices'], ['050b6980-7af6-4409-950d-08e9786b7b33'], ['DEVICES'], ['00:11:32:ef:a5:6c'], ['192.168.1.82'], ['050b6980-7af6-4409-950d-08e9786b7b33'], [None], [0], [0], ['devPresentLastScan'], ['online'], ['update'], [None], [None], [None], [None]] and trigger {\"object_type\": \"Devices\", \"event_type\": \"update\"}'\n16:27:03 [WF] Event with GUID '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggered the workflow 'Location Change'\n

Note how one trigger executed, but the other didn't based on different \"event_type\" values. One is \"event_type\": \"insert\", the other \"event_type\": \"update\".

Given the Event is an update event (note ...['online'], ['update'], [None]... in the event structure), the \"event_type\": \"insert\" trigger didn't execute.

"},{"location":"WORKFLOW_EXAMPLES/","title":"Workflow examples","text":"

Workflows in NetAlertX automate actions based on real-time events and conditions. Below are practical examples that demonstrate how to build automation using triggers, conditions, and actions.

"},{"location":"WORKFLOW_EXAMPLES/#example-1-un-archive-devices-if-detected-online","title":"Example 1: Un-archive devices if detected online","text":"

This workflow automatically unarchives a device if it was previously archived but has now been detected as online.

"},{"location":"WORKFLOW_EXAMPLES/#use-case","title":"\ud83d\udccb Use Case","text":"

Sometimes devices are manually archived (e.g., no longer expected on the network), but they reappear unexpectedly. This workflow reverses the archive status when such devices are detected during a scan.

"},{"location":"WORKFLOW_EXAMPLES/#workflow-configuration","title":"\u2699\ufe0f Workflow Configuration","text":"
{\n  \"name\": \"Un-archive devices if detected online\",\n  \"trigger\": {\n    \"object_type\": \"Devices\",\n    \"event_type\": \"update\"\n  },\n  \"conditions\": [\n    {\n      \"logic\": \"AND\",\n      \"conditions\": [\n        {\n          \"field\": \"devIsArchived\",\n          \"operator\": \"equals\",\n          \"value\": \"1\"\n        },\n        {\n          \"field\": \"devPresentLastScan\",\n          \"operator\": \"equals\",\n          \"value\": \"1\"\n        }\n      ]\n    }\n  ],\n  \"actions\": [\n    {\n      \"type\": \"update_field\",\n      \"field\": \"devIsArchived\",\n      \"value\": \"0\"\n    }\n  ],\n  \"enabled\": \"Yes\"\n}\n
"},{"location":"WORKFLOW_EXAMPLES/#explanation","title":"\ud83d\udd0d Explanation","text":"
- Trigger: Listens for updates to device records.\n- Conditions:\n    - `devIsArchived` is `1` (archived).\n    - `devPresentLastScan` is `1` (device was detected in the latest scan).\n- Action: Updates the device to set `devIsArchived` to `0` (unarchived).\n
"},{"location":"WORKFLOW_EXAMPLES/#result","title":"\u2705 Result","text":"

Whenever a previously archived device shows up during a network scan, it will be automatically unarchived \u2014 allowing it to reappear in your device lists and dashboards.


"},{"location":"WORKFLOW_EXAMPLES/#example-2-assign-device-to-network-node-based-on-ip","title":"Example 2: Assign Device to Network Node Based on IP","text":"

This workflow assigns newly added devices with IP addresses in the 192.168.1.* range to a specific network node with MAC address 6c:6d:6d:6c:6c:6c.

"},{"location":"WORKFLOW_EXAMPLES/#use-case_1","title":"\ud83d\udccb Use Case","text":"

When new devices join your network, assigning them to the correct network node is important for accurate topology and grouping. This workflow ensures devices in a specific subnet are automatically linked to the intended node.

"},{"location":"WORKFLOW_EXAMPLES/#workflow-configuration_1","title":"\u2699\ufe0f Workflow Configuration","text":"
{\n  \"name\": \"Assign Device to Network Node Based on IP\",\n  \"trigger\": {\n    \"object_type\": \"Devices\",\n    \"event_type\": \"insert\"\n  },\n  \"conditions\": [\n    {\n      \"logic\": \"AND\",\n      \"conditions\": [\n        {\n          \"field\": \"devLastIP\",\n          \"operator\": \"contains\",\n          \"value\": \"192.168.1.\"\n        }\n      ]\n    }\n  ],\n  \"actions\": [\n    {\n      \"type\": \"update_field\",\n      \"field\": \"devNetworkNode\",\n      \"value\": \"6c:6d:6d:6c:6c:6c\"\n    }\n  ],\n  \"enabled\": \"Yes\"\n}\n
"},{"location":"WORKFLOW_EXAMPLES/#explanation_1","title":"\ud83d\udd0d Explanation","text":"
  • Trigger: Activates when a new device is added.
  • Condition:

  • devLastIP contains 192.168.1. (matches subnet).

  • Action:

  • Sets devNetworkNode to the specified MAC address.

"},{"location":"WORKFLOW_EXAMPLES/#result_1","title":"\u2705 Result","text":"

New devices with IPs in the 192.168.1.* subnet are automatically assigned to the correct network node, streamlining device organization and reducing manual work.

"},{"location":"WORKFLOW_EXAMPLES/#example-3-mark-device-as-not-new-and-delete-if-from-google-vendor","title":"Example 3: Mark Device as Not New and Delete If from Google Vendor","text":"

This workflow automatically marks newly detected Google devices as not new and deletes them immediately.

"},{"location":"WORKFLOW_EXAMPLES/#use-case_2","title":"\ud83d\udccb Use Case","text":"

You may want to automatically clear out newly detected Google devices (such as Chromecast or Google Home) if they\u2019re not needed in your device database. This workflow handles that clean-up automatically.

"},{"location":"WORKFLOW_EXAMPLES/#workflow-configuration_2","title":"\u2699\ufe0f Workflow Configuration","text":"
{\n  \"name\": \"Mark Device as Not New and Delete If from Google Vendor\",\n  \"trigger\": {\n    \"object_type\": \"Devices\",\n    \"event_type\": \"update\"\n  },\n  \"conditions\": [\n    {\n      \"logic\": \"AND\",\n      \"conditions\": [\n        {\n          \"field\": \"devVendor\",\n          \"operator\": \"contains\",\n          \"value\": \"Google\"\n        },\n        {\n          \"field\": \"devIsNew\",\n          \"operator\": \"equals\",\n          \"value\": \"1\"\n        }\n      ]\n    }\n  ],\n  \"actions\": [\n    {\n      \"type\": \"update_field\",\n      \"field\": \"devIsNew\",\n      \"value\": \"0\"\n    },\n    {\n      \"type\": \"delete_device\"\n    }\n  ],\n  \"enabled\": \"Yes\"\n}\n
"},{"location":"WORKFLOW_EXAMPLES/#explanation_2","title":"\ud83d\udd0d Explanation","text":"
  • Trigger: Runs on device updates.
  • Conditions: devVendor contains Google, and the device is marked as new (devIsNew is 1).
  • Actions: Set devIsNew to 0 (mark as not new), then delete the device.
"},{"location":"WORKFLOW_EXAMPLES/#result_2","title":"\u2705 Result","text":"

Any newly detected Google devices are cleaned up instantly \u2014 first marked as not new, then deleted \u2014 helping you avoid clutter in your device records.

"},{"location":"docker-troubleshooting/excessive-capabilities/","title":"Excessive Capabilities","text":""},{"location":"docker-troubleshooting/excessive-capabilities/#issue-description","title":"Issue Description","text":"

Excessive Linux capabilities are detected beyond the necessary NET_ADMIN, NET_BIND_SERVICE, and NET_RAW. This may indicate overly permissive container configuration.

"},{"location":"docker-troubleshooting/excessive-capabilities/#security-ramifications","title":"Security Ramifications","text":"

While the detected capabilities might not directly harm operation, running with more privileges than necessary increases the attack surface. If the container is compromised, additional capabilities could allow broader system access or privilege escalation.

"},{"location":"docker-troubleshooting/excessive-capabilities/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when your Docker configuration grants more capabilities than required for network monitoring. The application only needs specific network-related capabilities for proper function.

"},{"location":"docker-troubleshooting/excessive-capabilities/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Limit capabilities to only those required:

  • In docker-compose.yml, specify only the needed capabilities (a fuller service sketch follows this list):

cap_add:\n  - NET_RAW\n  - NET_ADMIN\n  - NET_BIND_SERVICE\n

  • Remove any unnecessary --cap-add or --privileged flags from docker run commands
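
A minimal docker-compose.yml service sketch that grants only the three required capabilities; the service name, image reference, and the cap_drop hardening are illustrative assumptions, not taken from this page:

services:\n  netalertx:\n    image: jokob-sk/netalertx      # adjust to your actual image reference\n    cap_drop:\n      - ALL                        # optional hardening: start from no capabilities\n    cap_add:\n      - NET_RAW\n      - NET_ADMIN\n      - NET_BIND_SERVICE\n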
"},{"location":"docker-troubleshooting/excessive-capabilities/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/file-permissions/","title":"File Permission Issues","text":""},{"location":"docker-troubleshooting/file-permissions/#issue-description","title":"Issue Description","text":"

NetAlertX cannot read from or write to critical configuration and database files. This prevents the application from saving data, logs, or configuration changes.

"},{"location":"docker-troubleshooting/file-permissions/#security-ramifications","title":"Security Ramifications","text":"

Incorrect file permissions can expose sensitive configuration data or database contents to unauthorized access. Network monitoring tools handle sensitive information about devices on your network, and improper permissions could lead to information disclosure.

"},{"location":"docker-troubleshooting/file-permissions/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when the mounted volumes for configuration and database files don't have proper ownership or permissions set for the netalertx user (UID 20211). The container expects these files to be accessible by the service account, not root or other users.

"},{"location":"docker-troubleshooting/file-permissions/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Fix permissions on the host system for the mounted directories:

  • Ensure the config and database directories are owned by the netalertx user: chown -R 20211:20211 /path/to/config /path/to/db
  • Set appropriate permissions: chmod 755 for directories and chmod 644 for files (see the sketch after this list)
  • Alternatively, restart the container with root privileges temporarily to allow automatic permission fixing, then switch back to the default user
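
A shell sketch of the fix described above, assuming /path/to/config and /path/to/db are your actual host-mounted directories and that you have sufficient privileges to change ownership:

sudo chown -R 20211:20211 /path/to/config /path/to/db\n# 755 for directories, 644 for files, per the guidance above\nsudo find /path/to/config /path/to/db -type d -exec chmod 755 {} +\nsudo find /path/to/config /path/to/db -type f -exec chmod 644 {} +\n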
"},{"location":"docker-troubleshooting/file-permissions/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/incorrect-user/","title":"Incorrect Container User","text":""},{"location":"docker-troubleshooting/incorrect-user/#issue-description","title":"Issue Description","text":"

NetAlertX is running as UID:GID other than the expected 20211:20211. This bypasses hardened permissions, file ownership, and runtime isolation safeguards.

"},{"location":"docker-troubleshooting/incorrect-user/#security-ramifications","title":"Security Ramifications","text":"

The application is designed with security hardening that depends on running under a dedicated, non-privileged service account. Using a different user account can silently break future upgrades and removes crucial isolation between the container and the host system.

"},{"location":"docker-troubleshooting/incorrect-user/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when you override the container's default user with custom user: directives in docker-compose.yml or --user flags in docker run commands. The container expects to run as the netalertx user for proper security isolation.

"},{"location":"docker-troubleshooting/incorrect-user/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Restore the container to the default user:

  • Remove any user: overrides from docker-compose.yml
  • Avoid --user flags in docker run commands
  • Allow the container to run with its default UID:GID 20211:20211
  • Recreate the container so volume ownership is reset automatically
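
For orientation, the sketch below shows the kind of compose override to remove; the service name, image reference, and example UID are illustrative:

services:\n  netalertx:\n    image: jokob-sk/netalertx   # adjust to your actual image reference\n    # user: 1000:1000           # remove overrides like this; the image defaults to 20211:20211\n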
"},{"location":"docker-troubleshooting/incorrect-user/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/missing-capabilities/","title":"Missing Network Capabilities","text":""},{"location":"docker-troubleshooting/missing-capabilities/#issue-description","title":"Issue Description","text":"

Raw network capabilities (NET_RAW, NET_ADMIN, NET_BIND_SERVICE) are missing. Tools that rely on these capabilities (e.g., nmap -sS, arp-scan, nbtscan) will not function.

"},{"location":"docker-troubleshooting/missing-capabilities/#security-ramifications","title":"Security Ramifications","text":"

Network scanning and monitoring require the low-level network access that these capabilities provide. Without them, the application cannot perform essential functions like ARP scanning, port scanning, or passive network discovery, severely limiting its effectiveness.

"},{"location":"docker-troubleshooting/missing-capabilities/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when the container doesn't have the necessary Linux capabilities granted. Docker containers run with limited capabilities by default, and network monitoring tools need elevated network privileges.

"},{"location":"docker-troubleshooting/missing-capabilities/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Add the required capabilities to your container:

  • In docker-compose.yml:

cap_add:\n  - NET_RAW\n  - NET_ADMIN\n  - NET_BIND_SERVICE\n

  • For docker run: --cap-add=NET_RAW --cap-add=NET_ADMIN --cap-add=NET_BIND_SERVICE
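
A docker run sketch combining these flags; the container name and image reference are assumptions, and volume mounts are omitted for brevity:

docker run -d --name netalertx --network=host --cap-add=NET_RAW --cap-add=NET_ADMIN --cap-add=NET_BIND_SERVICE jokob-sk/netalertx\n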
"},{"location":"docker-troubleshooting/missing-capabilities/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/mount-configuration-issues/","title":"Mount Configuration Issues","text":""},{"location":"docker-troubleshooting/mount-configuration-issues/#issue-description","title":"Issue Description","text":"

NetAlertX has detected configuration issues with your Docker volume mounts. These may include write permission problems, data loss risks, or performance concerns marked with \u274c in the table.

"},{"location":"docker-troubleshooting/mount-configuration-issues/#security-ramifications","title":"Security Ramifications","text":"

Improper mount configurations can lead to data loss, performance degradation, or security vulnerabilities. For persistent data (database and configuration), using non-persistent storage like tmpfs can result in complete data loss on container restart. For temporary data, using persistent storage may unnecessarily expose sensitive logs or cache data.

"},{"location":"docker-troubleshooting/mount-configuration-issues/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when your Docker Compose or run configuration doesn't properly map host directories to container paths, or when the mounted volumes have incorrect permissions. The application requires specific paths to be writable for operation, and some paths should use persistent storage while others should be temporary.

"},{"location":"docker-troubleshooting/mount-configuration-issues/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Review and correct your volume mounts in docker-compose.yml:

  • Ensure ${NETALERTX_DB} and ${NETALERTX_CONFIG} use persistent host directories
  • Ensure ${NETALERTX_API}, ${NETALERTX_LOG} have appropriate permissions
  • Avoid mounting critical data (database, configuration) to non-persistent filesystems like tmpfs
  • Use bind mounts with proper ownership (netalertx user: 20211:20211)

Example volume configuration:

volumes:\n  - ./data/db:/data/db\n  - ./data/config:/data/config\n  - ./data/log:/tmp/log\n
"},{"location":"docker-troubleshooting/mount-configuration-issues/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/network-mode/","title":"Network Mode Configuration","text":""},{"location":"docker-troubleshooting/network-mode/#issue-description","title":"Issue Description","text":"

NetAlertX is not running with --network=host. Bridge networking blocks passive discovery (ARP, NBNS, mDNS) and reduces active scanning accuracy.

"},{"location":"docker-troubleshooting/network-mode/#security-ramifications","title":"Security Ramifications","text":"

Host networking is required for comprehensive network monitoring. Bridge mode isolates the container from raw network access needed for ARP scanning, passive discovery protocols, and accurate device detection. Without host networking, the application cannot fully monitor your network.

"},{"location":"docker-troubleshooting/network-mode/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when your Docker configuration uses bridge networking instead of host networking. Network monitoring requires direct access to the host's network interfaces to perform passive discovery and active scanning.

"},{"location":"docker-troubleshooting/network-mode/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Enable host networking mode:

  • In docker-compose.yml, add: network_mode: host
  • For docker run, use: --network=host
  • Ensure the container has required capabilities: --cap-add=NET_RAW --cap-add=NET_ADMIN --cap-add=NET_BIND_SERVICE
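
A minimal compose sketch combining host networking with the required capabilities; the service name and image reference are illustrative:

services:\n  netalertx:\n    image: jokob-sk/netalertx   # adjust to your actual image reference\n    network_mode: host\n    cap_add:\n      - NET_RAW\n      - NET_ADMIN\n      - NET_BIND_SERVICE\n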
"},{"location":"docker-troubleshooting/network-mode/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/nginx-configuration-mount/","title":"Nginx Configuration Mount Issues","text":""},{"location":"docker-troubleshooting/nginx-configuration-mount/#issue-description","title":"Issue Description","text":"

You've configured a custom port for NetAlertX, but the required nginx configuration mount is missing or not writable. Without this mount, the container cannot apply your port changes and will fall back to the default port 20211.

"},{"location":"docker-troubleshooting/nginx-configuration-mount/#security-ramifications","title":"Security Ramifications","text":"

Running in read-only mode (as recommended) prevents the container from modifying its own nginx configuration. Without a writable mount, custom port configurations cannot be applied, potentially exposing the service on unintended ports or requiring fallback to defaults.

"},{"location":"docker-troubleshooting/nginx-configuration-mount/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when you set a custom PORT environment variable (other than 20211) but haven't provided a writable mount for nginx configuration. The container needs to write custom nginx config files when running in read-only mode.

"},{"location":"docker-troubleshooting/nginx-configuration-mount/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

If you want to use a custom port, create a bind mount for the nginx configuration:

  • Create a directory on your host: mkdir -p /path/to/nginx-config
  • Add to your docker-compose.yml:

volumes:\n  - /path/to/nginx-config:/tmp/nginx/active-config\nenvironment:\n  - PORT=your_custom_port\n

  • Ensure it's owned by the netalertx user: chown -R 20211:20211 /path/to/nginx-config
  • Set permissions: chmod -R 700 /path/to/nginx-config

If you don't need a custom port, simply omit the PORT environment variable and the container will use 20211 by default.
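
If you do use a custom port, a quick check like the following can confirm the mount is writable and picked up; the container name netalertx is an assumption:

# The host directory must be writable by UID/GID 20211\nls -ld /path/to/nginx-config\n\n# Inside the container, the generated config should appear under the mounted path\ndocker exec netalertx ls -l /tmp/nginx/active-config\n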

"},{"location":"docker-troubleshooting/nginx-configuration-mount/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/port-conflicts/","title":"Port Conflicts","text":""},{"location":"docker-troubleshooting/port-conflicts/#issue-description","title":"Issue Description","text":"

The configured application port (default 20211) or GraphQL API port (default 20212) is already in use by another service. This commonly occurs when you already have another NetAlertX instance running.

"},{"location":"docker-troubleshooting/port-conflicts/#security-ramifications","title":"Security Ramifications","text":"

Port conflicts prevent the application from starting properly, leaving network monitoring services unavailable. Running multiple instances on the same ports can also create configuration confusion and potential security issues if services are inadvertently exposed.

"},{"location":"docker-troubleshooting/port-conflicts/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This error typically occurs when:

  • You already have NetAlertX running - Another Docker container or devcontainer instance is using the default ports 20211 and 20212
  • Port conflicts with other services - Other applications on your system are using these ports
  • Configuration error - Both PORT and GRAPHQL_PORT environment variables are set to the same value
"},{"location":"docker-troubleshooting/port-conflicts/#how-to-correct-the-issue","title":"How to Correct the Issue","text":""},{"location":"docker-troubleshooting/port-conflicts/#check-for-existing-netalertx-instances","title":"Check for Existing NetAlertX Instances","text":"

First, check if you already have NetAlertX running:

# Check for running NetAlertX containers\ndocker ps | grep netalertx\n\n# Check for devcontainer processes\nps aux | grep netalertx\n\n# Check what services are using the ports\nnetstat -tlnp | grep :20211\nnetstat -tlnp | grep :20212\n
"},{"location":"docker-troubleshooting/port-conflicts/#stop-conflicting-instances","title":"Stop Conflicting Instances","text":"

If you find another NetAlertX instance:

# Stop specific container\ndocker stop <container_name>\n\n# Stop all NetAlertX containers\ndocker stop $(docker ps -q --filter ancestor=jokob-sk/netalertx)\n\n# Stop devcontainer services\n# Use VS Code command palette: \"Dev Containers: Rebuild Container\"\n
"},{"location":"docker-troubleshooting/port-conflicts/#configure-different-ports","title":"Configure Different Ports","text":"

If you need multiple instances, configure unique ports:

environment:\n  - PORT=20211          # Main application port\n  - GRAPHQL_PORT=20212  # GraphQL API port\n

For a second instance, use different ports:

environment:\n  - PORT=20213          # Different main port\n  - GRAPHQL_PORT=20214  # Different API port\n
"},{"location":"docker-troubleshooting/port-conflicts/#alternative-use-different-container-names","title":"Alternative: Use Different Container Names","text":"

When running multiple instances, use unique container names:

services:\n  netalertx-primary:\n    # ... existing config\n  netalertx-secondary:\n    # ... config with different ports\n
"},{"location":"docker-troubleshooting/port-conflicts/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/read-only-filesystem/","title":"Read-Only Filesystem Mode","text":""},{"location":"docker-troubleshooting/read-only-filesystem/#issue-description","title":"Issue Description","text":"

The container is running with a read-write root filesystem instead of in read-only mode. This reduces the security hardening of the appliance.

"},{"location":"docker-troubleshooting/read-only-filesystem/#security-ramifications","title":"Security Ramifications","text":"

Read-only root filesystem is a security best practice that prevents malicious modifications to the container's filesystem. Running read-write allows potential attackers to modify system files or persist malware within the container.

"},{"location":"docker-troubleshooting/read-only-filesystem/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when the Docker configuration doesn't mount the root filesystem as read-only. The application is designed as a security appliance that should prevent filesystem modifications.

"},{"location":"docker-troubleshooting/read-only-filesystem/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Enable read-only mode:

  • In docker-compose.yml, add: read_only: true
  • For docker run, use: --read-only
  • Ensure necessary directories are mounted as writable volumes (tmp, logs, etc.)
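
A hedged compose sketch of read-only mode; the tmpfs entry and exact writable paths are assumptions based on the mount guidance elsewhere in these docs, so adjust them to your deployment:

services:\n  netalertx:\n    image: jokob-sk/netalertx        # adjust to your actual image reference\n    read_only: true\n    volumes:\n      - ./data/config:/data/config   # persistent, writable\n      - ./data/db:/data/db           # persistent, writable\n    tmpfs:\n      - /tmp                         # scratch space for logs and cache (assumption)\n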
"},{"location":"docker-troubleshooting/read-only-filesystem/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/running-as-root/","title":"Running as Root User","text":""},{"location":"docker-troubleshooting/running-as-root/#issue-description","title":"Issue Description","text":"

NetAlertX has detected that the container is running with root privileges (UID 0). This configuration bypasses all built-in security hardening measures designed to protect your system.

"},{"location":"docker-troubleshooting/running-as-root/#security-ramifications","title":"Security Ramifications","text":"

Running security-critical applications like network monitoring tools as root grants unrestricted access to your host system. A successful compromise here could jeopardize your entire infrastructure, including other containers, host services, and potentially your network.

"},{"location":"docker-troubleshooting/running-as-root/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This typically occurs when you've explicitly overridden the container's default user in your Docker configuration, such as using user: root or --user 0:0 in docker-compose.yml or docker run commands. The application is designed to run under a dedicated, non-privileged service account for security.

"},{"location":"docker-troubleshooting/running-as-root/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Switch to the dedicated 'netalertx' user by removing any custom user directives:

  • Remove user: entries from your docker-compose.yml
  • Avoid --user flags in docker run commands
  • Ensure the container runs with the default UID 20211:20211

After making these changes, restart the container. The application will automatically adjust ownership of required directories.
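
Once the container is back up, the effective user can be verified from the host; the container name netalertx is an assumption:

# Should report uid=20211 gid=20211 after the override is removed\ndocker exec netalertx id\n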

"},{"location":"docker-troubleshooting/running-as-root/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 00000000..5234f9ba --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,363 @@ + + + + https://jokob-sk.github.io/NetAlertX/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API_DBQUERY/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API_DEVICE/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API_DEVICES/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API_EVENTS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API_GRAPHQL/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API_LOGS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API_MESSAGING_IN_APP/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API_METRICS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API_NETTOOLS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API_OLD/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API_ONLINEHISTORY/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API_SESSIONS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API_SETTINGS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API_SYNC/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/API_TESTS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/AUTHELIA/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/BACKUPS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/BUILDS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/COMMON_ISSUES/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/COMMUNITY_GUIDES/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/CUSTOM_PROPERTIES/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DATABASE/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DEBUG_API_SERVER/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DEBUG_INVALID_JSON/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DEBUG_PHP/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DEBUG_PLUGINS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DEBUG_TIPS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DEVICES_BULK_EDITING/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DEVICE_DISPLAY_SETTINGS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DEVICE_HEURISTICS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DEVICE_MANAGEMENT/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DEV_DEVCONTAINER/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DEV_ENV_SETUP/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DEV_PORTS_HOST_MODE/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DOCKER_COMPOSE/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DOCKER_INSTALLATION/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DOCKER_MAINTENANCE/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DOCKER_PORTAINER/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/DOCKER_SWARM/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/FILE_PERMISSIONS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/FIX_OFFLINE_DETECTION/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/FRONTEND_DEVELOPMENT/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/HELPER_SCRIPTS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/HOME_ASSISTANT/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/HW_INSTALL/ + 2025-12-03 + + + 
https://jokob-sk.github.io/NetAlertX/ICONS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/INITIAL_SETUP/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/INSTALLATION/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/LOGGING/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/MIGRATION/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/NAME_RESOLUTION/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/NETWORK_TREE/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/NOTIFICATIONS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/PERFORMANCE/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/PIHOLE_GUIDE/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/PLUGINS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/PLUGINS_DEV/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/PLUGINS_DEV_CONFIG/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/RANDOM_MAC/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/REMOTE_NETWORKS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/REVERSE_DNS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/REVERSE_PROXY/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/SECURITY/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/SECURITY_FEATURES/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/SESSION_INFO/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/SETTINGS_SYSTEM/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/SMTP/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/SUBNETS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/SYNOLOGY_GUIDE/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/UPDATES/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/VERSIONS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/WEBHOOK_N8N/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/WEBHOOK_SECRET/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/WEB_UI_PORT_DEBUG/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/WORKFLOWS/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/WORKFLOWS_DEBUGGING/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/WORKFLOW_EXAMPLES/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/docker-troubleshooting/excessive-capabilities/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/docker-troubleshooting/file-permissions/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/docker-troubleshooting/incorrect-user/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/docker-troubleshooting/missing-capabilities/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/docker-troubleshooting/mount-configuration-issues/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/docker-troubleshooting/network-mode/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/docker-troubleshooting/nginx-configuration-mount/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/docker-troubleshooting/port-conflicts/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/docker-troubleshooting/read-only-filesystem/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/docker-troubleshooting/running-as-root/ + 2025-12-03 + + + https://jokob-sk.github.io/NetAlertX/docker-troubleshooting/troubleshooting/ + 2025-12-03 + + \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz new file mode 100644 index 00000000..1561811a Binary files /dev/null and b/sitemap.xml.gz differ
+ + +