{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"],"fields":{"title":{"boost":1000.0},"text":{"boost":1.0},"tags":{"boost":1000000.0}}},"docs":[{"location":"","title":"NetAlertX Documentation","text":"

Welcome to the official NetAlertX documentation! NetAlertX is a powerful tool designed to simplify the management and monitoring of your network. Below, you will find guides and resources to help you set up, configure, and troubleshoot your NetAlertX instance.

"},{"location":"#in-app-help","title":"In-App Help","text":"

NetAlertX provides contextual help within the application:

"},{"location":"#installation-guides","title":"Installation Guides","text":"

The app can be installed in several ways, with Docker-based deployments being the best supported. This includes the Home Assistant and Unraid installation approaches. See details below.

"},{"location":"#docker-fully-supported","title":"Docker (Fully Supported)","text":"

NetAlertX is fully supported in Docker environments, allowing for easy setup and configuration. Follow the official guide to get started:

This guide will take you through the process of setting up NetAlertX using Docker Compose or standalone Docker commands.

"},{"location":"#home-assistant-fully-supported","title":"Home Assistant (Fully Supported)","text":"

You can also install NetAlertX as a Home Assistant add-on via the alexbelgium/hassio-addons repository. This is only possible if you run a supervised instance of Home Assistant. If not, you can still run NetAlertX in a separate Docker container and follow this guide to configure MQTT.

"},{"location":"#unraid-partial-support","title":"Unraid (Partial Support)","text":"

The Unraid template was created by the community, so it's only partially supported. Alternatively, here is another version of the Unraid template.

"},{"location":"#bare-metal-installation-experimental","title":"Bare-Metal Installation (Experimental)","text":"

If you prefer to run NetAlertX on your own hardware, you can try the experimental bare-metal installation. Please note that this method is still under development, and we are looking for maintainers to help improve it.

"},{"location":"#help-and-support","title":"Help and Support","text":"

If you need help or run into issues, here are some resources to guide you:

Before opening an issue, please:

Need more help? Join the community discussions or submit a support request:

"},{"location":"#contributing","title":"Contributing","text":"

NetAlertX is open-source and welcomes contributions from the community! If you'd like to help improve the software, please follow the guidelines below:

For more information on contributing, check out our Dev Guide.

"},{"location":"#stay-updated","title":"Stay Updated","text":"

To keep up with the latest changes and updates to NetAlertX, please refer to the following resources:

Make sure to follow the project on GitHub to get notifications for new releases and important updates.

"},{"location":"#additional-info","title":"Additional info","text":"

If you have any suggestions or improvements, please don\u2019t hesitate to contribute!

NetAlertX is actively maintained. You can find the source code, report bugs, or request new features on our GitHub page.

"},{"location":"API/","title":"NetAlertX API Documentation","text":"

This API provides programmatic access to devices, events, sessions, metrics, network tools, and sync in NetAlertX. It is implemented as a REST and GraphQL server. All requests require authentication via an API token (the API_TOKEN setting) unless explicitly noted. For example, to authorize a GraphQL request, include an Authorization: Bearer API_TOKEN header as in the example below:

curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n

The API server runs on 0.0.0.0:<GRAPHQL_PORT> with CORS enabled for all main endpoints.

"},{"location":"API/#authentication","title":"Authentication","text":"

All endpoints require an API token provided in the HTTP headers:

Authorization: Bearer <API_TOKEN>\n

If the token is missing or invalid, the server will return:

{ \"error\": \"Forbidden\" }\n
"},{"location":"API/#base-url","title":"Base URL","text":"
http://<server>:<GRAPHQL_PORT>/\n
"},{"location":"API/#endpoints","title":"Endpoints","text":"

Tip

When retrieving devices or settings, try the GraphQL API endpoint first, as it is read-optimized.

See Testing for example requests and usage.

"},{"location":"API/#notes","title":"Notes","text":""},{"location":"API_DBQUERY/","title":"Database Query API","text":"

The Database Query API provides direct, low-level access to the NetAlertX database. It allows read, write, update, and delete operations against tables, using base64-encoded SQL or structured parameters.

Warning

This API is primarily used internally to generate and render the application UI. These endpoints are low-level and powerful, and should be used with caution. Wherever possible, prefer the standard API endpoints. Invalid or unsafe queries can corrupt data. If you need data in a specific format that is not already provided, please open an issue or pull request with a clear, broadly useful use case. This helps ensure new endpoints benefit the wider community rather than relying on raw database queries.

"},{"location":"API_DBQUERY/#authentication","title":"Authentication","text":"

All /dbquery/* endpoints require an API token in the HTTP headers:

Authorization: Bearer <API_TOKEN>\n

If the token is missing or invalid:

{ \"error\": \"Forbidden\" }\n
"},{"location":"API_DBQUERY/#endpoints","title":"Endpoints","text":""},{"location":"API_DBQUERY/#1-post-dbqueryread","title":"1. POST /dbquery/read","text":"

Execute a read-only SQL query (e.g., SELECT).

"},{"location":"API_DBQUERY/#request-body","title":"Request Body","text":"
{\n  \"rawSql\": \"U0VMRUNUICogRlJPTSBERVZJQ0VT\"   // base64 encoded SQL\n}\n

Decoded SQL:

SELECT * FROM DEVICES\n
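
To build the rawSql value in a client, the SQL statement is base64-encoded first. A minimal Python sketch (the helper names are illustrative, not part of NetAlertX):

```python
import base64

def encode_sql(sql: str) -> str:
    # The dbquery endpoints expect the SQL statement base64-encoded in 'rawSql'.
    return base64.b64encode(sql.encode('utf-8')).decode('ascii')

def decode_sql(raw: str) -> str:
    # Inverse helper, useful for verifying a payload before sending it.
    return base64.b64decode(raw).decode('utf-8')

encoded = encode_sql('SELECT * FROM DEVICES')
print(encoded)               # U0VMRUNUICogRlJPTSBERVZJQ0VT
print(decode_sql(encoded))   # SELECT * FROM DEVICES
```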
"},{"location":"API_DBQUERY/#response","title":"Response","text":"
{\n  \"success\": true,\n  \"results\": [\n    { \"devMac\": \"AA:BB:CC:DD:EE:FF\", \"devName\": \"Phone\" }\n  ]\n}\n
"},{"location":"API_DBQUERY/#curl-example","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/dbquery/read\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"rawSql\": \"U0VMRUNUICogRlJPTSBERVZJQ0VT\"\n  }'\n
"},{"location":"API_DBQUERY/#2-post-dbqueryupdate-safer-than-dbquerywrite","title":"2. POST /dbquery/update (safer than /dbquery/write)","text":"

Update rows in a table by columnName + id. /dbquery/update is parameterized to reduce the risk of SQL injection, while /dbquery/write executes raw SQL directly.

"},{"location":"API_DBQUERY/#request-body_1","title":"Request Body","text":"
{\n  \"columnName\": \"devMac\",\n  \"id\": [\"AA:BB:CC:DD:EE:FF\"],\n  \"dbtable\": \"Devices\",\n  \"columns\": [\"devName\", \"devOwner\"],\n  \"values\": [\"Laptop\", \"Alice\"]\n}\n
"},{"location":"API_DBQUERY/#response_1","title":"Response","text":"
{ \"success\": true, \"updated_count\": 1 }\n
"},{"location":"API_DBQUERY/#curl-example_1","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/dbquery/update\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"columnName\": \"devMac\",\n    \"id\": [\"AA:BB:CC:DD:EE:FF\"],\n    \"dbtable\": \"Devices\",\n    \"columns\": [\"devName\", \"devOwner\"],\n    \"values\": [\"Laptop\", \"Alice\"]\n  }'\n
"},{"location":"API_DBQUERY/#3-post-dbquerywrite","title":"3. POST /dbquery/write","text":"

Execute a write query (INSERT, UPDATE, DELETE).

"},{"location":"API_DBQUERY/#request-body_2","title":"Request Body","text":"
{\n  \"rawSql\": \"SU5TRVJUIElOVE8gRGV2aWNlcyAoZGV2TWFjLCBkZXYgTmFtZSwgZGV2Rmlyc3RDb25uZWN0aW9uLCBkZXZMYXN0Q29ubmVjdGlvbiwgZGV2TGFzdElQKSBWQUxVRVMgKCc2QTpCQjo0Qzo1RDo2RTonLCAnVGVzdERldmljZScsICcyMDI1LTA4LTMwIDEyOjAwOjAwJywgJzIwMjUtMDgtMzAgMTI6MDA6MDAnLCAnMTAuMC4wLjEwJyk=\"\n}\n

Decoded SQL:

INSERT INTO Devices (devMac, devName, devFirstConnection, devLastConnection, devLastIP)\nVALUES ('6A:BB:4C:5D:6E', 'TestDevice', '2025-08-30 12:00:00', '2025-08-30 12:00:00', '10.0.0.10');\n
"},{"location":"API_DBQUERY/#response_2","title":"Response","text":"
{ \"success\": true, \"affected_rows\": 1 }\n
"},{"location":"API_DBQUERY/#curl-example_2","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/dbquery/write\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"rawSql\": \"SU5TRVJUIElOVE8gRGV2aWNlcyAoZGV2TWFjLCBkZXYgTmFtZSwgZGV2Rmlyc3RDb25uZWN0aW9uLCBkZXZMYXN0Q29ubmVjdGlvbiwgZGV2TGFzdElQKSBWQUxVRVMgKCc2QTpCQjo0Qzo1RDo2RTonLCAnVGVzdERldmljZScsICcyMDI1LTA4LTMwIDEyOjAwOjAwJywgJzIwMjUtMDgtMzAgMTI6MDA6MDAnLCAnMTAuMC4wLjEwJyk=\"\n  }'\n
"},{"location":"API_DBQUERY/#4-post-dbquerydelete","title":"4. POST /dbquery/delete","text":"

Delete rows in a table by columnName + id.

"},{"location":"API_DBQUERY/#request-body_3","title":"Request Body","text":"
{\n  \"columnName\": \"devMac\",\n  \"id\": [\"AA:BB:CC:DD:EE:FF\"],\n  \"dbtable\": \"Devices\"\n}\n
"},{"location":"API_DBQUERY/#response_3","title":"Response","text":"
{ \"success\": true, \"deleted_count\": 1 }\n
"},{"location":"API_DBQUERY/#curl-example_3","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/dbquery/delete\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"columnName\": \"devMac\",\n    \"id\": [\"AA:BB:CC:DD:EE:FF\"],\n    \"dbtable\": \"Devices\"\n  }'\n
"},{"location":"API_DEVICE/","title":"Device API Endpoints","text":"

Manage a single device by its MAC address. Operations include retrieval, updates, deletion, resetting properties, and copying data between devices. All endpoints require authorization via Bearer token.

"},{"location":"API_DEVICE/#1-retrieve-device-details","title":"1. Retrieve Device Details","text":"

Special case: mac=new returns a template for a new device with default values.

Response (success):

{\n  \"devMac\": \"AA:BB:CC:DD:EE:FF\",\n  \"devName\": \"Net - Huawei\",\n  \"devOwner\": \"Admin\",\n  \"devType\": \"Router\",\n  \"devVendor\": \"Huawei\",\n  \"devStatus\": \"On-line\",\n  \"devSessions\": 12,\n  \"devEvents\": 5,\n  \"devDownAlerts\": 1,\n  \"devPresenceHours\": 32,\n  \"devChildrenDynamic\": [...],\n  \"devChildrenNicsDynamic\": [...],\n  ...\n}\n

Error Responses:

"},{"location":"API_DEVICE/#2-update-device-fields","title":"2. Update Device Fields","text":"

Request Body:

{\n  \"devName\": \"New Device\",\n  \"devOwner\": \"Admin\",\n  \"createNew\": true\n}\n

Behavior:

Response:

{\n  \"success\": true\n}\n

Error Responses:

"},{"location":"API_DEVICE/#3-delete-a-device","title":"3. Delete a Device","text":"

Response:

{\n  \"success\": true\n}\n

Error Responses:

"},{"location":"API_DEVICE/#4-delete-all-events-for-a-device","title":"4. Delete All Events for a Device","text":"

Response:

{\n  \"success\": true\n}\n
"},{"location":"API_DEVICE/#5-reset-device-properties","title":"5. Reset Device Properties","text":"

Request Body: Optional JSON for additional parameters.

Response:

{\n  \"success\": true\n}\n
"},{"location":"API_DEVICE/#6-copy-device-data","title":"6. Copy Device Data","text":"

Request Body:

{\n  \"macFrom\": \"AA:BB:CC:DD:EE:FF\",\n  \"macTo\": \"11:22:33:44:55:66\"\n}\n

Response:

{\n  \"success\": true,\n  \"message\": \"Device copied from AA:BB:CC:DD:EE:FF to 11:22:33:44:55:66\"\n}\n

Error Responses:

"},{"location":"API_DEVICE/#7-update-a-single-column","title":"7. Update a Single Column","text":"

Request Body:

{\n  \"columnName\": \"devName\",\n  \"columnValue\": \"Updated Device Name\"\n}\n

Response (success):

{\n  \"success\": true\n}\n

Error Responses:

"},{"location":"API_DEVICE/#example-curl-requests","title":"Example curl Requests","text":"

Get Device Details:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Update Device Fields:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devName\": \"New Device Name\"}'\n

Delete Device:

curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF/delete\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Copy Device Data:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/device/copy\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"macFrom\":\"AA:BB:CC:DD:EE:FF\",\"macTo\":\"11:22:33:44:55:66\"}'\n

Update Single Column:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF/update-column\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"columnName\":\"devName\",\"columnValue\":\"Updated Device\"}'\n
"},{"location":"API_DEVICES/","title":"Devices Collection API Endpoints","text":"

The Devices Collection API provides operations to retrieve, manage, import/export, and filter devices in bulk. All endpoints require authorization via Bearer token.

"},{"location":"API_DEVICES/#endpoints","title":"Endpoints","text":""},{"location":"API_DEVICES/#1-get-all-devices","title":"1. Get All Devices","text":"

Response (success):

{\n  \"success\": true,\n  \"devices\": [\n    {\n      \"devName\": \"Net - Huawei\",\n      \"devMAC\": \"AA:BB:CC:DD:EE:FF\",\n      \"devIP\": \"192.168.1.1\",\n      \"devType\": \"Router\",\n      \"devFavorite\": 0,\n      \"devStatus\": \"online\"\n    },\n    ...\n  ]\n}\n

Error Responses:

"},{"location":"API_DEVICES/#2-delete-devices-by-mac","title":"2. Delete Devices by MAC","text":"

Request Body:

{\n  \"macs\": [\"AA:BB:CC:DD:EE:FF\", \"11:22:33:*\"]\n}\n
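
The macs list accepts wildcard patterns such as 11:22:33:*. Conceptually this behaves like shell-style globbing, as in the Python sketch below (the device list is hypothetical sample data, and the exact server-side matching semantics are an assumption):

```python
from fnmatch import fnmatch

# Hypothetical inventory; '11:22:33:*' matches any MAC with that prefix.
macs = ['AA:BB:CC:DD:EE:FF', '11:22:33:44:55:66', '11:22:33:AA:BB:CC', '99:88:77:66:55:44']
patterns = ['AA:BB:CC:DD:EE:FF', '11:22:33:*']

# Keep every MAC that matches at least one requested pattern.
matched = [m for m in macs if any(fnmatch(m, p) for p in patterns)]
print(matched)  # ['AA:BB:CC:DD:EE:FF', '11:22:33:44:55:66', '11:22:33:AA:BB:CC']
```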

Behavior:

Response:

{\n  \"success\": true,\n  \"deleted_count\": 5\n}\n

Error Responses:

"},{"location":"API_DEVICES/#3-delete-devices-with-empty-macs","title":"3. Delete Devices with Empty MACs","text":"

Response:

{\n  \"success\": true,\n  \"deleted\": 3\n}\n
"},{"location":"API_DEVICES/#4-delete-unknown-devices","title":"4. Delete Unknown Devices","text":"

Response:

{\n  \"success\": true,\n  \"deleted\": 2\n}\n
"},{"location":"API_DEVICES/#5-export-devices","title":"5. Export Devices","text":"

Query Parameter / URL Parameter:

CSV Response:

JSON Response:

{\n  \"data\": [\n    { \"devName\": \"Net - Huawei\", \"devMAC\": \"AA:BB:CC:DD:EE:FF\", ... },\n    ...\n  ],\n  \"columns\": [\"devName\", \"devMAC\", \"devIP\", \"devType\", \"devFavorite\", \"devStatus\"]\n}\n

Error Responses:

"},{"location":"API_DEVICES/#6-import-devices-from-csv","title":"6. Import Devices from CSV","text":"

Request Body (multipart file or JSON with content field):

{\n  \"content\": \"<base64-encoded CSV content>\"\n}\n
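
When using the JSON variant, the CSV file contents are base64-encoded into the content field. A minimal Python sketch for building the request body (the CSV columns shown are illustrative sample data):

```python
import base64
import json

# Hypothetical two-device CSV; columns follow the export format.
csv_text = 'devName,devMAC,devIP\nPhone,AA:BB:CC:DD:EE:FF,192.168.1.10\nLaptop,11:22:33:44:55:66,192.168.1.11\n'

# Base64-encode the CSV and wrap it in the JSON body the endpoint expects.
body = json.dumps({'content': base64.b64encode(csv_text.encode('utf-8')).decode('ascii')})
print(body)
```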

Response:

{\n  \"success\": true,\n  \"inserted\": 25,\n  \"skipped_lines\": [3, 7]\n}\n

Error Responses:

"},{"location":"API_DEVICES/#7-get-device-totals","title":"7. Get Device Totals","text":"

Response:

[ \n  120,    // Total devices\n  85,     // Connected\n  5,      // Favorites\n  10,     // New\n  8,      // Down\n  12      // Archived\n]\n

Order: [all, connected, favorites, new, down, archived]
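
Since the endpoint returns a plain positional array, labelling the positions on the client side makes the response easier to consume. A small sketch (the helper name is illustrative):

```python
# Positions follow the documented order: [all, connected, favorites, new, down, archived].
LABELS = ['all', 'connected', 'favorites', 'new', 'down', 'archived']

def label_totals(totals):
    # Pair each count with its documented label.
    return dict(zip(LABELS, totals))

print(label_totals([120, 85, 5, 10, 8, 12]))
```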

"},{"location":"API_DEVICES/#8-get-devices-by-status","title":"8. Get Devices by Status","text":"

Query Parameter:

Response (success):

[\n  { \"id\": \"AA:BB:CC:DD:EE:FF\", \"title\": \"Net - Huawei\", \"favorite\": 0 },\n  { \"id\": \"11:22:33:44:55:66\", \"title\": \"\u2605 USG Firewall\", \"favorite\": 1 }\n]\n

If devFavorite=1, the title is prepended with a star \u2605.
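
The star-prefix behavior described above can be sketched as follows (the function name is illustrative, not the server's implementation):

```python
STAR = '\u2605'  # the star character prepended to favorite titles

def format_title(name, favorite):
    # Favorites get a star prefix; other devices keep their plain name.
    return (STAR + ' ' + name) if favorite else name

print(format_title('USG Firewall', 1))   # star-prefixed
print(format_title('Net - Huawei', 0))   # unchanged
```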

"},{"location":"API_DEVICES/#example-curl-requests","title":"Example curl Requests","text":"

Get All Devices:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/devices\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Delete Devices by MAC:

curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/devices\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"macs\":[\"AA:BB:CC:DD:EE:FF\",\"11:22:33:*\"]}'\n

Export Devices CSV:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/devices/export?format=csv\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Import Devices from CSV:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/devices/import\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -F \"file=@devices.csv\"\n

Get Devices by Status:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/devices/by-status?status=online\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_EVENTS/","title":"Events API Endpoints","text":"

The Events API provides access to device event logs, allowing creation, retrieval, deletion, and summary of events over time.

"},{"location":"API_EVENTS/#endpoints","title":"Endpoints","text":""},{"location":"API_EVENTS/#1-create-event","title":"1. Create Event","text":"

Request Body (JSON):

{\n  \"ip\": \"192.168.1.10\",\n  \"event_type\": \"Device Down\",\n  \"additional_info\": \"Optional info about the event\",\n  \"pending_alert\": 1,\n  \"event_time\": \"2025-08-24T12:00:00Z\"\n}\n

Response (JSON):

{\n  \"success\": true,\n  \"message\": \"Event created for 00:11:22:33:44:55\"\n}\n
"},{"location":"API_EVENTS/#2-get-events","title":"2. Get Events","text":"
/events?mac=<mac>\n

Response:

{\n  \"success\": true,\n  \"events\": [\n    {\n      \"eve_MAC\": \"00:11:22:33:44:55\",\n      \"eve_IP\": \"192.168.1.10\",\n      \"eve_DateTime\": \"2025-08-24T12:00:00Z\",\n      \"eve_EventType\": \"Device Down\",\n      \"eve_AdditionalInfo\": \"\",\n      \"eve_PendingAlertEmail\": 1\n    }\n  ]\n}\n
"},{"location":"API_EVENTS/#3-delete-events","title":"3. Delete Events","text":"

Response:

{\n  \"success\": true,\n  \"message\": \"Deleted events older than <days> days\"\n}\n
"},{"location":"API_EVENTS/#4-event-totals-over-a-period","title":"4. Event Totals Over a Period","text":"

Query Parameters:

Parameter Description period Time period for totals, e.g., \"7 days\", \"1 month\", \"1 year\", \"100 years\"

Sample Response (JSON Array):

[120, 85, 5, 10, 3, 7]\n

Meaning of Values:

  1. Total events in the period
  2. Total sessions
  3. Missing sessions
  4. Voided events (eve_EventType LIKE 'VOIDED%')
  5. New device events (eve_EventType LIKE 'New Device')
  6. Device down events (eve_EventType LIKE 'Device Down')
"},{"location":"API_EVENTS/#notes","title":"Notes","text":"
{ \"error\": \"Forbidden\" }\n
"},{"location":"API_EVENTS/#example-curl-requests","title":"Example curl Requests","text":"

Create Event:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/events/create/00:11:22:33:44:55\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\n    \"ip\": \"192.168.1.10\",\n    \"event_type\": \"Device Down\",\n    \"additional_info\": \"Power outage\",\n    \"pending_alert\": 1\n  }'\n

Get Events for a Device:

curl \"http://<server_ip>:<GRAPHQL_PORT>/events?mac=00:11:22:33:44:55\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Delete Events Older Than 30 Days:

curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/events/30\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Get Event Totals for 7 Days:

curl \"http://<server_ip>:<GRAPHQL_PORT>/sessions/totals?period=7 days\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_GRAPHQL/","title":"GraphQL API Endpoint","text":"

GraphQL queries are read-optimized for speed. Data may be slightly out of date until the file system cache refreshes. The GraphQL endpoints allow you to access the following objects:

"},{"location":"API_GRAPHQL/#endpoints","title":"Endpoints","text":""},{"location":"API_GRAPHQL/#devices-query","title":"Devices Query","text":""},{"location":"API_GRAPHQL/#sample-query","title":"Sample Query","text":"
query GetDevices($options: PageQueryOptionsInput) {\n  devices(options: $options) {\n    devices {\n      rowid\n      devMac\n      devName\n      devOwner\n      devType\n      devVendor\n      devLastConnection\n      devStatus\n    }\n    count\n  }\n}\n
"},{"location":"API_GRAPHQL/#query-parameters","title":"Query Parameters","text":"Parameter Description page Page number of results to fetch. limit Number of results per page. sort Sorting options (field = field name, order = asc or desc). search Term to filter devices. status Filter devices by status: my_devices, connected, favorites, new, down, archived, offline. filters Additional filters (array of { filterColumn, filterValue })."},{"location":"API_GRAPHQL/#curl-example","title":"curl Example","text":"
curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n
"},{"location":"API_GRAPHQL/#sample-response","title":"Sample Response","text":"
{\n  \"data\": {\n    \"devices\": {\n      \"devices\": [\n        {\n          \"rowid\": 1,\n          \"devMac\": \"00:11:22:33:44:55\",\n          \"devName\": \"Device 1\",\n          \"devOwner\": \"Owner 1\",\n          \"devType\": \"Type 1\",\n          \"devVendor\": \"Vendor 1\",\n          \"devLastConnection\": \"2025-01-01T00:00:00Z\",\n          \"devStatus\": \"connected\"\n        }\n      ],\n      \"count\": 1\n    }\n  }\n}\n
"},{"location":"API_GRAPHQL/#settings-query","title":"Settings Query","text":"

The settings query provides access to NetAlertX configuration stored in the settings table.

"},{"location":"API_GRAPHQL/#sample-query_1","title":"Sample Query","text":"
query GetSettings {\n  settings {\n    settings {\n      setKey\n      setName\n      setDescription\n      setType\n      setOptions\n      setGroup\n      setValue\n      setEvents\n      setOverriddenByEnv\n    }\n    count\n  }\n}\n
"},{"location":"API_GRAPHQL/#schema-fields","title":"Schema Fields","text":"Field Type Description setKey String Unique key identifier for the setting. setName String Human-readable name. setDescription String Description or documentation of the setting. setType String Data type (string, int, bool, json, etc.). setOptions String Available options (for dropdown/select-type settings). setGroup String Group/category the setting belongs to. setValue String Current value of the setting. setEvents String Events or triggers related to this setting. setOverriddenByEnv Boolean Whether the setting is overridden by an environment variable at runtime."},{"location":"API_GRAPHQL/#curl-example_1","title":"curl Example","text":"
curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetSettings { settings { settings { setKey setName setDescription setType setOptions setGroup setValue setEvents setOverriddenByEnv } count } }\"\n  }'\n
"},{"location":"API_GRAPHQL/#sample-response_1","title":"Sample Response","text":"
{\n  \"data\": {\n    \"settings\": {\n      \"settings\": [\n        {\n          \"setKey\": \"UI_MY_DEVICES\",\n          \"setName\": \"My Devices Filter\",\n          \"setDescription\": \"Defines which statuses to include in the 'My Devices' view.\",\n          \"setType\": \"list\",\n          \"setOptions\": \"[\\\"online\\\",\\\"new\\\",\\\"down\\\",\\\"offline\\\",\\\"archived\\\"]\",\n          \"setGroup\": \"UI\",\n          \"setValue\": \"[\\\"online\\\",\\\"new\\\"]\",\n          \"setEvents\": null,\n          \"setOverriddenByEnv\": false\n        },\n        {\n          \"setKey\": \"NETWORK_DEVICE_TYPES\",\n          \"setName\": \"Network Device Types\",\n          \"setDescription\": \"Types of devices considered as network infrastructure.\",\n          \"setType\": \"list\",\n          \"setOptions\": \"[\\\"Router\\\",\\\"Switch\\\",\\\"AP\\\"]\",\n          \"setGroup\": \"Network\",\n          \"setValue\": \"[\\\"Router\\\",\\\"Switch\\\"]\",\n          \"setEvents\": null,\n          \"setOverriddenByEnv\": true\n        }\n      ],\n      \"count\": 2\n    }\n  }\n}\n
"},{"location":"API_GRAPHQL/#langstrings-query","title":"LangStrings Query","text":"

The LangStrings query provides access to localized strings. It supports filtering by langCode and langStringKey. If the requested string is missing or empty, you can optionally fall back to en_us.
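
The fallback behavior can be sketched client-side as follows (the dictionary is hypothetical sample data, and the helper name is illustrative):

```python
# Hypothetical localized strings keyed by (language code, string key).
strings = {
    ('de_de', 'settings_other_scanners'): '',
    ('en_us', 'settings_other_scanners'): 'Other, non-device scanner plugins that are currently enabled.',
}

def get_string(lang_code, key, fallback_to_en=True):
    # Prefer the requested language; fall back to en_us when missing or empty.
    text = strings.get((lang_code, key), '')
    if not text and fallback_to_en:
        text = strings.get(('en_us', key), '')
    return text

print(get_string('de_de', 'settings_other_scanners'))
```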

"},{"location":"API_GRAPHQL/#sample-query_2","title":"Sample Query","text":"
query GetLangStrings {\n  langStrings(langCode: \"de_de\", langStringKey: \"settings_other_scanners\") {\n    langStrings {\n      langCode\n      langStringKey\n      langStringText\n    }\n    count\n  }\n}\n
"},{"location":"API_GRAPHQL/#query-parameters_1","title":"Query Parameters","text":"Parameter Type Description langCode String Optional language code (e.g., en_us, de_de). If omitted, all languages are returned. langStringKey String Optional string key to retrieve a specific entry. fallback_to_en Boolean Optional (default true). If true, empty or missing strings fallback to en_us."},{"location":"API_GRAPHQL/#curl-example_2","title":"curl Example","text":"
curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetLangStrings { langStrings(langCode: \\\"de_de\\\", langStringKey: \\\"settings_other_scanners\\\") { langStrings { langCode langStringKey langStringText } count } }\"\n  }'\n
"},{"location":"API_GRAPHQL/#sample-response_2","title":"Sample Response","text":"
{\n  \"data\": {\n    \"langStrings\": {\n      \"count\": 1,\n      \"langStrings\": [\n        {\n          \"langCode\": \"de_de\",\n          \"langStringKey\": \"settings_other_scanners\",\n          \"langStringText\": \"Other, non-device scanner plugins that are currently enabled.\"  // falls back to en_us if empty\n        }\n      ]\n    }\n  }\n}\n
"},{"location":"API_GRAPHQL/#notes","title":"Notes","text":""},{"location":"API_LOGS/","title":"Logs API Endpoints","text":"

Purge application log files stored under /app/log and manage the execution queue. These endpoints are primarily used for maintenance tasks such as clearing accumulated logs or queuing system actions without restarting the container.

Only specific, pre-approved log files can be purged for security and stability reasons.

"},{"location":"API_LOGS/#delete-purge-a-log-file","title":"Delete (Purge) a Log File","text":"

Query Parameter:

Allowed Files:

app.log\napp_front.log\nIP_changes.log\nstdout.log\nstderr.log\napp.php_errors.log\nexecution_queue.log\ndb_is_locked.log\n
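
A client can mirror the server's allow-list before issuing a DELETE, as in this Python sketch (the helper name is illustrative):

```python
# The files that DELETE /logs?file=<name> will accept, per the list above.
ALLOWED_LOG_FILES = {
    'app.log', 'app_front.log', 'IP_changes.log', 'stdout.log',
    'stderr.log', 'app.php_errors.log', 'execution_queue.log', 'db_is_locked.log',
}

def can_purge(filename: str) -> bool:
    # Only exact matches against the allow-list may be purged.
    return filename in ALLOWED_LOG_FILES

print(can_purge('app.log'))          # True
print(can_purge('not_allowed.log'))  # False
```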

Authorization: Requires a valid API token in the Authorization header.

"},{"location":"API_LOGS/#curl-example-success","title":"curl Example (Success)","text":"
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=app.log' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"success\": true,\n  \"message\": \"[clean_log] File app.log purged successfully\"\n}\n
"},{"location":"API_LOGS/#curl-example-not-allowed","title":"curl Example (Not Allowed)","text":"
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=not_allowed.log' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"success\": false,\n  \"message\": \"[clean_log] File not_allowed.log is not allowed to be purged\"\n}\n
"},{"location":"API_LOGS/#curl-example-unauthorized","title":"curl Example (Unauthorized)","text":"
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=app.log' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"error\": \"Forbidden\"\n}\n
"},{"location":"API_LOGS/#add-an-action-to-the-execution-queue","title":"Add an Action to the Execution Queue","text":"

Request Body (JSON):

{\n  \"action\": \"update_api|devices\"\n}\n

Authorization: Requires a valid API token in the Authorization header.

"},{"location":"API_LOGS/#curl-example-success_1","title":"curl Example (Success)","text":"

The request below updates the API cache for Devices:

curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\"action\": \"update_api|devices\"}'\n

Response:

{\n  \"success\": true,\n  \"message\": \"[UserEventsQueueInstance] Action \\\"update_api|devices\\\" added to the execution queue.\"\n}\n
"},{"location":"API_LOGS/#curl-example-missing-parameter","title":"curl Example (Missing Parameter)","text":"
curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Content-Type: application/json' \\\n  --data '{}'\n

Response:

{\n  \"success\": false,\n  \"message\": \"Missing parameters\",\n  \"error\": \"Missing required 'action' field in JSON body\"\n}\n
"},{"location":"API_LOGS/#curl-example-unauthorized_1","title":"curl Example (Unauthorized)","text":"
curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\"action\": \"update_api|devices\"}'\n

Response:

{\n  \"error\": \"Forbidden\"\n}\n
"},{"location":"API_LOGS/#notes","title":"Notes","text":""},{"location":"API_MESSAGING_IN_APP/","title":"In-app Notifications API","text":"

Manage in-app notifications for users. Notifications can be written, retrieved, marked as read, or deleted.

"},{"location":"API_MESSAGING_IN_APP/#write-notification","title":"Write Notification","text":"

Request Body:

{\n  \"content\": \"This is a test notification\",\n  \"level\": \"alert\"   // optional, one of [\"interrupt\",\"info\",\"alert\"]; default: \"alert\"\n}\n

Response:

{ \"success\": true }\n

"},{"location":"API_MESSAGING_IN_APP/#curl-example","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/write\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"content\": \"This is a test notification\",\n    \"level\": \"alert\"\n  }'\n
"},{"location":"API_MESSAGING_IN_APP/#get-unread-notifications","title":"Get Unread Notifications","text":"

Response:

[\n  {\n    \"timestamp\": \"2025-10-10T12:34:56\",\n    \"guid\": \"f47ac10b-58cc-4372-a567-0e02b2c3d479\",\n    \"read\": 0,\n    \"level\": \"alert\",\n    \"content\": \"This is a test notification\"\n  }\n]\n

"},{"location":"API_MESSAGING_IN_APP/#curl-example_1","title":"curl Example","text":"
curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/unread\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_MESSAGING_IN_APP/#mark-all-notifications-as-read","title":"Mark All Notifications as Read","text":"

Response:

json { \"success\": true }

"},{"location":"API_MESSAGING_IN_APP/#curl-example_2","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/read/all\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_MESSAGING_IN_APP/#mark-single-notification-as-read","title":"Mark Single Notification as Read","text":"

Response (success):

json { \"success\": true }

Response (failure):

json { \"success\": false, \"error\": \"Notification not found\" }

"},{"location":"API_MESSAGING_IN_APP/#curl-example_3","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/read/f47ac10b-58cc-4372-a567-0e02b2c3d479\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_MESSAGING_IN_APP/#delete-all-notifications","title":"Delete All Notifications","text":"

Response:

json { \"success\": true }

"},{"location":"API_MESSAGING_IN_APP/#curl-example_4","title":"curl Example","text":"
curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/delete\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_MESSAGING_IN_APP/#delete-single-notification","title":"Delete Single Notification","text":"

Response (success):

json { \"success\": true }

Response (failure):

json { \"success\": false, \"error\": \"Notification not found\" }

"},{"location":"API_MESSAGING_IN_APP/#curl-example_5","title":"curl Example","text":"
curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/delete/f47ac10b-58cc-4372-a567-0e02b2c3d479\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_METRICS/","title":"Metrics API Endpoint","text":"

The /metrics endpoint exposes Prometheus-compatible metrics for NetAlertX, including aggregate device counts and per-device status.

"},{"location":"API_METRICS/#endpoint-details","title":"Endpoint Details","text":""},{"location":"API_METRICS/#example-output","title":"Example Output","text":"
netalertx_connected_devices 31\nnetalertx_offline_devices 54\nnetalertx_down_devices 0\nnetalertx_new_devices 0\nnetalertx_archived_devices 31\nnetalertx_favorite_devices 2\nnetalertx_my_devices 54\n\nnetalertx_device_status{device=\"Net - Huawei\", mac=\"Internet\", ip=\"1111.111.111.111\", vendor=\"None\", first_connection=\"2021-01-01 00:00:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Router\", device_status=\"Online\"} 1\nnetalertx_device_status{device=\"Net - USG\", mac=\"74:ac:74:ac:74:ac\", ip=\"192.168.1.1\", vendor=\"Ubiquiti Networks Inc.\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-06-07 08:16:49\", dev_type=\"Firewall\", device_status=\"Archived\"} 1\nnetalertx_device_status{device=\"Raspberry Pi 4 LAN\", mac=\"74:ac:74:ac:74:74\", ip=\"192.168.1.9\", vendor=\"Raspberry Pi Trading Ltd\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Singleboard Computer (SBC)\", device_status=\"Online\"} 1\n...\n
"},{"location":"API_METRICS/#metrics-overview","title":"Metrics Overview","text":""},{"location":"API_METRICS/#1-aggregate-device-counts","title":"1. Aggregate Device Counts","text":"Metric Description netalertx_connected_devices Devices currently connected netalertx_offline_devices Devices currently offline netalertx_down_devices Down/unreachable devices netalertx_new_devices Recently detected devices netalertx_archived_devices Archived devices netalertx_favorite_devices User-marked favorites netalertx_my_devices Devices associated with the current user"},{"location":"API_METRICS/#2-per-device-status","title":"2. Per-Device Status","text":"

Metric: netalertx_device_status. Each device entry carries descriptive labels: device, mac, ip, vendor, first_connection, last_connection, dev_type, and device_status.

The metric value is always 1 (a presence indicator); the combination of labels identifies the device.
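On the scraper side, the labels can be pulled out of the exposition text with a simple regex. A Python sketch (helper name assumed, not part of NetAlertX):

```python
import re

def parse_device_status(metrics_text):
    """Return one dict of labels per netalertx_device_status line."""
    devices = []
    for line in metrics_text.splitlines():
        match = re.match(r'netalertx_device_status\{(.+)\}\s+1\s*$', line)
        if match:
            # Labels look like key="value", separated by commas.
            devices.append(dict(re.findall(r'(\w+)="([^"]*)"', match.group(1))))
    return devices

sample = 'netalertx_device_status{device="Net - USG", mac="74:ac:74:ac:74:ac", device_status="Archived"} 1'
parse_device_status(sample)[0]["device_status"]  # "Archived"
```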

"},{"location":"API_METRICS/#querying-with-curl","title":"Querying with curl","text":"
curl 'http://<server_ip>:<GRAPHQL_PORT>/metrics' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: text/plain'\n

Replace the placeholders <server_ip>, <GRAPHQL_PORT>, and <API_TOKEN> with your server address, port, and API token.

"},{"location":"API_METRICS/#prometheus-scraping-configuration","title":"Prometheus Scraping Configuration","text":"
scrape_configs:\n  - job_name: 'netalertx'\n    metrics_path: /metrics\n    scheme: http\n    scrape_interval: 60s\n    static_configs:\n      - targets: ['<server_ip>:<GRAPHQL_PORT>']\n    authorization:\n      type: Bearer\n      credentials: <API_TOKEN>\n
"},{"location":"API_METRICS/#grafana-dashboard-template","title":"Grafana Dashboard Template","text":"

Sample template JSON: Download

"},{"location":"API_NETTOOLS/","title":"Net Tools API Endpoints","text":"

The Net Tools API provides network diagnostic utilities, including Wake-on-LAN, traceroute, speed testing, DNS resolution, nmap scanning, and internet connection information.

All endpoints require authorization via Bearer token.

"},{"location":"API_NETTOOLS/#endpoints","title":"Endpoints","text":""},{"location":"API_NETTOOLS/#1-wake-on-lan","title":"1. Wake-on-LAN","text":"

Request Body (JSON):

{\n  \"devMac\": \"AA:BB:CC:DD:EE:FF\"\n}\n

Response (success):

{\n  \"success\": true,\n  \"message\": \"WOL packet sent\",\n  \"output\": \"Sent magic packet to AA:BB:CC:DD:EE:FF\"\n}\n

Error Responses:

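Under the hood, Wake-on-LAN works by broadcasting a "magic packet": 6 bytes of 0xFF followed by the target MAC repeated 16 times (102 bytes total). A Python sketch of constructing one (helper name assumed; this is not NetAlertX's own implementation):

```python
def build_magic_packet(mac):
    """Build a 102-byte Wake-on-LAN magic packet for the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC must be 6 bytes")
    # 6 synchronization bytes of 0xFF, then the MAC repeated 16 times.
    return b"\xff" * 6 + mac_bytes * 16

packet = build_magic_packet("AA:BB:CC:DD:EE:FF")
len(packet)  # 102
# To send: broadcast over UDP, e.g. socket.sendto(packet, ("255.255.255.255", 9))
```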
"},{"location":"API_NETTOOLS/#2-traceroute","title":"2. Traceroute","text":"

Request Body:

{\n  \"devLastIP\": \"192.168.1.1\"\n}\n

Response (success):

{\n  \"success\": true,\n  \"output\": \"traceroute output as string\"\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#3-speedtest","title":"3. Speedtest","text":"

Response (success):

{\n  \"success\": true,\n  \"output\": [\n    \"Ping: 15 ms\",\n    \"Download: 120.5 Mbit/s\",\n    \"Upload: 22.4 Mbit/s\"\n  ]\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#4-dns-lookup-nslookup","title":"4. DNS Lookup (nslookup)","text":"

Request Body:

{\n  \"devLastIP\": \"8.8.8.8\"\n}\n

Response (success):

{\n  \"success\": true,\n  \"output\": [\n    \"Server: 8.8.8.8\",\n    \"Address: 8.8.8.8#53\",\n    \"Name: google-public-dns-a.google.com\"\n  ]\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#5-nmap-scan","title":"5. Nmap Scan","text":"

Request Body:

{\n  \"scan\": \"192.168.1.0/24\",\n  \"mode\": \"fast\"\n}\n

Supported Modes:

Mode nmap Arguments fast -F normal default detail -A skipdiscovery -Pn
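The mode-to-argument mapping above can be mirrored client-side when building requests, so unknown modes are rejected before hitting the API. A sketch (mapping taken from the table; the helper is hypothetical):

```python
# nmap arguments per supported mode, from the table above.
NMAP_MODE_ARGS = {
    "fast": "-F",
    "normal": "",        # default nmap arguments
    "detail": "-A",
    "skipdiscovery": "-Pn",
}

def nmap_request_body(target, mode="fast"):
    """Build the JSON body for POST /nettools/nmap, rejecting unknown modes."""
    if mode not in NMAP_MODE_ARGS:
        raise ValueError(f"unsupported mode: {mode}")
    return {"scan": target, "mode": mode}
```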

Response (success):

{\n  \"success\": true,\n  \"mode\": \"fast\",\n  \"ip\": \"192.168.1.0/24\",\n  \"output\": [\n    \"Starting Nmap 7.91\",\n    \"Host 192.168.1.1 is up\",\n    \"... scan results ...\"\n  ]\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#6-internet-connection-info","title":"6. Internet Connection Info","text":"

Response (success):

{\n  \"success\": true,\n  \"output\": \"IP: 203.0.113.5 City: Sydney Country: AU Org: Example ISP\"\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#example-curl-requests","title":"Example curl Requests","text":"

Wake-on-LAN:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/wakeonlan\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devMac\":\"AA:BB:CC:DD:EE:FF\"}'\n

Traceroute:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/traceroute\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devLastIP\":\"192.168.1.1\"}'\n

Speedtest:

curl \"http://<server_ip>:<GRAPHQL_PORT>/nettools/speedtest\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Nslookup:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/nslookup\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devLastIP\":\"8.8.8.8\"}'\n

Nmap Scan:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/nmap\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"scan\":\"192.168.1.0/24\",\"mode\":\"fast\"}'\n

Internet Info:

curl \"http://<server_ip>:<GRAPHQL_PORT>/nettools/internetinfo\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_OLD/","title":"[Deprecated] API endpoints","text":"

Warning

Some of these endpoints will be deprecated soon. Please refer to the new API endpoints docs for details on the new API layer.

NetAlertX comes with several API endpoints. All requests need to be authorized, either by executing them in a logged-in browser session or by passing the value of the API_TOKEN setting as a bearer token, for example:

curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n
"},{"location":"API_OLD/#api-endpoint-graphql","title":"API Endpoint: GraphQL","text":""},{"location":"API_OLD/#example-query-to-fetch-devices","title":"Example Query to Fetch Devices","text":"

First, let's define the GraphQL query to fetch devices with pagination and sorting options.

query GetDevices($options: PageQueryOptionsInput) {\n  devices(options: $options) {\n    devices {\n      rowid\n      devMac\n      devName\n      devOwner\n      devType\n      devVendor\n      devLastConnection\n      devStatus\n    }\n    count\n  }\n}\n

See also: Debugging GraphQL issues

"},{"location":"API_OLD/#curl-command","title":"curl Command","text":"

You can use the following curl command to execute the query.

curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n
"},{"location":"API_OLD/#explanation","title":"Explanation:","text":"
  1. GraphQL Query:
     - The query parameter contains the GraphQL query as a string.
     - The variables parameter contains the input variables for the query.
  2. Query Variables:
     - page: Specifies the page number of results to fetch.
     - limit: Specifies the number of results per page.
     - sort: Specifies the sorting options, with field being the field to sort by and order being the sort order (asc for ascending or desc for descending).
     - search: A search term to filter the devices.
     - status: The status filter to apply (valid values are my_devices (determined by the UI_MY_DEVICES setting), connected, favorites, new, down, archived, offline).
  3. curl Command:
     - The -X POST option specifies that we are making a POST request.
     - The -H \"Content-Type: application/json\" option sets the content type of the request to JSON.
     - The --data option provides the request payload, which includes the GraphQL query and variables.
"},{"location":"API_OLD/#sample-response","title":"Sample Response","text":"

The response will be in JSON format, similar to the following:

{\n  \"data\": {\n    \"devices\": {\n      \"devices\": [\n        {\n          \"rowid\": 1,\n          \"devMac\": \"00:11:22:33:44:55\",\n          \"devName\": \"Device 1\",\n          \"devOwner\": \"Owner 1\",\n          \"devType\": \"Type 1\",\n          \"devVendor\": \"Vendor 1\",\n          \"devLastConnection\": \"2025-01-01T00:00:00Z\",\n          \"devStatus\": \"connected\"\n        },\n        {\n          \"rowid\": 2,\n          \"devMac\": \"66:77:88:99:AA:BB\",\n          \"devName\": \"Device 2\",\n          \"devOwner\": \"Owner 2\",\n          \"devType\": \"Type 2\",\n          \"devVendor\": \"Vendor 2\",\n          \"devLastConnection\": \"2025-01-02T00:00:00Z\",\n          \"devStatus\": \"connected\"\n        }\n      ],\n      \"count\": 2\n    }\n  }\n}\n
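Client code typically drills into data.devices.devices to reach the device list. A Python sketch of unpacking the response shown above (helper name assumed; expects the raw JSON string):

```python
import json

def extract_devices(response_text):
    """Return (device_list, count) from a GetDevices GraphQL response."""
    payload = json.loads(response_text)
    result = payload["data"]["devices"]
    return result["devices"], result["count"]

sample = '{"data": {"devices": {"devices": [{"devMac": "00:11:22:33:44:55", "devStatus": "connected"}], "count": 1}}}'
devices, count = extract_devices(sample)
devices[0]["devMac"]  # "00:11:22:33:44:55"
```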
"},{"location":"API_OLD/#api-endpoint-json-files","title":"API Endpoint: JSON files","text":"

This API endpoint retrieves static files that are periodically updated.

"},{"location":"API_OLD/#when-are-the-endpoints-updated","title":"When are the endpoints updated","text":"

The endpoint files are regenerated whenever the underlying objects change.

"},{"location":"API_OLD/#location-of-the-endpoints","title":"Location of the endpoints","text":"

In the container, these files are located under the API directory (default: /tmp/api/, configurable via NETALERTX_API environment variable). You can access them via the /php/server/query_json.php?file=user_notifications.json endpoint.

"},{"location":"API_OLD/#available-endpoints","title":"Available endpoints","text":"

You can access the following files:

File name Description notification_json_final.json The json version of the last notification (e.g. used for webhooks - sample JSON). table_devices.json All of the available Devices detected by the app. table_plugins_events.json The list of the unprocessed (pending) notification events (plugins_events DB table). table_plugins_history.json The list of notification events history. table_plugins_objects.json The content of the plugins_objects table. Find more info on the Plugin system here language_strings.json The content of the language_strings table, which in turn is loaded from the plugins config.json definitions. table_custom_endpoint.json A custom endpoint generated by the SQL query specified by the API_CUSTOM_SQL setting. table_settings.json The content of the settings table. app_state.json Contains the current application state."},{"location":"API_OLD/#json-data-format","title":"JSON Data format","text":"

The endpoints starting with the table_ prefix expose most, if not all, of the data in the corresponding database table. The common format is:

{\n  \"data\": [\n        {\n          \"db_column_name\": \"data\",\n          \"db_column_name2\": \"data2\"      \n        }, \n        {\n          \"db_column_name\": \"data3\",\n          \"db_column_name2\": \"data4\" \n        }\n    ]\n}\n\n

Example JSON of the table_devices.json endpoint with two Devices (database rows):

{\n  \"data\": [\n        {\n          \"devMac\": \"Internet\",\n          \"devName\": \"Net - Huawei\",\n          \"devType\": \"Router\",\n          \"devVendor\": null,\n          \"devGroup\": \"Always on\",\n          \"devFirstConnection\": \"2021-01-01 00:00:00\",\n          \"devLastConnection\": \"2021-01-28 22:22:11\",\n          \"devLastIP\": \"192.168.1.24\",\n          \"devStaticIP\": 0,\n          \"devPresentLastScan\": 1,\n          \"devLastNotification\": \"2023-01-28 22:22:28.998715\",\n          \"devIsNew\": 0,\n          \"devParentMAC\": \"\",\n          \"devParentPort\": \"\",\n          \"devIcon\": \"globe\"\n        }, \n        {\n          \"devMac\": \"a4:8f:ff:aa:ba:1f\",\n          \"devName\": \"Net - USG\",\n          \"devType\": \"Firewall\",\n          \"devVendor\": \"Ubiquiti Inc\",\n          \"devGroup\": \"\",\n          \"devFirstConnection\": \"2021-02-12 22:05:00\",\n          \"devLastConnection\": \"2021-07-17 15:40:00\",\n          \"devLastIP\": \"192.168.1.1\",\n          \"devStaticIP\": 1,\n          \"devPresentLastScan\": 1,\n          \"devLastNotification\": \"2021-07-17 15:40:10.667717\",\n          \"devIsNew\": 0,\n          \"devParentMAC\": \"Internet\",\n          \"devParentPort\": 1,\n          \"devIcon\": \"shield-halved\"\n      }\n    ]\n}\n\n
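Because all table_-prefixed endpoints share the same {"data": [...]} envelope, one helper can iterate rows from any of them. A Python sketch (helper name hypothetical):

```python
import json

def table_rows(table_json_text):
    """Return the list of row dicts from any table_*.json payload."""
    return json.loads(table_json_text)["data"]

# Trimmed-down version of the table_devices.json example above.
sample = '{"data": [{"devMac": "Internet", "devName": "Net - Huawei"}, {"devMac": "a4:8f:ff:aa:ba:1f", "devName": "Net - USG"}]}'
names = [row["devName"] for row in table_rows(sample)]
# names == ["Net - Huawei", "Net - USG"]
```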
"},{"location":"API_OLD/#api-endpoint-prometheus-exporter","title":"API Endpoint: Prometheus Exporter","text":""},{"location":"API_OLD/#example-output-of-the-metrics-endpoint","title":"Example Output of the /metrics Endpoint","text":"

Below is a representative snippet of the output returned by the /metrics endpoint for NetAlertX. It includes both aggregate counters and per-device netalertx_device_status entries.

netalertx_connected_devices 31\nnetalertx_offline_devices 54\nnetalertx_down_devices 0\nnetalertx_new_devices 0\nnetalertx_archived_devices 31\nnetalertx_favorite_devices 2\nnetalertx_my_devices 54\n\nnetalertx_device_status{device=\"Net - Huawei\", mac=\"Internet\", ip=\"1111.111.111.111\", vendor=\"None\", first_connection=\"2021-01-01 00:00:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Router\", device_status=\"Online\"} 1\nnetalertx_device_status{device=\"Net - USG\", mac=\"74:ac:74:ac:74:ac\", ip=\"192.168.1.1\", vendor=\"Ubiquiti Networks Inc.\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-06-07 08:16:49\", dev_type=\"Firewall\", device_status=\"Archived\"} 1\nnetalertx_device_status{device=\"Raspberry Pi 4 LAN\", mac=\"74:ac:74:ac:74:74\", ip=\"192.168.1.9\", vendor=\"Raspberry Pi Trading Ltd\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Singleboard Computer (SBC)\", device_status=\"Online\"} 1\n...\n
"},{"location":"API_OLD/#metrics-explanation","title":"Metrics Explanation","text":""},{"location":"API_OLD/#1-aggregate-device-counts","title":"1. Aggregate Device Counts","text":"

Metric names prefixed with netalertx_ provide aggregated counts by device status:

These numeric values give a high-level overview of device distribution.

"},{"location":"API_OLD/#2-perdevice-status-with-labels","title":"2. Per\u2011Device Status with Labels","text":"

Each individual device is represented by a netalertx_device_status metric, with descriptive labels:

The metric value is always 1 (indicating presence or active state) and the combination of labels identifies the device.

"},{"location":"API_OLD/#how-to-query-with-curl","title":"How to Query with curl","text":"

To fetch the metrics from the NetAlertX exporter:

curl 'http://<server_ip>:<GRAPHQL_PORT>/metrics' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: text/plain'\n

Replace the placeholders <server_ip>, <GRAPHQL_PORT>, and <API_TOKEN> with your server address, port, and API token.

"},{"location":"API_OLD/#summary","title":"Summary","text":""},{"location":"API_OLD/#prometheus-scraping-configuration","title":"Prometheus Scraping Configuration","text":"
scrape_configs:\n  - job_name: 'netalertx'\n    metrics_path: /metrics\n    scheme: http\n    scrape_interval: 60s\n    static_configs:\n      - targets: ['<server_ip>:<GRAPHQL_PORT>']\n    authorization:\n      type: Bearer\n      credentials: <API_TOKEN>\n
"},{"location":"API_OLD/#grafana-template","title":"Grafana template","text":"

Grafana template sample: Download json

"},{"location":"API_OLD/#api-endpoint-log-files","title":"API Endpoint: /log files","text":"

This API endpoint retrieves files from the /tmp/log folder.

File Description IP_changes.log Logs of IP address changes app.log Main application log app.php_errors.log PHP error log app_front.log Frontend application log app_nmap.log Logs of Nmap scan results db_is_locked.log Logs when the database is locked execution_queue.log Logs of execution queue activities plugins/ Directory for temporary plugin-related files (not accessible) report_output.html HTML report output report_output.json JSON format report output report_output.txt Text format report output stderr.log Logs of standard error output stdout.log Logs of standard output"},{"location":"API_OLD/#api-endpoint-config-files","title":"API Endpoint: /config files","text":"

This API endpoint retrieves files from the /data/config folder.

File Description devices.csv Devices csv file app.conf Application config file"},{"location":"API_ONLINEHISTORY/","title":"Online History API Endpoints","text":"

Manage the online history records of devices. Currently, the API supports deletion of all history entries. All endpoints require authorization.

"},{"location":"API_ONLINEHISTORY/#1-delete-online-history","title":"1. Delete Online History","text":"

Response (success):

{\n  \"success\": true,\n  \"message\": \"Deleted online history\"\n}\n

Error Responses:

"},{"location":"API_ONLINEHISTORY/#example-curl-request","title":"Example curl Request","text":"
curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/history\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_SESSIONS/","title":"Sessions API Endpoints","text":"

Track and manage device connection sessions. Sessions record when a device connects or disconnects on the network.

"},{"location":"API_SESSIONS/#create-a-session","title":"Create a Session","text":"

Request Body:

json { \"mac\": \"AA:BB:CC:DD:EE:FF\", \"ip\": \"192.168.1.10\", \"start_time\": \"2025-08-01T10:00:00\", \"end_time\": \"2025-08-01T12:00:00\", // optional \"event_type_conn\": \"Connected\", // optional, default \"Connected\" \"event_type_disc\": \"Disconnected\" // optional, default \"Disconnected\" }

Response:

json { \"success\": true, \"message\": \"Session created for MAC AA:BB:CC:DD:EE:FF\" }

"},{"location":"API_SESSIONS/#curl-example","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/sessions/create\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"mac\": \"AA:BB:CC:DD:EE:FF\",\n    \"ip\": \"192.168.1.10\",\n    \"start_time\": \"2025-08-01T10:00:00\",\n    \"end_time\": \"2025-08-01T12:00:00\",\n    \"event_type_conn\": \"Connected\",\n    \"event_type_disc\": \"Disconnected\"\n  }'\n\n
"},{"location":"API_SESSIONS/#delete-sessions","title":"Delete Sessions","text":"

Request Body:

json { \"mac\": \"AA:BB:CC:DD:EE:FF\" }

Response:

json { \"success\": true, \"message\": \"Deleted sessions for MAC AA:BB:CC:DD:EE:FF\" }

"},{"location":"API_SESSIONS/#curl-example_1","title":"curl Example","text":"
curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/sessions/delete\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"mac\": \"AA:BB:CC:DD:EE:FF\"\n  }'\n
"},{"location":"API_SESSIONS/#list-sessions","title":"List Sessions","text":"

Query Parameters:

Example:

/sessions/list?mac=AA:BB:CC:DD:EE:FF&start_date=2025-08-01&end_date=2025-08-21

Response:

json { \"success\": true, \"sessions\": [ { \"ses_MAC\": \"AA:BB:CC:DD:EE:FF\", \"ses_Connection\": \"2025-08-01 10:00\", \"ses_Disconnection\": \"2025-08-01 12:00\", \"ses_Duration\": \"2h 0m\", \"ses_IP\": \"192.168.1.10\", \"ses_Info\": \"\" } ] }
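The ses_Duration field is derived from the connection and disconnection timestamps, so the value can be reproduced client-side. A Python sketch (helper name and "Xh Ym" format assumed from the sample response):

```python
from datetime import datetime

def session_duration(connection, disconnection, fmt="%Y-%m-%d %H:%M"):
    """Format the gap between ses_Connection and ses_Disconnection as 'Xh Ym'."""
    delta = datetime.strptime(disconnection, fmt) - datetime.strptime(connection, fmt)
    hours, remainder = divmod(int(delta.total_seconds()), 3600)
    return f"{hours}h {remainder // 60}m"

session_duration("2025-08-01 10:00", "2025-08-01 12:00")  # "2h 0m"
```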

"},{"location":"API_SESSIONS/#curl-example_2","title":"curl Example","text":"
curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/sessions/list?mac=AA:BB:CC:DD:EE:FF&start_date=2025-08-01&end_date=2025-08-21\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_SESSIONS/#calendar-view-of-sessions","title":"Calendar View of Sessions","text":"

Query Parameters:

Example:

/sessions/calendar?start=2025-08-01&end=2025-08-21

Response:

json { \"success\": true, \"sessions\": [ { \"resourceId\": \"AA:BB:CC:DD:EE:FF\", \"title\": \"\", \"start\": \"2025-08-01T10:00:00\", \"end\": \"2025-08-01T12:00:00\", \"color\": \"#00a659\", \"tooltip\": \"Connection: 2025-08-01 10:00\\nDisconnection: 2025-08-01 12:00\\nIP: 192.168.1.10\", \"className\": \"no-border\" } ] }

"},{"location":"API_SESSIONS/#curl-example_3","title":"curl Example","text":"
curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/sessions/calendar?start=2025-08-01&end=2025-08-21\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_SESSIONS/#device-sessions","title":"Device Sessions","text":"

Query Parameters:

Example:

/sessions/AA:BB:CC:DD:EE:FF?period=7 days

Response:

json { \"success\": true, \"sessions\": [ { \"ses_MAC\": \"AA:BB:CC:DD:EE:FF\", \"ses_Connection\": \"2025-08-01 10:00\", \"ses_Disconnection\": \"2025-08-01 12:00\", \"ses_Duration\": \"2h 0m\", \"ses_IP\": \"192.168.1.10\", \"ses_Info\": \"\" } ] }

"},{"location":"API_SESSIONS/#curl-example_4","title":"curl Example","text":"
curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/sessions/AA:BB:CC:DD:EE:FF?period=7%20days\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_SESSIONS/#session-events-summary","title":"Session Events Summary","text":"

Query Parameters:

Example:

/sessions/session-events?type=all&period=7 days

Response: Returns a list of events or sessions with formatted connection, disconnection, duration, and IP information.

"},{"location":"API_SESSIONS/#curl-example_5","title":"curl Example","text":"
curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/sessions/session-events?type=all&period=7%20days\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_SETTINGS/","title":"Settings API Endpoints","text":"

Retrieve application settings stored in the configuration system. This endpoint is useful for quickly fetching individual settings such as API_TOKEN or TIMEZONE.

For bulk or structured access (all settings, schema details, or filtering), use the GraphQL API Endpoint.

"},{"location":"API_SETTINGS/#get-a-setting","title":"Get a Setting","text":"

Path Parameter:

Authorization: Requires a valid API token in the Authorization header.

"},{"location":"API_SETTINGS/#curl-example-success","title":"curl Example (Success)","text":"
curl 'http://<server_ip>:<GRAPHQL_PORT>/settings/API_TOKEN' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"success\": true,\n  \"value\": \"my-secret-token\"\n}\n
"},{"location":"API_SETTINGS/#curl-example-invalid-key","title":"curl Example (Invalid Key)","text":"
curl 'http://<server_ip>:<GRAPHQL_PORT>/settings/DOES_NOT_EXIST' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"success\": true,\n  \"value\": null\n}\n
"},{"location":"API_SETTINGS/#curl-example-unauthorized","title":"curl Example (Unauthorized)","text":"
curl 'http://<server_ip>:<GRAPHQL_PORT>/settings/API_TOKEN' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"error\": \"Forbidden\"\n}\n
"},{"location":"API_SETTINGS/#notes","title":"Notes","text":"
curl 'http://<server_ip>:<GRAPHQL_PORT>/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetSettings { settings { settings { setKey setName setDescription setType setOptions setGroup setValue setEvents setOverriddenByEnv } count } }\"\n  }'\n

See the GraphQL API Endpoint for more details.

"},{"location":"API_SYNC/","title":"Sync API Endpoint","text":"

The /sync endpoint is used by the SYNC plugin to synchronize data between multiple NetAlertX instances (e.g., from a node to a hub). It supports both GET and POST requests.

"},{"location":"API_SYNC/#91-get-sync","title":"9.1 GET /sync","text":"

Fetches data from a node (typically called by the hub). The data is returned as a base64-encoded JSON file.

Example Request:

curl 'http://<server>:<GRAPHQL_PORT>/sync' \\\n  -H 'Authorization: Bearer <API_TOKEN>'\n

Response Example:

{\n  \"node_name\": \"NODE-01\",\n  \"status\": 200,\n  \"message\": \"OK\",\n  \"data_base64\": \"eyJkZXZpY2VzIjogW3siZGV2TWFjIjogIjAwOjExOjIyOjMzOjQ0OjU1IiwiZGV2TmFtZSI6ICJEZXZpY2UgMSJ9XSwgImNvdW50Ijog1fQ==\",\n  \"timestamp\": \"2025-08-24T10:15:00+10:00\"\n}\n
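On the receiving side, the data_base64 field decodes back to plain JSON, assuming the node did not encrypt the payload first. A Python sketch (helper name hypothetical):

```python
import base64
import json

def decode_sync_payload(data_base64):
    """Decode the base64-encoded JSON returned by GET /sync.
    Assumes an unencrypted payload; encrypted nodes require
    the shared key to be applied first."""
    return json.loads(base64.b64decode(data_base64))

# Round-trip demo with a tiny stand-in payload.
encoded = base64.b64encode(b'{"devices": [{"devMac": "00:11:22:33:44:55"}]}').decode()
decode_sync_payload(encoded)["devices"][0]["devMac"]  # "00:11:22:33:44:55"
```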

Notes:

"},{"location":"API_SYNC/#92-post-sync","title":"9.2 POST /sync","text":"

The POST endpoint is used by nodes to send data to the hub. The hub expects the data as form-encoded fields (application/x-www-form-urlencoded or multipart/form-data). The hub then stores the data in the plugin log folder for processing.

"},{"location":"API_SYNC/#required-fields","title":"Required Fields","text":"Field Type Description data string The payload from the plugin or devices. Typically plain text, JSON, or encrypted Base64 data. In your Python script, encrypt_data() is applied before sending. node_name string The name of the node sending the data. Matches the node\u2019s SYNC_node_name setting. Used to generate the filename on the hub. plugin string The name of the plugin sending the data. Determines the filename prefix (last_result.<plugin>...). file_path string (optional) Path of the local file being sent. Used only for logging/debugging purposes on the hub; not required for processing."},{"location":"API_SYNC/#how-the-hub-processes-the-post-data","title":"How the Hub Processes the POST Data","text":"
  1. Receives the data and validates the API token.
  2. Stores the raw payload in:
INSTALL_PATH/log/plugins/last_result.<plugin>.encoded.<node_name>.<sequence>.log\n
  3. Once the file has been processed, it is renamed to:
processed_last_result.<plugin>.<node_name>.<sequence>.log\n
"},{"location":"API_SYNC/#example-post-payload","title":"Example POST Payload","text":"

If a node is sending device data:

curl -X POST 'http://<hub>:<PORT>/sync' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -F 'data={\"data\":[{\"devMac\":\"00:11:22:33:44:55\",\"devName\":\"Device 1\",\"devVendor\":\"Vendor A\",\"devLastIP\":\"192.168.1.10\"}]}' \\\n  -F 'node_name=NODE-01' \\\n  -F 'plugin=SYNC'\n
"},{"location":"API_SYNC/#key-notes","title":"Key Notes","text":"

Storage Details:

last_result.<plugin>.encoded.<node_name>.<sequence>.log\n
"},{"location":"API_SYNC/#93-notes-and-best-practices","title":"9.3 Notes and Best Practices","text":""},{"location":"API_TESTS/","title":"Tests","text":""},{"location":"API_TESTS/#unit-tests","title":"Unit Tests","text":"

Warning

Please note these tests modify data in the database.

  1. See the /test directory for available test cases. These are not exhaustive but cover the main API endpoints.
  2. To run a test case, open a shell in the container: sudo docker exec -it netalertx /bin/bash
  3. Inside the container, install pytest (if not already installed): pip install pytest
  4. Run a specific test case: pytest /app/test/TESTFILE.py
"},{"location":"AUTHELIA/","title":"Authelia","text":""},{"location":"AUTHELIA/#authelia-support","title":"Authelia support","text":"

Warning

This is community-contributed content and a work in progress. Contributions are welcome.

theme: dark\n\ndefault_2fa_method: \"totp\"\n\nserver:\n  address: 0.0.0.0:9091\n  endpoints:\n    enable_expvars: false\n    enable_pprof: false\n    authz:\n      forward-auth:\n        implementation: 'ForwardAuth'\n        authn_strategies:\n          - name: 'HeaderAuthorization'\n            schemes:\n              - 'Basic'\n          - name: 'CookieSession'\n      ext-authz:\n        implementation: 'ExtAuthz'\n        authn_strategies:\n          - name: 'HeaderAuthorization'\n            schemes:\n              - 'Basic'\n          - name: 'CookieSession'\n      auth-request:\n        implementation: 'AuthRequest'\n        authn_strategies:\n          - name: 'HeaderAuthRequestProxyAuthorization'\n            schemes:\n              - 'Basic'\n          - name: 'CookieSession'\n      legacy:\n        implementation: 'Legacy'\n        authn_strategies:\n          - name: 'HeaderLegacy'\n          - name: 'CookieSession'\n  disable_healthcheck: false\n  tls:\n    key: \"\"\n    certificate: \"\"\n    client_certificates: []\n  headers:\n    csp_template: \"\"\n\nlog:\n  ## Level of verbosity for logs: info, debug, trace.\n  level: info\n\n###############################################################\n# The most important section\n###############################################################\naccess_control:\n  ## Default policy can either be 'bypass', 'one_factor', 'two_factor' or 'deny'.\n  default_policy: deny\n  networks:\n    - name: internal\n      networks:\n        - '192.168.0.0/18'\n        - '10.10.10.0/8' # Zerotier\n    - name: private\n      networks:\n        - '172.16.0.0/12'\n  rules:\n    - networks:\n        - private\n      domain:\n        - '*'\n      policy: bypass\n    - networks:\n        - internal\n      domain:\n        - '*'\n      policy: bypass\n    - domain:\n        # exclude itself from auth, should not happen as we use Traefik middleware on a case-by-case screnario\n        - 'auth.MYDOMAIN1.TLD'\n        - 
'authelia.MYDOMAIN1.TLD'\n        - 'auth.MYDOMAIN2.TLD'\n        - 'authelia.MYDOMAIN2.TLD'\n      policy: bypass\n    - domain:\n        #All subdomains match\n        - 'MYDOMAIN1.TLD'\n        - '*.MYDOMAIN1.TLD'\n      policy: two_factor\n    - domain:\n        # This will not work yet as Authelia does not support multi-domain authentication\n        - 'MYDOMAIN2.TLD'\n        - '*.MYDOMAIN2.TLD'\n      policy: two_factor\n\n\n############################################################\nidentity_validation:\n  reset_password:\n    jwt_secret: \"[REDACTED]\"\n\nidentity_providers:\n  oidc:\n    enable_client_debug_messages: true\n    enforce_pkce: public_clients_only\n    hmac_secret: [REDACTED]\n    lifespans:\n      authorize_code: 1m\n      id_token: 1h\n      refresh_token: 90m\n      access_token: 1h\n    cors:\n      endpoints:\n        - authorization\n        - token\n        - revocation\n        - introspection\n        - userinfo\n      allowed_origins:\n        - \"*\"\n      allowed_origins_from_client_redirect_uris: false\n    jwks:\n      - key: [REDACTED]\n        certificate_chain:\n    clients:\n      - client_id: portainer\n        client_name: Portainer\n        # generate secret with \"authelia crypto hash generate pbkdf2 --random --random.length 32 --random.charset alphanumeric\"\n        # Random Password: [REDACTED]\n        # Digest: [REDACTED]\n        client_secret: [REDACTED]\n        token_endpoint_auth_method: 'client_secret_post'\n        public: false\n        authorization_policy: two_factor\n        consent_mode: pre-configured #explicit\n        pre_configured_consent_duration: '6M' #Must be re-authorised every 6 Months\n        scopes:\n          - openid\n          #- groups #Currently not supported in Authelia V\n          - email\n          - profile\n        redirect_uris:\n          - https://portainer.MYDOMAIN1.TLD\n        userinfo_signed_response_alg: none\n\n      - client_id: openproject\n        client_name: 
OpenProject\n        # generate secret with \"authelia crypto hash generate pbkdf2 --random --random.length 32 --random.charset alphanumeric\"\n        # Random Password: [REDACTED]\n        # Digest: [REDACTED]\n        client_secret: [REDACTED]\n        token_endpoint_auth_method: 'client_secret_basic'\n        public: false\n        authorization_policy: two_factor\n        consent_mode: pre-configured #explicit\n        pre_configured_consent_duration: '6M' #Must be re-authorised every 6 Months\n        scopes:\n          - openid\n          #- groups #Currently not supported in Authelia V\n          - email\n          - profile\n        redirect_uris:\n          - https://op.MYDOMAIN.TLD\n        #grant_types:\n        #  - refresh_token\n        #  - authorization_code\n        #response_types:\n        #  - code\n        #response_modes:\n        #  - form_post\n        #  - query\n        #  - fragment\n        userinfo_signed_response_alg: none\n##################################################################\n\n\ntelemetry:\n  metrics:\n    enabled: false\n    address: tcp://0.0.0.0:9959\n\ntotp:\n  disable: false\n  issuer: authelia.com\n  algorithm: sha1\n  digits: 6\n  period: 30 ## The period in seconds a one-time password is valid for.\n  skew: 1\n  secret_size: 32\n\nwebauthn:\n  disable: false\n  timeout: 60s ## Adjust the interaction timeout for Webauthn dialogues.\n  display_name: Authelia\n  attestation_conveyance_preference: indirect\n  user_verification: preferred\n\nntp:\n  address: \"pool.ntp.org\"\n  version: 4\n  max_desync: 5s\n  disable_startup_check: false\n  disable_failure: false\n\nauthentication_backend:\n  password_reset:\n    disable: false\n    custom_url: \"\"\n  refresh_interval: 5m\n  file:\n    path: /config/users_database.yml\n    watch: true\n    password:\n      algorithm: argon2\n      argon2:\n        variant: argon2id\n        iterations: 3\n        memory: 65536\n        parallelism: 4\n        key_length: 32\n       
 salt_length: 16\n\npassword_policy:\n  standard:\n    enabled: false\n    min_length: 8\n    max_length: 0\n    require_uppercase: true\n    require_lowercase: true\n    require_number: true\n    require_special: true\n  ## zxcvbn is a well known and used password strength algorithm. It does not have tunable settings.\n  zxcvbn:\n    enabled: false\n    min_score: 3\n\nregulation:\n  max_retries: 3\n  find_time: 2m\n  ban_time: 5m\n\nsession:\n  name: authelia_session\n  secret: [REDACTED]\n  expiration: 60m\n  inactivity: 15m\n  cookies:\n    - domain: 'MYDOMAIN1.TLD'\n      authelia_url: 'https://auth.MYDOMAIN1.TLD'\n      name: 'authelia_session'\n      default_redirection_url: 'https://MYDOMAIN1.TLD'\n    - domain: 'MYDOMAIN2.TLD'\n      authelia_url: 'https://auth.MYDOMAIN2.TLD'\n      name: 'authelia_session_other'\n      default_redirection_url: 'https://MYDOMAIN2.TLD'\n\nstorage:\n  encryption_key: [REDACTED]\n  local:\n    path: /config/db.sqlite3\n\nnotifier:\n  disable_startup_check: true\n  smtp:\n    address: MYOTHERDOMAIN.TLD:465\n    timeout: 5s\n    username: \"USER@DOMAIN\"\n    password: \"[REDACTED]\"\n    sender: \"Authelia <postmaster@MYOTHERDOMAIN.TLD>\"\n    identifier: NAME@MYOTHERDOMAIN.TLD\n    subject: \"[Authelia] {title}\"\n    startup_check_address: postmaster@MYOTHERDOMAIN.TLD\n\n
"},{"location":"BACKUPS/","title":"Backing Things Up","text":"

Note

To back up 99% of your configuration, back up at least the /data/config folder. Database definitions can change between releases, so the safest method is to restore backups using the same app version they were taken from, then upgrade incrementally.

"},{"location":"BACKUPS/#what-to-back-up","title":"What to Back Up","text":"

There are four key artifacts you can use to back up your NetAlertX configuration:

File Description Limitations /db/app.db The application database Might be in an uncommitted state or corrupted /config/app.conf Configuration file Can be overridden using the APP_CONF_OVERRIDE variable /config/devices.csv CSV file containing device data Does not include historical data /config/workflows.json JSON file containing your workflows N/A"},{"location":"BACKUPS/#where-the-data-lives","title":"Where the Data Lives","text":"

Understanding where your data is stored helps you plan your backup strategy.

"},{"location":"BACKUPS/#core-configuration","title":"Core Configuration","text":"

Stored in /data/config/app.conf. This includes settings for:

(See Settings System for details.)

"},{"location":"BACKUPS/#device-data","title":"Device Data","text":"

Stored in /data/config/devices_<timestamp>.csv or /data/config/devices.csv, created by the CSV Backup CSVBCKP Plugin. Contains:

"},{"location":"BACKUPS/#historical-data","title":"Historical Data","text":"

Stored in /data/db/app.db (see Database Overview). Contains:

"},{"location":"BACKUPS/#backup-strategies","title":"Backup Strategies","text":"

The safest approach is to back up both the /db and /config folders regularly. Tools like Kopia make this simple and efficient.

If you can only keep a few files, prioritize:

  1. The latest devices_<timestamp>.csv or devices.csv
  2. app.conf
  3. workflows.json
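The priority list above can be sketched as a small copy script. The destination folder is hypothetical and the source paths assume the default /data layout, so adjust both to your setup:

```shell
# Hypothetical backup destination -- adjust to taste.
DEST="$HOME/netalertx-backups/$(date +%Y-%m-%d)"
mkdir -p "$DEST"

# Copy the highest-priority files if they exist (the CSV glob picks up
# both devices.csv and timestamped devices_<timestamp>.csv exports).
for f in /data/config/app.conf /data/config/workflows.json /data/config/devices*.csv; do
  if [ -f "$f" ]; then cp "$f" "$DEST/"; fi
done
echo "Backup written to $DEST"
```

Run it from cron or any scheduler to keep dated snapshots of the most important files.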

You can also download the app.conf and devices.csv files from the Maintenance section:

"},{"location":"BACKUPS/#scenario-1-full-backup-and-restore","title":"Scenario 1: Full Backup and Restore","text":"

Goal: Full recovery of your configuration and data.

"},{"location":"BACKUPS/#what-to-back-up_1","title":"\ud83d\udcbe What to Back Up","text":""},{"location":"BACKUPS/#how-to-restore","title":"\ud83d\udce5 How to Restore","text":"

Map these files into your container as described in the Setup documentation.

"},{"location":"BACKUPS/#scenario-2-corrupted-database","title":"Scenario 2: Corrupted Database","text":"

Goal: Recover configuration and device data when the database is lost or corrupted.

"},{"location":"BACKUPS/#what-to-back-up_2","title":"\ud83d\udcbe What to Back Up","text":""},{"location":"BACKUPS/#how-to-restore_1","title":"\ud83d\udce5 How to Restore","text":"
  1. Copy app.conf and workflows.json into /data/config/
  2. Rename and place devices_<timestamp>.csv \u2192 /data/config/devices.csv
  3. Restore via the Maintenance section under Devices \u2192 Bulk Editing

This recovers nearly all configuration, workflows, and device metadata.
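The file-copy part of the restore can be sketched in shell. This dry-run uses scratch folders under /tmp in place of your real backup location and /data/config, so nothing outside /tmp is touched; swap in the real paths when restoring for real:

```shell
# Scratch folders standing in for the real locations (adjust when restoring).
BACKUP=/tmp/netalertx-demo/backup     # where your backed-up files live
TARGET=/tmp/netalertx-demo/config     # stands in for /data/config
mkdir -p "$BACKUP" "$TARGET"
touch "$BACKUP/app.conf" "$BACKUP/workflows.json" "$BACKUP/devices_2024-01-01_120000.csv"

# Step 1: copy app.conf and workflows.json into the config folder.
cp "$BACKUP/app.conf" "$BACKUP/workflows.json" "$TARGET/"

# Step 2: pick the newest timestamped CSV export and rename it to devices.csv.
latest=$(ls -t "$BACKUP"/devices_*.csv | head -n 1)
cp "$latest" "$TARGET/devices.csv"

# The config folder now holds app.conf, devices.csv, and workflows.json;
# step 3 (restoring via Devices -> Bulk Editing) happens in the UI.
ls "$TARGET"
```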

"},{"location":"BACKUPS/#docker-based-backup-and-restore","title":"Docker-Based Backup and Restore","text":"

For users running NetAlertX via Docker, you can back up or restore directly from your host system \u2014 a convenient and scriptable option.

"},{"location":"BACKUPS/#full-backup-file-level","title":"Full Backup (File-Level)","text":"
  1. Stop the container:

bash docker stop netalertx

  2. Create a compressed archive of your configuration and database volumes:

bash docker run --rm -v local_path/config:/config -v local_path/db:/db alpine tar -cz /config /db > netalertx-backup.tar.gz

  3. Restart the container:

bash docker start netalertx

"},{"location":"BACKUPS/#restore-from-backup","title":"Restore from Backup","text":"
  1. Stop the container:

bash docker stop netalertx

  2. Restore from your backup file:

bash docker run --rm -i -v local_path/config:/config -v local_path/db:/db alpine tar -C / -xz < netalertx-backup.tar.gz

  3. Restart the container:

bash docker start netalertx

This approach uses a temporary, minimal alpine container to access Docker-managed volumes. The tar command creates or extracts an archive directly from your host\u2019s filesystem, making it fast, clean, and reliable for both automation and manual recovery.
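For unattended backups, the three steps can be chained into a single scheduled job. A hypothetical crontab entry reusing the local_path placeholders from the commands above, and assuming a writable /backups directory (note that % must be escaped as \% inside crontab):

```
# Daily at 03:30: stop NetAlertX, archive both volumes, restart.
30 3 * * * docker stop netalertx && docker run --rm -v local_path/config:/config -v local_path/db:/db alpine tar -cz /config /db > /backups/netalertx-$(date +\%F).tar.gz; docker start netalertx
```

The trailing docker start is separated by a semicolon rather than &&, so the container is restarted even if the archive step fails.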

"},{"location":"BACKUPS/#summary","title":"Summary","text":""},{"location":"BUILDS/","title":"NetAlertX Builds: Choose Your Path","text":"

NetAlertX provides different installation methods for different needs. This guide helps you choose the right path for security, experimentation, or development.

"},{"location":"BUILDS/#1-hardened-appliance-default-production","title":"1. Hardened Appliance (Default Production)","text":"

Note

Use this image if: You want to use NetAlertX securely.

"},{"location":"BUILDS/#who-is-this-for","title":"Who is this for?","text":"

All users who want a stable, secure, \"set-it-and-forget-it\" appliance.

"},{"location":"BUILDS/#methodology","title":"Methodology","text":""},{"location":"BUILDS/#source","title":"Source","text":"

Dockerfile (hardened target)

"},{"location":"BUILDS/#2-tinkerers-image-insecure-vm-style","title":"2. \"Tinkerer's\" Image (Insecure VM-Style)","text":"

Note

Use this image if: You want to experiment with NetAlertX.

"},{"location":"BUILDS/#who-is-this-for_1","title":"Who is this for?","text":"

Power users, developers, and \"tinkerers\" wanting a familiar \"VM-like\" experience.

"},{"location":"BUILDS/#methodology_1","title":"Methodology","text":""},{"location":"BUILDS/#source_1","title":"Source","text":"

Dockerfile.debian

"},{"location":"BUILDS/#3-contributors-devcontainer-project-developers","title":"3. Contributor's Devcontainer (Project Developers)","text":"

Note

Use this image if: You want to develop NetAlertX itself.

"},{"location":"BUILDS/#who-is-this-for_2","title":"Who is this for?","text":"

Project contributors who are actively writing and debugging code for NetAlertX.

"},{"location":"BUILDS/#methodology_2","title":"Methodology","text":""},{"location":"BUILDS/#source_2","title":"Source","text":"

Dockerfile (devcontainer target)

"},{"location":"BUILDS/#visualizing-the-trade-offs","title":"Visualizing the Trade-Offs","text":"

This chart compares the three builds across key attributes. A higher score means \"more of\" that attribute. Notice the clear trade-offs between security and development features.

"},{"location":"BUILDS/#build-process-origins","title":"Build Process & Origins","text":"

The final images originate from two different files and build paths. The main Dockerfile uses stages to create both the hardened and development container images.

"},{"location":"BUILDS/#official-build-path","title":"Official Build Path","text":"

Dockerfile -> builder (Stage 1) -> runner (Stage 2) -> hardened (Final Stage) (Production Image) + devcontainer (Final Stage) (Developer Image)

"},{"location":"BUILDS/#legacy-build-path","title":"Legacy Build Path","text":"

Dockerfile.debian -> \"Tinkerer's\" Image (Insecure VM-Style Image)

"},{"location":"COMMON_ISSUES/","title":"Common issues","text":""},{"location":"COMMON_ISSUES/#loading","title":"Loading...","text":"

Often, if the application is misconfigured, the Loading... dialog is displayed continuously. This is most likely caused by the backend failing to start. The Maintenance -> Logs section should give you more details on what's happening. If there is no exception, check the Portainer log, or start the container in the foreground (without the -d parameter) to observe any exceptions. It's advisable to enable trace or debug logging. Check the Debug tips for detailed instructions.

"},{"location":"COMMON_ISSUES/#incorrect-scan_subnets","title":"Incorrect SCAN_SUBNETS","text":"

One of the most common issues is not configuring SCAN_SUBNETS correctly. If this setting is misconfigured you will only see one or two devices in your devices list after a scan. Please read the subnets docs carefully to resolve this.

"},{"location":"COMMON_ISSUES/#duplicate-devices-and-notifications","title":"Duplicate devices and notifications","text":"

The app uses the MAC address as a unique identifier for devices. If a new MAC is detected, a new device is added to the application and corresponding notifications are triggered. This means that if the MAC of an existing device changes, the device will be logged as a new device. You can usually prevent this by changing the device's network configuration (on Android, iOS, or Windows) to stop using randomized MAC addresses for your network. See the Random Macs guide for details.

"},{"location":"COMMON_ISSUES/#permissions","title":"Permissions","text":"

Make sure your file permissions are set correctly.

"},{"location":"COMMON_ISSUES/#container-restarts-crashes","title":"Container restarts / crashes","text":""},{"location":"COMMON_ISSUES/#unable-to-resolve-host","title":"unable to resolve host","text":""},{"location":"COMMON_ISSUES/#invalid-json","title":"Invalid JSON","text":"

Check the Invalid JSON errors debug help docs on how to proceed.

"},{"location":"COMMON_ISSUES/#sudo-execution-failing-eg-on-arpscan-on-a-raspberry-pi-4","title":"sudo execution failing (e.g.: on arpscan) on a Raspberry Pi 4","text":"

sudo: unexpected child termination condition: 0

Resolution based on this issue

wget ftp.us.debian.org/debian/pool/main/libs/libseccomp/libseccomp2_2.5.3-2_armhf.deb\nsudo dpkg -i libseccomp2_2.5.3-2_armhf.deb\n

The link above will probably break over time as well. Go to https://packages.debian.org/sid/armhf/libseccomp2/download to find the new version number and substitute it into the URL.

"},{"location":"COMMON_ISSUES/#only-router-and-own-device-show-up","title":"Only Router and own device show up","text":"

Make sure that the subnet and interface in SCAN_SUBNETS are correct. If your device/NAS has multiple ethernet ports, you probably need to change eth0 to something else.
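For illustration, a SCAN_SUBNETS value for a single 192.168.1.0/24 network on eth0 could look like this in app.conf (the subnet and interface here are examples; add further list entries to scan additional interfaces):

```
SCAN_SUBNETS=['192.168.1.0/24 --interface=eth0']
```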

"},{"location":"COMMON_ISSUES/#losing-my-settings-and-devices-after-an-update","title":"Losing my settings and devices after an update","text":"

If you lose your devices and/or settings after an update, it means you don't have the /data/db and /data/config folders mapped to permanent storage, so these folders are re-created every time you update. Make sure you have the volumes specified correctly in your docker-compose.yml or run command.
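A minimal docker-compose sketch with both folders mapped to host storage; the host-side paths on the left are illustrative, while only the in-container paths /data/config and /data/db are fixed:

```yaml
services:
  netalertx:
    image: ghcr.io/jokob-sk/netalertx:latest
    network_mode: host
    volumes:
      - ./netalertx/config:/data/config   # settings, devices.csv, workflows.json
      - ./netalertx/db:/data/db           # app.db database
```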

"},{"location":"COMMON_ISSUES/#the-application-is-slow","title":"The application is slow","text":"

Slowness is usually caused by incorrect settings (the app might be restarting, so check the app.log), too many background processes (disable unnecessary scanners), overly long scans (limit the number of scanned devices), too many disk operations, or failed maintenance plugins. See the Performance tips docs for details.

"},{"location":"COMMUNITY_GUIDES/","title":"Community Guides","text":"

Start with the official installation guides and use community content as supplementary material. Open an issue or PR if you'd like to add your link to the list \ud83d\ude4f (Ordered by last update time)

"},{"location":"CUSTOM_PROPERTIES/","title":"Custom Properties for Devices","text":""},{"location":"CUSTOM_PROPERTIES/#overview","title":"Overview","text":"

This functionality allows you to define custom properties for devices, which can store and display additional information on the device listing page. By marking properties as \"Show\", you can enhance the user interface with quick actions, notes, or external links.

"},{"location":"CUSTOM_PROPERTIES/#key-features","title":"Key Features:","text":""},{"location":"CUSTOM_PROPERTIES/#defining-custom-properties","title":"Defining Custom Properties","text":"

Custom properties are structured as a list of objects, where each property includes the following fields:

Field Description CUSTPROP_icon The icon (Base64-encoded HTML) displayed for the property. CUSTPROP_type The action type (e.g., show_notes, link, delete_dev). CUSTPROP_name A short name or title for the property. CUSTPROP_args Arguments for the action (e.g., URL or modal text). CUSTPROP_notes Additional notes or details displayed when applicable. CUSTPROP_show A boolean to control visibility (true to show on the listing page)."},{"location":"CUSTOM_PROPERTIES/#available-action-types","title":"Available Action Types","text":""},{"location":"CUSTOM_PROPERTIES/#usage-on-the-device-listing-page","title":"Usage on the Device Listing Page","text":"

Visible properties (CUSTPROP_show: true) are displayed as interactive icons in the device listing. Each icon can perform one of the following actions based on the CUSTPROP_type:

  1. Modals (e.g., Show Notes):
     - Displays detailed information in a popup modal.
     - Example: Firmware version details.

  2. Links:
     - Redirect to an external or internal URL.
     - Example: Open a device's documentation or external site.

  3. Device Actions:
     - Manage devices with actions like delete.
     - Example: Quickly remove a device from the network.

  4. Plugins:
     - Future placeholder for running custom plugin scripts.
     - Note: Not implemented yet.
"},{"location":"CUSTOM_PROPERTIES/#example-use-cases","title":"Example Use Cases","text":"
  1. Device Documentation Link:
     - Add a custom property with CUSTPROP_type set to link or link_new_tab to allow quick navigation to the external documentation of the device.

  2. Firmware Details:
     - Use CUSTPROP_type: show_notes to display firmware versions or upgrade instructions in a modal.

  3. Device Removal:
     - Enable device removal functionality using CUSTPROP_type: delete_dev.
"},{"location":"CUSTOM_PROPERTIES/#notes","title":"Notes","text":"

This feature provides a flexible way to enhance device management and display with interactive elements tailored to your needs.

"},{"location":"DATABASE/","title":"A high-level description of the database structure","text":"

An overview of the most important database tables, as well as a detailed overview of the Devices table. The MAC address is used as a foreign key in most cases.

"},{"location":"DATABASE/#devices-database-table","title":"Devices database table","text":"Field Name Description Sample Value devMac MAC address of the device. 00:1A:2B:3C:4D:5E devName Name of the device. iPhone 12 devOwner Owner of the device. John Doe devType Type of the device (e.g., phone, laptop, etc.). If set to a network type (e.g., switch), it will become selectable as a Network Parent Node. Laptop devVendor Vendor/manufacturer of the device. Apple devFavorite Whether the device is marked as a favorite. 1 devGroup Group the device belongs to. Home Devices devComments User comments or notes about the device. Used for work purposes devFirstConnection Timestamp of the device's first connection. 2025-03-22 12:07:26+11:00 devLastConnection Timestamp of the device's last connection. 2025-03-22 12:07:26+11:00 devLastIP Last known IP address of the device. 192.168.1.5 devStaticIP Whether the device has a static IP address. 0 devScan Whether the device should be scanned. 1 devLogEvents Whether events related to the device should be logged. 0 devAlertEvents Whether alerts should be generated for events. 1 devAlertDown Whether an alert should be sent when the device goes down. 0 devSkipRepeated Whether to skip repeated alerts for this device. 1 devLastNotification Timestamp of the last notification sent for this device. 2025-03-22 12:07:26+11:00 devPresentLastScan Whether the device was present during the last scan. 1 devIsNew Whether the device is marked as new. 0 devLocation Physical or logical location of the device. Living Room devIsArchived Whether the device is archived. 0 devParentMAC MAC address of the parent device (if applicable) to build the Network Tree. 00:1A:2B:3C:4D:5F devParentPort Port of the parent device to which this device is connected. Port 3 devIcon Icon representing the device. The value is a base64-encoded SVG or Font Awesome HTML tag. PHN2ZyB... devGUID Unique identifier for the device. 
a2f4b5d6-7a8c-9d10-11e1-f12345678901 devSite Site or location where the device is registered. Office devSSID SSID of the Wi-Fi network the device is connected to. HomeNetwork devSyncHubNode The NetAlertX node ID used for synchronization between NetAlertX instances. node_1 devSourcePlugin Source plugin that discovered the device. ARPSCAN devCustomProps Custom properties related to the device. The value is a base64-encoded JSON object. PHN2ZyB... devFQDN Fully qualified domain name. raspberrypi.local devParentRelType The type of relationship between the current device and its parent node. By default, selecting nic will hide it from lists. nic devReqNicsOnline Whether all NICs are required to be online to mark the current device online. 0

To understand how the values of these fields influence application behavior, such as Notifications or Network topology, see also:

"},{"location":"DATABASE/#other-tables-overview","title":"Other Tables overview","text":"Table name Description Sample data CurrentScan Result of the current scan Devices The main devices database that also contains the Network tree mappings. If ScanCycle is set to 0 device is not scanned. Events Used to collect connection/disconnection events. Online_History Used to display the Device presence chart Parameters Used to pass values between the frontend and backend. Plugins_Events For capturing events exposed by a plugin via the last_result.log file. If unique then saved into the Plugins_Objects table. Entries are deleted once processed and stored in the Plugins_History and/or Plugins_Objects tables. Plugins_History History of all entries from the Plugins_Events table Plugins_Language_Strings Language strings collected from the plugin config.json files used for string resolution in the frontend. Plugins_Objects Unique objects detected by individual plugins. Sessions Used to display sessions in the charts Settings Database representation of the sum of all settings from app.conf and plugins coming from config.json files."},{"location":"DEBUG_GRAPHQL/","title":"Debugging GraphQL server issues","text":"

The GraphQL server is an API middle layer, running on its own port specified by GRAPHQL_PORT, used to retrieve and show the data in the UI. It can also be used to retrieve data for custom third-party integrations. Check the API documentation for details.

The most common issue is that the GraphQL server doesn't start properly, usually due to a port conflict. If you are running multiple NetAlertX instances, make sure to use unique ports by changing the GRAPHQL_PORT setting. The default is 20212.

"},{"location":"DEBUG_GRAPHQL/#how-to-update-the-graphql_port-in-case-of-issues","title":"How to update the GRAPHQL_PORT in case of issues","text":"

As a first troubleshooting step, try changing the default GRAPHQL_PORT setting. Please remember that NetAlertX runs on the host network, so any application using the same port will cause conflicts.

"},{"location":"DEBUG_GRAPHQL/#updating-the-setting-via-the-settings-ui","title":"Updating the setting via the Settings UI","text":"

Ideally use the Settings UI to update the setting under General -> Core -> GraphQL port:

You might need to temporarily stop other applications or NetAlertX instances causing conflicts to update the setting. The API_TOKEN is used to authenticate any API calls, including GraphQL requests.
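Given the port and token, a quick reachability probe can be sketched with curl. The /graphql endpoint path and the Bearer authorization scheme below are assumptions based on common GraphQL setups, not confirmed API surface, so check the API documentation for the exact request format:

```shell
GRAPHQL_PORT=20212                 # match your GRAPHQL_PORT setting
API_TOKEN="${API_TOKEN:-changeme}" # your API_TOKEN setting (placeholder default)

# Print only the HTTP status; any 2xx/4xx response proves something is
# listening on the port, while a connection error means nothing is bound.
# NOTE: the /graphql path and Bearer header are assumptions, not documented.
curl -s -o /dev/null -w "HTTP %{http_code}\n" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  "http://localhost:${GRAPHQL_PORT}/graphql" \
  || echo "GraphQL server not reachable on port ${GRAPHQL_PORT}"
```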

"},{"location":"DEBUG_GRAPHQL/#updating-the-appconf-file","title":"Updating the app.conf file","text":"

If the UI is not accessible, you can directly edit the app.conf file in your /config folder:

"},{"location":"DEBUG_GRAPHQL/#using-a-docker-variable","title":"Using a docker variable","text":"

All application settings can also be initialized via the APP_CONF_OVERRIDE docker env variable.

...\n environment:\n      - TZ=Europe/Berlin      \n      - PORT=20213\n      - APP_CONF_OVERRIDE={\"GRAPHQL_PORT\":\"20214\"}\n...\n
"},{"location":"DEBUG_GRAPHQL/#how-to-check-the-graphql-server-is-running","title":"How to check the GraphQL server is running?","text":"

There are several ways to check if the GraphQL server is running.

"},{"location":"DEBUG_GRAPHQL/#init-check","title":"Init Check","text":"

You can navigate to Maintenance -> Init Check to see if isGraphQLServerRunning is ticked:

"},{"location":"DEBUG_GRAPHQL/#checking-the-logs","title":"Checking the Logs","text":"

You can navigate to Maintenance -> Logs and search for graphql to see if it started correctly and is serving requests:

"},{"location":"DEBUG_GRAPHQL/#inspecting-the-browser-console","title":"Inspecting the Browser console","text":"

In your browser open the dev console (usually F12) and navigate to the Network tab where you can filter GraphQL requests (e.g., reload the Devices page).

You can then inspect any of the POST requests by opening them in a new tab.

"},{"location":"DEBUG_INVALID_JSON/","title":"How to debug the Invalid JSON response error","text":"

Check the HTTP response of the failing backend call by following these steps:

For reference, the above queries should return results in the following format:

"},{"location":"DEBUG_INVALID_JSON/#first-url","title":"First URL:","text":""},{"location":"DEBUG_INVALID_JSON/#second-url","title":"Second URL:","text":""},{"location":"DEBUG_INVALID_JSON/#third-url","title":"Third URL:","text":"

You can copy and paste any JSON result (result of the First and Third query) into an online JSON checker, such as this one to check if it's valid.
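If you prefer not to paste the response into an online tool, the same validation can be done locally with Python's bundled json.tool module, which pretty-prints valid input and reports the line and column of the first error for invalid input:

```shell
# Valid JSON is pretty-printed to stdout...
echo '{"success": true, "data": []}' | python3 -m json.tool

# ...while invalid JSON (note the trailing comma) yields a parse error;
# '|| true' keeps the demo from aborting on the expected failure.
echo '{"success": true,}' | python3 -m json.tool || true
```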

"},{"location":"DEBUG_PHP/","title":"Debugging backend PHP issues","text":""},{"location":"DEBUG_PHP/#logs-in-ui","title":"Logs in UI","text":"

You can view recent backend PHP errors directly in the Maintenance > Logs section of the UI. This provides quick access to logs without needing terminal access.

"},{"location":"DEBUG_PHP/#accessing-logs-directly","title":"Accessing logs directly","text":"

Sometimes, the UI might not be accessible. In that case, you can access the logs directly inside the container.

"},{"location":"DEBUG_PHP/#step-by-step","title":"Step-by-step:","text":"
  1. Open a shell into the container:

bash docker exec -it netalertx /bin/sh

  2. Check the NGINX error log:

bash cat /var/log/nginx/error.log

  3. Check the PHP application error log:

bash cat /tmp/log/app.php_errors.log

These logs will help identify syntax issues, fatal errors, or startup problems when the UI fails to load properly.

"},{"location":"DEBUG_PLUGINS/","title":"Troubleshooting plugins","text":""},{"location":"DEBUG_PLUGINS/#high-level-overview","title":"High-level overview","text":"

If a plugin supplies data to the main app, it does so either via a SQL query or via a script that updates the last_result.log file in the plugin log folder (app/log/plugins/).

For a more in-depth overview on how plugins work check the Plugins development docs.

"},{"location":"DEBUG_PLUGINS/#prerequisites","title":"Prerequisites","text":""},{"location":"DEBUG_PLUGINS/#potential-issues","title":"Potential issues","text":""},{"location":"DEBUG_PLUGINS/#incorrect-input-data","title":"Incorrect input data","text":"

Input data from the plugin might cause mapping issues in specific edge cases. Look for a corresponding section in the app.log file, for example notice the first line of the execution run of the PIHOLE plugin below:

17:31:05 [Scheduler] - Scheduler run for PIHOLE: YES\n17:31:05 [Plugin utils] ---------------------------------------------\n17:31:05 [Plugin utils] display_name: PiHole (Device sync)\n17:31:05 [Plugins] CMD: SELECT n.hwaddr AS Object_PrimaryID, {s-quote}null{s-quote} AS Object_SecondaryID, datetime() AS DateTime, na.ip  AS Watched_Value1, n.lastQuery AS Watched_Value2, na.name AS Watched_Value3, n.macVendor AS Watched_Value4, {s-quote}null{s-quote} AS Extra, n.hwaddr AS ForeignKey FROM EXTERNAL_PIHOLE.Network AS n LEFT JOIN EXTERNAL_PIHOLE.Network_Addresses AS na ON na.network_id = n.id WHERE n.hwaddr NOT LIKE {s-quote}ip-%{s-quote} AND n.hwaddr is not {s-quote}00:00:00:00:00:00{s-quote}  AND na.ip is not null\n17:31:05 [Plugins] setTyp: subnets\n17:31:05 [Plugin utils] Flattening the below array\n17:31:05 ['192.168.1.0/24 --interface=eth1']\n17:31:05 [Plugin utils] isinstance(arr, list) : False | isinstance(arr, str) : True\n17:31:05 [Plugins] Resolved value: 192.168.1.0/24 --interface=eth1\n17:31:05 [Plugins] Convert to Base64: True\n17:31:05 [Plugins] base64 value: b'MTkyLjE2OC4xLjAvMjQgLS1pbnRlcmZhY2U9ZXRoMQ=='\n17:31:05 [Plugins] Timeout: 10\n17:31:05 [Plugins] Executing: SELECT n.hwaddr AS Object_PrimaryID, 'null' AS Object_SecondaryID, datetime() AS DateTime, na.ip  AS Watched_Value1, n.lastQuery AS Watched_Value2, na.name AS Watched_Value3, n.macVendor AS Watched_Value4, 'null' AS Extra, n.hwaddr AS ForeignKey FROM EXTERNAL_PIHOLE.Network AS n LEFT JOIN EXTERNAL_PIHOLE.Network_Addresses AS na ON na.network_id = n.id WHERE n.hwaddr NOT LIKE 'ip-%' AND n.hwaddr is not '00:00:00:00:00:00'  AND na.ip is not null\n\ud83d\udd3b\n17:31:05 [Plugins] SUCCESS, received 2 entries\n17:31:05 [Plugins] sqlParam entries: [(0, 'PIHOLE', '01:01:01:01:01:01', 'null', 'null', '2023-12-25 06:31:05', '172.30.0.1', 0, 'aaaa', 'vvvvvvvvv', 'not-processed', 'null', 'null', '01:01:01:01:01:01'), (0, 'PIHOLE', '02:42:ac:1e:00:02', 'null', 'null', '2023-12-25 06:31:05', 
'172.30.0.2', 0, 'dddd', 'vvvvv2222', 'not-processed', 'null', 'null', '02:42:ac:1e:00:02')]\n17:31:05 [Plugins] Processing        : PIHOLE\n17:31:05 [Plugins] Existing objects from Plugins_Objects: 4\n17:31:05 [Plugins] Logged events from the plugin run    : 2\n17:31:05 [Plugins] pluginEvents      count: 2\n17:31:05 [Plugins] pluginObjects     count: 4\n17:31:05 [Plugins] events_to_insert  count: 0\n17:31:05 [Plugins] history_to_insert count: 4\n17:31:05 [Plugins] objects_to_insert count: 0\n17:31:05 [Plugins] objects_to_update count: 4\n17:31:05 [Plugin utils] In pluginEvents there are 2 events with the status \"watched-not-changed\" \n17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status \"missing-in-last-scan\" \n17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status \"watched-not-changed\" \n17:31:05 [Plugins] Mapping objects to database table: CurrentScan\n17:31:05 [Plugins] SQL query for mapping: INSERT into CurrentScan ( \"cur_MAC\", \"cur_IP\", \"cur_LastQuery\", \"cur_Name\", \"cur_Vendor\", \"cur_ScanMethod\") VALUES ( ?, ?, ?, ?, ?, ?)\n17:31:05 [Plugins] SQL sqlParams for mapping: [('01:01:01:01:01:01', '172.30.0.1', 0, 'aaaa', 'vvvvvvvvv', 'PIHOLE'), ('02:42:ac:1e:00:02', '172.30.0.2', 0, 'dddd', 'vvvvv2222', 'PIHOLE')]\n\ud83d\udd3a\n17:31:05 [API] Update API starting\n17:31:06 [API] Updating table_plugins_history.json file in /api\n

The debug output between the \ud83d\udd3bred arrows\ud83d\udd3a is the important part for debugging (the arrows were added only to highlight the section on this page; they do not appear in the actual debug log).

In the output above, notice the section logging how many events the plugin produced:

17:31:05 [Plugins] Existing objects from Plugins_Objects: 4\n17:31:05 [Plugins] Logged events from the plugin run    : 2\n17:31:05 [Plugins] pluginEvents      count: 2\n17:31:05 [Plugins] pluginObjects     count: 4\n17:31:05 [Plugins] events_to_insert  count: 0\n17:31:05 [Plugins] history_to_insert count: 4\n17:31:05 [Plugins] objects_to_insert count: 0\n17:31:05 [Plugins] objects_to_update count: 4\n

These values, if formatted correctly, will also show up in the UI:

"},{"location":"DEBUG_PLUGINS/#sharing-application-state","title":"Sharing application state","text":"

Sometimes specific log sections are needed to debug issues. The Devices and CurrentScan table data can often help figure out what's wrong.

  1. Please set LOG_LEVEL to trace (disable it once you have the information, as this produces large log files).
  2. Wait for the issue to occur.
  3. Search for ================ DEVICES table content ================ in your logs.
  4. Search for ================ CurrentScan table content ================ in your logs.
  5. Open a new issue and post the (redacted) output into the issue description (or send it to the netalertx@gmail.com email if sensitive data is present).
  6. Set LOG_LEVEL back to debug or lower.
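Steps 3–4 can also be scripted. A minimal sketch — the assumption that a table dump runs from the marker line to the next blank line is mine, so verify it against your own log layout:

```python
# Extract a marked table dump (e.g. the DEVICES table content) from a
# trace-level NetAlertX log. The end-of-section heuristic (first blank line
# after the marker) is an assumption; adjust to match your actual log.
DEVICES_MARKER = "================ DEVICES table content ================"

def extract_section(log_text, marker=DEVICES_MARKER):
    collected, found = [], False
    for line in log_text.splitlines():
        if marker in line:
            found = True
        if found:
            if line.strip() == "" and collected:
                break  # blank line ends the dump
            collected.append(line)
    return "\n".join(collected)

sample = "noise\n" + DEVICES_MARKER + "\nrow 1\nrow 2\n\nmore noise"
print(extract_section(sample))
```

Redact the extracted rows before posting them into an issue.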
"},{"location":"DEBUG_TIPS/","title":"Debugging and troubleshooting","text":"

Please follow tips 1 - 4 below to get a more detailed error message.

"},{"location":"DEBUG_TIPS/#1-more-logging","title":"1. More Logging","text":"

When debugging an issue always set the highest log level:

LOG_LEVEL='trace'

"},{"location":"DEBUG_TIPS/#2-surfacing-errors-when-container-restarts","title":"2. Surfacing errors when container restarts","text":"

Start the container via the terminal with a command similar to this one:

docker run --rm --network=host \\\n  -v local/path/netalertx/config:/data/config \\\n  -v local/path/netalertx/db:/data/db \\\n  -e TZ=Europe/Berlin \\\n  -e PORT=20211 \\\n  ghcr.io/jokob-sk/netalertx:latest\n\n

\u26a0 Please note: don't use the -d parameter, so you can see the error when the container crashes. Include this error in your issue description.

"},{"location":"DEBUG_TIPS/#3-check-the-_dev-image-and-open-issues","title":"3. Check the _dev image and open issues","text":"

If possible, check whether your issue has been fixed in the _dev image before opening a new issue. The image is:

ghcr.io/jokob-sk/netalertx-dev:latest

\u26a0 Please backup your DB and config beforehand!

Please also search open issues.

"},{"location":"DEBUG_TIPS/#4-disable-restart-behavior","title":"4. Disable restart behavior","text":"

To prevent a Docker container from automatically restarting, set the restart policy in your Docker Compose file to \"no\" (quoted, so YAML does not parse it as the boolean false):

version: '3'\n\nservices:\n  your-service:\n    image: your-image:tag\n    restart: \"no\"   # quoted so YAML doesn't parse it as the boolean false\n    # Other service configurations...\n
"},{"location":"DEBUG_TIPS/#5-sharing-application-state","title":"5. Sharing application state","text":"

Sometimes specific log sections are needed to debug issues. The Devices and CurrentScan table data can often help figure out what's wrong.

  1. Please set LOG_LEVEL to trace (disable it once you have the information, as this produces large log files).
  2. Wait for the issue to occur.
  3. Search for ================ DEVICES table content ================ in your logs.
  4. Search for ================ CurrentScan table content ================ in your logs.
  5. Open a new issue and post the (redacted) output into the issue description (or send it to the netalertx@gmail.com email if sensitive data is present).
  6. Set LOG_LEVEL back to debug or lower.
"},{"location":"DEBUG_TIPS/#common-issues","title":"Common issues","text":"

See Common issues for details.

"},{"location":"DEVICES_BULK_EDITING/","title":"Editing multiple devices at once","text":"

NetAlertX allows you to mass-edit devices via a CSV export and import feature, or directly in the UI.

"},{"location":"DEVICES_BULK_EDITING/#ui-multi-edit","title":"UI multi edit","text":"

Note

Make sure you have your backups saved and restorable before doing any mass edits. Check Backup strategies.

You can edit multiple devices in the Devices view by selecting the devices to edit and then clicking the Multi-edit button, or via the Maintenance > Multi-Edit section.

"},{"location":"DEVICES_BULK_EDITING/#csv-bulk-edit","title":"CSV bulk edit","text":"

The database and device structure may change with new releases. When using the CSV import functionality, ensure the format matches what the application expects. To avoid issues, you can first export the devices and review the column formats before importing any custom data.

Note

As always, backup everything, just in case.

  1. In Maintenance > Backup / Restore click the CSV Export button.
  2. A devices.csv file is generated in the /config folder.
  3. Edit the devices.csv file however you like.

Note

The file contains a list of devices, including the network relationships between network nodes and connected devices. You can also trigger the export by accessing this URL: <your netalertx url>/php/server/devices.php?action=ExportCSV or via the CSV Backup plugin. (\ud83d\udca1 You can schedule this)
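A bulk edit of the exported file can be scripted as well. This is only a sketch — the column names below (devMac, devName, devGroup) are hypothetical examples, so always mirror the header row of your actual devices.csv export:

```python
# Bulk-set one column for every device in a CSV export. The column names here
# are illustrative assumptions; check the header of your real export first.
import csv
import io

exported = "devMac,devName,devGroup\naa:bb:cc:dd:ee:01,printer,\naa:bb:cc:dd:ee:02,nas,\n"

rows = list(csv.DictReader(io.StringIO(exported)))
for row in rows:
    row["devGroup"] = "Home"  # the bulk edit applied to every device

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=rows[0].keys(), lineterminator="\n")
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

Write the result back to devices.csv (keeping Linux line endings) before importing it.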

"},{"location":"DEVICES_BULK_EDITING/#file-encoding-format","title":"File encoding format","text":"

Note

Keep Linux line endings (suggested editors: Nano, Notepad++)

"},{"location":"DEVICE_DISPLAY_SETTINGS/","title":"Device Display Settings","text":"

This set of settings allows you to group Devices under different views. The Archived toggle allows you to exclude a Device from most listings and notifications.

"},{"location":"DEVICE_DISPLAY_SETTINGS/#status-colors","title":"Status Colors","text":"
  1. \ud83d\udd0c Online (Green) = A device that was detected online in the last scan and is no longer marked as a \"New Device\".
  2. \ud83d\udd0c New (Green) = A newly discovered device that is online and is still marked as a \"New Device\".
  3. \u2716 New (Grey) = Same as No. 2, but the device is now offline.
  4. \u2716 Offline (Grey) = A device that was not detected online in the last scan.
  5. \u26a0 Down (Red) = A device that has \"Alert Down\" marked and has been offline for the time set in the Setting NTFPRCS_alert_down_time.

See also Notification guide.

"},{"location":"DEVICE_HEURISTICS/","title":"Device Heuristics: Icon and Type Guessing","text":"

This module is responsible for inferring the most likely device type and icon based on minimal identifying data like MAC address, vendor, IP, or device name.

It does this using a set of heuristics defined in an external JSON rules file, which it evaluates in priority order.

Note

You can find the full source code of the heuristics module in the device_heuristics.py file.

"},{"location":"DEVICE_HEURISTICS/#json-rule-format","title":"JSON Rule Format","text":"

Rules are defined in a file called device_heuristics_rules.json (located under /back), structured like:

[\n  {\n    \"dev_type\": \"Phone\",\n    \"icon_html\": \"<i class=\\\"fa-brands fa-apple\\\"></i>\",\n    \"matching_pattern\": [\n      { \"mac_prefix\": \"001A79\", \"vendor\": \"Apple\" }\n    ],\n    \"name_pattern\": [\"iphone\", \"pixel\"]\n  }\n]\n
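To illustrate how such a rule could be evaluated, here is a simplified sketch (not the actual device_heuristics.py code): the matching_pattern entries are tried first as a strict MAC-prefix plus vendor match, with name_pattern as the looser fallback.

```python
# Simplified illustration of evaluating one heuristics rule. The real matching
# logic in device_heuristics.py may differ; this only mirrors the rule shape
# shown above.
rule = {
    "dev_type": "Phone",
    "matching_pattern": [{"mac_prefix": "001A79", "vendor": "Apple"}],
    "name_pattern": ["iphone", "pixel"],
}

def rule_matches(rule, mac, vendor, name):
    mac_clean = mac.replace(":", "").upper()
    # strict pass: MAC prefix and vendor must both match
    for pattern in rule["matching_pattern"]:
        if mac_clean.startswith(pattern["mac_prefix"]) and \
           pattern["vendor"].lower() in vendor.lower():
            return True
    # loose fallback: any name substring matches
    return any(sub in name.lower() for sub in rule.get("name_pattern", []))

print(rule_matches(rule, "00:1A:79:12:34:56", "Apple, Inc.", "unknown"))  # True
print(rule_matches(rule, "AA:BB:CC:00:11:22", "Acme", "marys-iphone"))    # True
print(rule_matches(rule, "AA:BB:CC:00:11:22", "Acme", "desktop"))         # False
```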

Note

Feel free to raise a PR in case you'd like to add any rules into the device_heuristics_rules.json file. Please place new rules into the correct position and consider the priority of already available rules.

"},{"location":"DEVICE_HEURISTICS/#supported-fields","title":"Supported fields:","text":"Field Type Description dev_type string Type to assign if rule matches (e.g. \"Gateway\", \"Phone\") icon_html string Icon (HTML string) to assign if rule matches. Encoded to base64 at load time. matching_pattern array List of { mac_prefix, vendor } objects for first strict and then loose matching name_pattern array (optional) List of lowercase substrings (used with regex) ip_pattern array (optional) Regex patterns to match IPs

Order in this array defines priority \u2014 rules are checked top-down and short-circuit on first match.

"},{"location":"DEVICE_HEURISTICS/#matching-flow-in-priority-order","title":"Matching Flow (in Priority Order)","text":"

The function guess_device_attributes(...) runs a series of matching functions in strict order:

  1. MAC + Vendor \u2192 match_mac_and_vendor()
  2. Vendor only \u2192 match_vendor()
  3. Name pattern \u2192 match_name()
  4. IP pattern \u2192 match_ip()
  5. Final fallback \u2192 defaults defined in the NEWDEV_devIcon and NEWDEV_devType settings.

Note

The app will try guessing the device type or icon if devType or devIcon are \"\" or \"null\".

"},{"location":"DEVICE_HEURISTICS/#use-of-default-values","title":"Use of default values","text":"

The guessing process runs for every device as long as the current type or icon still matches the default values. Even if earlier heuristics return a match, the system continues evaluating additional clues \u2014 like name or IP \u2014 to try and replace placeholders.

# Still considered a match attempt if current values are defaults\nif (not type_ or type_ == default_type) or (not icon or icon == default_icon):\n    type_, icon = match_ip(ip, default_type, default_icon)\n

In other words: if the type or icon is still \"unknown\" (or matches the default), the system assumes the match isn\u2019t final \u2014 and keeps looking. It stops only when both values are non-default (defaults are defined in the NEWDEV_devIcon and NEWDEV_devType settings).
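The stop condition can be written out explicitly. The default values below are placeholders of my own; the real defaults come from the NEWDEV_devType and NEWDEV_devIcon settings:

```python
# Guessing keeps running while either value is empty or still the default.
DEFAULT_TYPE = "unknown"       # placeholder for NEWDEV_devType
DEFAULT_ICON = "default-icon"  # placeholder for NEWDEV_devIcon

def needs_guessing(type_, icon):
    return (not type_ or type_ == DEFAULT_TYPE) or (not icon or icon == DEFAULT_ICON)

print(needs_guessing("unknown", "router-icon"))  # True  - type still default
print(needs_guessing("Router", ""))              # True  - icon still empty
print(needs_guessing("Router", "router-icon"))   # False - both resolved, stop
```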

"},{"location":"DEVICE_HEURISTICS/#match-behavior-per-function","title":"Match Behavior (per function)","text":"

These functions are executed in the following order:

"},{"location":"DEVICE_HEURISTICS/#match_mac_and_vendormac_clean-vendor","title":"match_mac_and_vendor(mac_clean, vendor, ...)","text":""},{"location":"DEVICE_HEURISTICS/#match_vendorvendor","title":"match_vendor(vendor, ...)","text":""},{"location":"DEVICE_HEURISTICS/#match_namename","title":"match_name(name, ...)","text":""},{"location":"DEVICE_HEURISTICS/#match_ipip","title":"match_ip(ip, ...)","text":""},{"location":"DEVICE_HEURISTICS/#icons","title":"Icons","text":"

TL;DR: Type and icon must both be matched. If only one is matched, the other falls back to the default.

"},{"location":"DEVICE_HEURISTICS/#priority-mechanics","title":"Priority Mechanics","text":""},{"location":"DEVICE_MANAGEMENT/","title":"NetAlertX - Device Management","text":"

The Main Info section is where most of the device identifiable information is stored and edited. Some of the information is autodetected via various plugins. Initial values for most of the fields can be specified in the NEWDEV plugin.

Note

You can multi-edit devices by selecting them in the main Devices view, from the Maintenance section, or via the CSV Export functionality under Maintenance. More info can be found in the Devices Bulk-editing docs.

"},{"location":"DEVICE_MANAGEMENT/#main-info","title":"Main Info","text":"

Note

Please note that the field usages described above are only suggestions. You can use most of these fields for other purposes, such as storing the network interface, the company owning a device, or similar.

"},{"location":"DEVICE_MANAGEMENT/#dummy-devices","title":"Dummy devices","text":"

You can create dummy devices from the Devices listing screen.

The MAC field and the Last IP field will then become editable.

Note

You can couple this with the ICMP plugin, which can be used to monitor the status of these devices if they are real devices reachable with the ping command. If not, you can use an address such as 0.0.0.0 or the loopback address 127.0.0.1 so they appear online.

"},{"location":"DEVICE_MANAGEMENT/#copying-data-from-an-existing-device","title":"Copying data from an existing device.","text":"

To speed up device population you can also copy data from an existing device. This can be done from the Tools tab on the Device details.

"},{"location":"DEV_DEVCONTAINER/","title":"Devcontainer for NetAlertX Guide","text":"

This devcontainer is designed to mirror the production container environment as closely as possible, while providing a rich set of tools for development.

"},{"location":"DEV_DEVCONTAINER/#how-to-get-started","title":"How to Get Started","text":"
  1. Prerequisites:

  2. Launch the Devcontainer:

"},{"location":"DEV_DEVCONTAINER/#key-workflows-features","title":"Key Workflows & Features","text":"

Once you're inside the container, everything is set up for you.

"},{"location":"DEV_DEVCONTAINER/#1-services-frontend-backend","title":"1. Services (Frontend & Backend)","text":"

The container's startup script (.devcontainer/scripts/setup.sh) automatically starts the Nginx/PHP frontend and the Python backend. You can restart them at any time using the built-in tasks.

"},{"location":"DEV_DEVCONTAINER/#2-integrated-debugging-just-press-f5","title":"2. Integrated Debugging (Just Press F5!)","text":"

Debugging for both the Python backend and PHP frontend is pre-configured and ready to go.

"},{"location":"DEV_DEVCONTAINER/#3-common-tasks-f1-run-task","title":"3. Common Tasks (F1 -> Run Task)","text":"

We've created several VS Code Tasks to simplify common operations. Access them by pressing F1 and typing \"Tasks: Run Task\".

"},{"location":"DEV_DEVCONTAINER/#4-running-tests","title":"4. Running Tests","text":"

The environment includes pytest. You can run tests directly from the VS Code Test Explorer UI or by running pytest -q in the integrated terminal. The necessary PYTHONPATH is already configured so that tests can correctly import the server modules.

"},{"location":"DEV_DEVCONTAINER/#how-to-maintain-this-devcontainer","title":"How to Maintain This Devcontainer","text":"

The setup is designed to be easy to manage. Here are the core principles:

This setup provides a powerful and consistent foundation for all current and future contributors to NetAlertX.

"},{"location":"DEV_ENV_SETUP/","title":"Development Environment Setup","text":"

I truly appreciate all contributions! To help keep this project maintainable, this guide provides an overview of project priorities, key design considerations, and overall philosophy. It also includes instructions for setting up your environment so you can start contributing right away.

"},{"location":"DEV_ENV_SETUP/#development-guidelines","title":"Development Guidelines","text":"

Before starting development, please review the following guidelines.

"},{"location":"DEV_ENV_SETUP/#priority-order-highest-to-lowest","title":"Priority Order (Highest to Lowest)","text":"
  1. \ud83d\udd3c Fixing core bugs that lack workarounds
  2. \ud83d\udd35 Adding core functionality that unlocks other features (e.g., plugins)
  3. \ud83d\udd35 Refactoring to enable faster development
  4. \ud83d\udd3d UI improvements (PRs welcome, but low priority)
"},{"location":"DEV_ENV_SETUP/#design-philosophy","title":"Design Philosophy","text":"

The application architecture is designed for extensibility and maintainability. It relies heavily on configuration manifests via plugins and settings to dynamically build the UI and populate the application with data from various sources.

For details, see: - Plugins Development (includes video) - Settings System

Focus on core functionality and integrate with existing tools rather than reinventing the wheel.

Examples: - Using Apprise for notifications instead of implementing multiple separate gateways - Implementing regex-based validation instead of one-off validation for each setting

Note

UI changes have lower priority. PRs are welcome, but please keep them small and focused.

"},{"location":"DEV_ENV_SETUP/#development-environment-set-up","title":"Development Environment Set Up","text":"

Tip

There is also a ready-to-use devcontainer available.

The following steps will guide you through setting up your environment for local development and running a custom Docker build on your system. For most changes the container doesn't need to be rebuilt, which speeds up development significantly.

Note

Replace /development with the path where your code files will be stored. The default container name is netalertx, so there might be a conflict with your running containers.

"},{"location":"DEV_ENV_SETUP/#1-download-the-code","title":"1. Download the code:","text":""},{"location":"DEV_ENV_SETUP/#2-create-a-dev-env_dev-file","title":"2. Create a DEV .env_dev file","text":"

touch /development/.env_dev && sudo nano /development/.env_dev

The file content should be the following, with your custom values.

#--------------------------------\n#NETALERTX\n#--------------------------------\nTZ=Europe/Berlin\nPORT=22222    # make sure this port is unique on your whole network\nDEV_LOCATION=/development/NetAlertX\nAPP_DATA_LOCATION=/volume/docker_appdata\n# Make sure your GRAPHQL_PORT setting has a port that is unique on your whole host network\nAPP_CONF_OVERRIDE={\"GRAPHQL_PORT\":\"22223\"} \n# ALWAYS_FRESH_INSTALL=true # uncommenting this will always delete the content of /config and /db dirs on boot to simulate a fresh install\n
"},{"location":"DEV_ENV_SETUP/#3-create-db-and-config-dirs","title":"3. Create /db and /config dirs","text":"

Create a folder netalertx in the APP_DATA_LOCATION (in this example in /volume/docker_appdata) with 2 subfolders db and config.

"},{"location":"DEV_ENV_SETUP/#4-run-the-container","title":"4. Run the container","text":"

You can then modify the python script without restarting/rebuilding the container every time. Additionally, you can trigger a plugin run via the UI:

"},{"location":"DEV_ENV_SETUP/#tips","title":"Tips","text":"

A quick cheat sheet of useful commands.

"},{"location":"DEV_ENV_SETUP/#removing-the-container-and-image","title":"Removing the container and image","text":"

A command to stop and remove the container and the image (replace netalertx and netalertx-netalertx with the appropriate values):

"},{"location":"DEV_ENV_SETUP/#restart-the-server-backend","title":"Restart the server backend","text":"

Most code changes can be tested without rebuilding the container. When working on the python server backend, you only need to restart the server.

  1. You can usually restart the backend via Maintenance > Logs > Restart server.

  2. If the above doesn't work, SSH into the container and kill & restart the main script loop:

  3. sudo docker exec -it netalertx /bin/bash

  4. pkill -f \"python /app/server\" && python /app/server &

  5. If none of the above work, restart the Docker container. This is usually the last resort, as sometimes the whole Docker engine becomes unresponsive and needs to be restarted.

"},{"location":"DEV_ENV_SETUP/#contributing-pull-requests","title":"Contributing & Pull Requests","text":""},{"location":"DEV_ENV_SETUP/#before-submitting-a-pr-please-ensure","title":"Before submitting a PR, please ensure:","text":"

\u2714 Changes are backward-compatible with existing installs. \u2714 No unnecessary changes are made. \u2714 New features are reusable, not narrowly scoped. \u2714 Features are implemented via plugins if possible.

"},{"location":"DEV_ENV_SETUP/#mandatory-test-cases","title":"Mandatory Test Cases","text":"

Note

Always run all available tests as per the Testing documentation.

"},{"location":"DEV_PORTS_HOST_MODE/","title":"Dev Ports in Host Network Mode","text":"

When using \"--network=host\" in the devcontainer, VS Code's normal port forwarding model doesn't apply. All container ports are already on the host network namespace, so:

"},{"location":"DEV_PORTS_HOST_MODE/#recommended-pattern","title":"Recommended Pattern","text":"
  1. Only include debugger ports in forwardPorts: jsonc \"forwardPorts\": [5678, 9003]
  2. Do NOT list application service ports (e.g. 20211, 20212) there when in host mode.
  3. Use the helper task to enumerate current bindings:
  4. Run task: > Tasks: Run Task \u2192 [Dev Container] List NetAlertX Ports
"},{"location":"DEV_PORTS_HOST_MODE/#port-enumeration-script","title":"Port Enumeration Script","text":"

Script: scripts/list-ports.sh. It outputs the binding address, PID (if resolvable), and process name for key ports.

You can edit the PORTS variable inside that script to add/remove watched ports.

"},{"location":"DEV_PORTS_HOST_MODE/#xdebug-notes","title":"Xdebug Notes","text":"

Set in 99-xdebug.ini:

xdebug.client_host=127.0.0.1\nxdebug.client_port=9003\nxdebug.discover_client_host=1\n

Ensure your IDE is listening on 9003.

"},{"location":"DEV_PORTS_HOST_MODE/#troubleshooting","title":"Troubleshooting","text":"Symptom Cause Fix Waiting for port 20211 to free... repeats VS Code pre-bound the port via forwardPorts Remove the port from forwardPorts, rebuild, retry PHP request hangs at start Xdebug trying to connect to unresolved host (host.docker.internal) Use 127.0.0.1 or rely on discovery PORTS panel empty Expected in host mode Use the port enumeration task"},{"location":"DEV_PORTS_HOST_MODE/#future-improvements","title":"Future Improvements","text":""},{"location":"DOCKER_COMPOSE/","title":"NetAlertX and Docker Compose","text":"

Warning

\u26a0\ufe0f Important: The documentation has been recently updated and some instructions may have changed. If you are using the currently live production image, please follow the instructions on Docker Hub for building and running the container. These docs reflect the latest development version and may differ from the production image.

Great care is taken to ensure NetAlertX meets the needs of everyone while being flexible enough for anyone. This document outlines how you can configure your docker-compose. There are many settings, so we recommend using the Baseline Docker Compose as-is, or modifying it for your system.

Note

The container needs to run in network_mode:\"host\" to access Layer 2 networking such as arp, nmap and others. Because this mode is not supported on Windows, a Windows host is not a supported operating system.

"},{"location":"DOCKER_COMPOSE/#baseline-docker-compose","title":"Baseline Docker Compose","text":"

There is one baseline for NetAlertX. That's the default security-enabled official distribution.

services:\n  netalertx:\n  # use an environment variable to set host networking mode if needed\n    container_name: netalertx                       # The name when you docker container ls\n    image: ghcr.io/jokob-sk/netalertx-dev:latest\n    network_mode: ${NETALERTX_NETWORK_MODE:-host}   # Use host networking for ARP scanning and other services\n\n    read_only: true                                 # Make the container filesystem read-only\n    cap_drop:                                       # Drop all capabilities for enhanced security\n      - ALL\n    cap_add:                                        # Add only the necessary capabilities\n      - NET_ADMIN                                   # Required for ARP scanning\n      - NET_RAW                                     # Required for raw socket operations\n      - NET_BIND_SERVICE                            # Required to bind to privileged ports (nbtscan)\n\n    volumes:\n      - type: volume                                # Persistent Docker-managed named volume for config + database\n        source: netalertx_data\n        target: /data                               # `/data/config` and `/data/db` live inside this mount\n        read_only: false\n\n    # Example custom local folder called /home/user/netalertx_data\n    # - type: bind\n    #   source: /home/user/netalertx_data\n    #   target: /data\n    #   read_only: false\n    # ... 
or use the alternative format\n    # - /home/user/netalertx_data:/data:rw\n\n      - type: bind                                  # Bind mount for timezone consistency\n        source: /etc/localtime                      # Alternatively add environment TZ: America/New York\n        target: /etc/localtime\n        read_only: true\n\n      # Mount your DHCP server file into NetAlertX for a plugin to access\n      # - path/on/host/to/dhcp.file:/resources/dhcp.file\n\n    # tmpfs mount consolidates writable state for a read-only container and improves performance\n    # uid=20211 and gid=20211 is the netalertx user inside the container\n    # mode=1700 grants rwx------ permissions to the netalertx user only\n    tmpfs:\n      # Comment out to retain logs between container restarts - this has a server performance impact.\n      - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n\n      # Retain logs - comment out tmpfs /tmp if you want to retain logs between container restarts\n      # Please note if you remove the /tmp mount, you must create and maintain sub-folder mounts.\n      # - /path/on/host/log:/tmp/log\n      # - \"/tmp/api:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n      # - \"/tmp/nginx:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n      # - \"/tmp/run:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n\n    environment:\n      LISTEN_ADDR: ${LISTEN_ADDR:-0.0.0.0}                   # Listen for connections on all interfaces\n      PORT: ${PORT:-20211}                                   # Application port\n      GRAPHQL_PORT: ${GRAPHQL_PORT:-20212}                   # GraphQL API port (passed into APP_CONF_OVERRIDE at runtime)\n  #    NETALERTX_DEBUG: ${NETALERTX_DEBUG:-0}                 # 0=kill all services and restart if any dies. 
1 keeps running dead services.\n\n    # Resource limits to prevent resource exhaustion\n    mem_limit: 2048m            # Maximum memory usage\n    mem_reservation: 1024m      # Soft memory limit\n    cpu_shares: 512             # Relative CPU weight for CPU contention scenarios\n    pids_limit: 512             # Limit the number of processes/threads to prevent fork bombs\n    logging:\n      driver: \"json-file\"       # Use JSON file logging driver\n      options:\n        max-size: \"10m\"         # Rotate log files after they reach 10MB\n        max-file: \"3\"           # Keep a maximum of 3 log files\n\n    # Always restart the container unless explicitly stopped\n    restart: unless-stopped\n\nvolumes:                        # Persistent volume for configuration and database storage\n  netalertx_data:\n

Run or re-run it:

docker compose up --force-recreate\n
"},{"location":"DOCKER_COMPOSE/#customize-with-environmental-variables","title":"Customize with Environmental Variables","text":"

You can override the default settings by passing environment variables to the docker compose up command.

Example using a single variable:

This command runs NetAlertX on port 8080 instead of the default 20211.

PORT=8080 docker compose up\n

Example using all available variables:

This command demonstrates overriding all primary environment variables: running with host networking, on port 20211, GraphQL on 20212, and listening on all interfaces.

NETALERTX_NETWORK_MODE=host \\\nLISTEN_ADDR=0.0.0.0 \\\nPORT=20211 \\\nGRAPHQL_PORT=20212 \\\nNETALERTX_DEBUG=0 \\\ndocker compose up\n
"},{"location":"DOCKER_COMPOSE/#docker-composeyaml-modifications","title":"docker-compose.yaml Modifications","text":""},{"location":"DOCKER_COMPOSE/#modification-1-use-a-local-folder-bind-mount","title":"Modification 1: Use a Local Folder (Bind Mount)","text":"

By default, the baseline compose file uses Docker-managed \"named volumes\" (netalertx_data in the baseline above; the example below uses separate netalertx_config and netalertx_db volumes). This is the preferred method because NetAlertX is designed to manage all configuration and database settings directly from its web UI. Named volumes let Docker handle this data cleanly without you needing to manage local file permissions or paths.

However, if you prefer to have direct, file-level access to your configuration for manual editing, a \"bind mount\" is a simple alternative. This tells Docker to use a specific folder from your computer (the \"host\") inside the container.

How to make the change:

  1. Choose a location on your computer. For example, /home/adam/netalertx-files.

  2. Create the subfolders: mkdir -p /home/adam/netalertx-files/config and mkdir -p /home/adam/netalertx-files/db.

  3. Edit your docker-compose.yml and find the volumes: section (the one inside the netalertx: service).

  4. Comment out (add a # in front) or delete the type: volume blocks for netalertx_config and netalertx_db.

  5. Add new lines pointing to your local folders.

Before (Using Named Volumes - Preferred):

...\n    volumes:\n      - netalertx_config:/data/config:rw #short-form volume (no /path is a short volume)\n      - netalertx_db:/data/db:rw\n...\n

After (Using a Local Folder / Bind Mount): Make sure to replace /home/adam/netalertx-files with your actual path. The format is <path_on_your_computer>:<path_inside_container>:<options>.

...\n    volumes:\n#      - netalertx_config:/data/config:rw\n#      - netalertx_db:/data/db:rw\n      - /home/adam/netalertx-files/config:/data/config:rw\n      - /home/adam/netalertx-files/db:/data/db:rw\n...\n

Now, any files created by NetAlertX in /data/config will appear in your /home/adam/netalertx-files/config folder.

This same method works for mounting other things, like custom plugins or enterprise NGINX files, as shown in the commented-out examples in the baseline file.

"},{"location":"DOCKER_COMPOSE/#example-configuration-summaries","title":"Example Configuration Summaries","text":"

Here are the essential modifications for common alternative setups.

"},{"location":"DOCKER_COMPOSE/#example-2-external-env-file-for-paths","title":"Example 2: External .env File for Paths","text":"

This method is useful for keeping your paths and other settings separate from your main compose file, making it more portable.

docker-compose.yml changes:

...\nservices:\n  netalertx:\n    environment:\n      - TZ=${TZ}\n      - PORT=${PORT}\n\n...\n

.env file contents:

TZ=Europe/Paris\nPORT=20211\nNETALERTX_NETWORK_MODE=host\nLISTEN_ADDR=0.0.0.0\nGRAPHQL_PORT=20212\n

Run with: sudo docker compose --env-file /path/to/.env up

"},{"location":"DOCKER_COMPOSE/#example-3-docker-swarm","title":"Example 3: Docker Swarm","text":"

This is for deploying on a Docker Swarm cluster. The key differences from the baseline are the removal of network_mode: from the service, and the addition of deploy: and networks: blocks at both the service and top level.

Here are the only changes you need to make to the baseline compose file to make it Swarm-compatible.

services:\n  netalertx:\n    ...\n    #    network_mode: ${NETALERTX_NETWORK_MODE:-host} # <-- DELETE THIS LINE\n    ...\n\n    # 2. ADD a 'networks:' block INSIDE the service to connect to the external host network.\n    networks:\n      - outside\n    # 3. ADD a 'deploy:' block to manage the service as a swarm replica.\n    deploy:\n      mode: replicated\n      replicas: 1\n      restart_policy:\n        condition: on-failure\n\n\n# 4. ADD a new top-level 'networks:' block at the end of the file to define 'outside' as the external 'host' network.\nnetworks:\n  outside:\n    external:\n      name: \"host\"\n
"},{"location":"DOCKER_INSTALLATION/","title":"Docker Guide","text":""},{"location":"DOCKER_INSTALLATION/#netalertx-network-scanner-notification-framework","title":"NetAlertX - Network scanner & notification framework","text":"\ud83d\udcd1 Docker guide \ud83d\ude80 Releases \ud83d\udcda Docs \ud83d\udd0c Plugins \ud83e\udd16 Ask AI

Head to https://netalertx.com/ for more gifs and screenshots \ud83d\udcf7.

Note

There is also an experimental \ud83e\uddea bare-metal install method available.

"},{"location":"DOCKER_INSTALLATION/#basic-usage","title":"\ud83d\udcd5 Basic Usage","text":"

Warning

You will have to run the container on the host network and specify SCAN_SUBNETS unless you use other plugin scanners. The initial scan can take a few minutes, so please wait 5-10 minutes for the initial discovery to finish.

docker run -d --rm --network=host \\\n  -v local_path/config:/data/config \\\n  -v local_path/db:/data/db \\\n  --mount type=tmpfs,target=/tmp/api \\\n  -e PUID=200 -e PGID=300 \\\n  -e TZ=Europe/Berlin \\\n  -e PORT=20211 \\\n  ghcr.io/jokob-sk/netalertx:latest\n

See alternative docker-compose examples.

"},{"location":"DOCKER_INSTALLATION/#docker-environment-variables","title":"Docker environment variables","text":"Variable Description Example Value PORT Port of the web interface 20211 PUID Application User UID 102 PGID Application User GID 82 LISTEN_ADDR Set the specific IP Address for the listener address for the nginx webserver (web interface). This could be useful when using multiple subnets to hide the web interface from all untrusted networks. 0.0.0.0 TZ Time zone to display stats correctly. Find your time zone here Europe/Berlin LOADED_PLUGINS Default plugins to load. Plugins cannot be loaded with APP_CONF_OVERRIDE, you need to use this variable instead and then specify the plugins settings with APP_CONF_OVERRIDE. [\"PIHOLE\",\"ASUSWRT\"] APP_CONF_OVERRIDE JSON override for settings (except LOADED_PLUGINS). {\"SCAN_SUBNETS\":\"['192.168.1.0/24 --interface=eth1']\",\"GRAPHQL_PORT\":\"20212\"} ALWAYS_FRESH_INSTALL \u26a0 If true will delete the content of the /db & /config folders. For testing purposes. Can be coupled with watchtower to have an always freshly installed netalertx/netalertx-dev image. true

You can override the default GraphQL port setting GRAPHQL_PORT (set to 20212) by using the APP_CONF_OVERRIDE env variable. LOADED_PLUGINS and settings in APP_CONF_OVERRIDE can be specified via the UI as well.
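Since `APP_CONF_OVERRIDE` must be a single valid JSON string (with string values, as in the table above), it can be less error-prone to build it programmatically than to hand-escape quotes. A minimal sketch, reusing the example values from the table:

```python
import json

# Build a valid APP_CONF_OVERRIDE value instead of hand-escaping quotes.
# Keys and values below mirror the documented examples; adjust to your setup.
override = {
    "SCAN_SUBNETS": "['192.168.1.0/24 --interface=eth1']",
    "GRAPHQL_PORT": "20212",
}
env_value = json.dumps(override)
print(env_value)
# Pass the printed string to the container, e.g. -e APP_CONF_OVERRIDE='...'

# Round-trip check: the app must be able to parse the string back.
assert json.loads(env_value)["GRAPHQL_PORT"] == "20212"
```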

"},{"location":"DOCKER_INSTALLATION/#docker-paths","title":"Docker paths","text":"

Note

See also Backup strategies.

Required Path Description \u2705 :/data/config Folder which will contain the app.conf & devices.csv (read about devices.csv) files \u2705 :/data/db Folder which will contain the app.db database file :/tmp/log Logs folder useful for debugging if you have issues setting up the container :/tmp/api A simple API endpoint containing static (but regularly updated) json and other files. Path configurable via NETALERTX_API environment variable. :/app/front/plugins/<plugin>/ignore_plugin Map a file ignore_plugin to ignore a plugin. Plugins can be soft-disabled via settings. More in the Plugin docs. :/etc/resolv.conf Use a custom resolv.conf file for better name resolution.

Use separate db and config directories; do not nest them.

"},{"location":"DOCKER_INSTALLATION/#initial-setup","title":"Initial setup","text":""},{"location":"DOCKER_INSTALLATION/#setting-up-scanners","title":"Setting up scanners","text":"

You have to specify which network(s) should be scanned. This is done by entering subnets that are accessible from the host. If you use the default ARPSCAN plugin, you have to specify at least one valid subnet and interface in the SCAN_SUBNETS setting. See the documentation on How to set up multiple SUBNETS, VLANs and what are limitations for troubleshooting and more advanced scenarios.
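As a concrete starting point, a `SCAN_SUBNETS` value follows the pattern shown in the environment-variable table above; the subnet ranges and interface names below are placeholders for your own network, and multiple networks are typically listed as additional comma-separated entries:

```
SCAN_SUBNETS=['192.168.1.0/24 --interface=eth0','192.168.30.0/24 --interface=eth2']
```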

If you are running PiHole you can synchronize devices directly. Check the PiHole configuration guide for details.

Note

You can bulk-import devices via the CSV import method.

"},{"location":"DOCKER_INSTALLATION/#community-guides","title":"Community guides","text":"

You can read or watch several community configuration guides in Chinese, Korean, German, or French.

Please note these might be outdated. Rely on official documentation first.

"},{"location":"DOCKER_INSTALLATION/#common-issues","title":"Common issues","text":""},{"location":"DOCKER_INSTALLATION/#support-me","title":"\ud83d\udc99 Support me","text":"

\ud83d\udce7 Email me at netalertx@gmail.com if you want to get in touch or if I should add other sponsorship platforms.

"},{"location":"DOCKER_MAINTENANCE/","title":"The NetAlertX Container Operator's Guide","text":"

Warning

\u26a0\ufe0f Important: The documentation has been recently updated and some instructions may have changed. If you are using the currently live production image, please follow the instructions on Docker Hub for building and running the container. These docs reflect the latest development version and may differ from the production image.

This guide assumes you are starting with the official docker-compose.yml file provided with the project. We strongly recommend you start with or migrate to this file as your baseline and modify it to suit your specific needs (e.g., changing file paths). While there are many ways to configure NetAlertX, the default file is designed to meet the mandatory security baseline with layer-2 networking capabilities while operating securely and without startup warnings.

This guide provides direct, concise solutions for common NetAlertX administrative tasks. It is structured to help you identify a problem, implement the solution, and understand the details.

"},{"location":"DOCKER_MAINTENANCE/#guide-contents","title":"Guide Contents","text":"

Note

Other relevant resources - Fixing Permission Issues - Handling Backups - Accessing Application Logs

"},{"location":"DOCKER_MAINTENANCE/#task-using-a-local-folder-for-configuration","title":"Task: Using a Local Folder for Configuration","text":""},{"location":"DOCKER_MAINTENANCE/#problem","title":"Problem","text":"

You want to edit your app.conf and other configuration files directly from your host machine, instead of using a Docker-managed volume.

"},{"location":"DOCKER_MAINTENANCE/#solution","title":"Solution","text":"
  1. Stop the container:

bash docker-compose down 2. (Optional but Recommended) Back up your data using the method in Part 1. 3. Create a local folder on your host machine (e.g., /data/netalertx_config). 4. Edit docker-compose.yml:

yaml ... volumes: # - type: volume # source: netalertx_config # target: /data/config # read_only: false ... # Example custom local folder called /data/netalertx_config - type: bind source: /data/netalertx_config target: /data/config read_only: false ... 5. (Optional) Restore your backup. 6. Restart the container:

bash docker-compose up -d

"},{"location":"DOCKER_MAINTENANCE/#about-this-method","title":"About This Method","text":"

This replaces the Docker-managed volume with a \"bind mount.\" This is a direct mapping between a folder on your host computer (/data/netalertx_config) and a folder inside the container (/data/config), allowing you to edit the files directly.

"},{"location":"DOCKER_MAINTENANCE/#task-migrating-from-a-local-folder-to-a-docker-volume","title":"Task: Migrating from a Local Folder to a Docker Volume","text":""},{"location":"DOCKER_MAINTENANCE/#problem_1","title":"Problem","text":"

You are currently using a local folder (bind mount) for your configuration (e.g., /data/netalertx_config) and want to switch to the recommended Docker-managed volume (netalertx_config).

"},{"location":"DOCKER_MAINTENANCE/#solution_1","title":"Solution","text":"
  1. Stop the container:

bash docker-compose down 2. Edit docker-compose.yml:

yaml ... volumes: - type: volume source: netalertx_config target: /data/config read_only: false ... # Example custom local folder called /data/netalertx_config # - type: bind # source: /data/netalertx_config # target: /data/config # read_only: false ... 3. (Optional) Initialize the volume:

bash docker-compose up -d && docker-compose down 4. Run the migration command (replace /data/netalertx_config with your actual path):

bash docker run --rm -v netalertx_config:/config -v /data/netalertx_config:/local-config alpine \\ sh -c \"tar -C /local-config -c . | tar -C /config -x\" 5. Restart the container:

bash docker-compose up -d

"},{"location":"DOCKER_MAINTENANCE/#about-this-method_1","title":"About This Method","text":"

This uses a temporary alpine container that mounts both your source folder (/local-config) and destination volume (/config). The tar ... | tar ... command safely copies all files, including hidden ones, preserving structure.
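The same tar-pipe pattern can be tried locally with two plain directories to see why it is preferred over `cp *` (the `/tmp/*-demo` paths are throwaway examples):

```shell
# Demonstrate the tar-pipe copy used above with two scratch directories.
# Unlike `cp src/*`, this also copies hidden files and preserves the tree.
mkdir -p /tmp/src-demo/sub /tmp/dst-demo
echo "conf"   > /tmp/src-demo/app.conf
echo "secret" > /tmp/src-demo/.hidden
tar -C /tmp/src-demo -c . | tar -C /tmp/dst-demo -x
ls -la /tmp/dst-demo
```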

"},{"location":"DOCKER_MAINTENANCE/#task-applying-a-custom-nginx-configuration","title":"Task: Applying a Custom Nginx Configuration","text":""},{"location":"DOCKER_MAINTENANCE/#problem_2","title":"Problem","text":"

You need to override the default Nginx configuration to add features like LDAP, SSO, or custom SSL settings.

"},{"location":"DOCKER_MAINTENANCE/#solution_2","title":"Solution","text":"
  1. Stop the container:

bash docker-compose down 2. Create your custom config file on your host (e.g., /data/my-netalertx.conf). 3. Edit docker-compose.yml:

yaml ... # Use a custom Enterprise-configured nginx config for ldap or other settings - /data/my-netalertx.conf:/tmp/nginx/active-config/netalertx.conf:ro ... 4. Restart the container:

bash docker-compose up -d

"},{"location":"DOCKER_MAINTENANCE/#about-this-method_2","title":"About This Method","text":"

Docker\u2019s bind mount overlays your host file (my-netalertx.conf) on top of the default file inside the container. The container remains read-only, but Nginx reads your file as if it were the default.

"},{"location":"DOCKER_MAINTENANCE/#task-mounting-additional-files-for-plugins","title":"Task: Mounting Additional Files for Plugins","text":""},{"location":"DOCKER_MAINTENANCE/#problem_3","title":"Problem","text":"

A plugin (like DHCPLSS) needs to read a file from your host machine (e.g., /var/lib/dhcp/dhcpd.leases).

"},{"location":"DOCKER_MAINTENANCE/#solution_3","title":"Solution","text":"
  1. Stop the container:

bash docker-compose down 2. Edit docker-compose.yml and add a new line under the volumes: section:

yaml ... volumes: ... # Mount for DHCPLSS plugin - /var/lib/dhcp/dhcpd.leases:/mnt/dhcpd.leases:ro ... 3. Restart the container:

bash docker-compose up -d 4. In the NetAlertX web UI, configure the plugin to read from:

/mnt/dhcpd.leases

"},{"location":"DOCKER_MAINTENANCE/#about-this-method_3","title":"About This Method","text":"

This maps your host file to a new, read-only (:ro) location inside the container. The plugin can then safely read this file without exposing anything else on your host filesystem.

"},{"location":"DOCKER_PORTAINER/","title":"Deploying NetAlertX in Portainer (via Stacks)","text":"

This guide shows you how to set up NetAlertX using Portainer\u2019s Stacks feature.

"},{"location":"DOCKER_PORTAINER/#1-prepare-your-host","title":"1. Prepare Your Host","text":"

Before deploying, make sure you have a folder on your Docker host for NetAlertX data. Replace APP_FOLDER with your preferred location, for example /opt here:

mkdir -p /opt/netalertx/config\nmkdir -p /opt/netalertx/db\nmkdir -p /opt/netalertx/log\n
"},{"location":"DOCKER_PORTAINER/#2-open-portainer-stacks","title":"2. Open Portainer Stacks","text":"
  1. Log in to your Portainer UI.
  2. Navigate to Stacks \u2192 Add stack.
  3. Give your stack a name (e.g., netalertx).
"},{"location":"DOCKER_PORTAINER/#3-paste-the-stack-configuration","title":"3. Paste the Stack Configuration","text":"

Copy and paste the following YAML into the Web editor:

services:\n  netalertx:\n    container_name: netalertx\n\n    # Use this line for stable release\n    image: \"ghcr.io/jokob-sk/netalertx:latest\"      \n\n    # Or, use this for the latest development build\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\" \n\n    network_mode: \"host\"\n    restart: unless-stopped\n\n    volumes:\n      - ${APP_FOLDER}/netalertx/config:/data/config\n      - ${APP_FOLDER}/netalertx/db:/data/db\n      # Optional: logs (useful for debugging setup issues, comment out for performance)\n      - ${APP_FOLDER}/netalertx/log:/tmp/log\n\n      # API storage options:\n      # (Option 1) tmpfs (default, best performance)\n      - type: tmpfs\n        target: /tmp/api\n\n      # (Option 2) bind mount (useful for debugging)\n      # - ${APP_FOLDER}/netalertx/api:/tmp/api\n\n    environment:\n      - TZ=${TZ}\n      - PORT=${PORT}\n      - APP_CONF_OVERRIDE=${APP_CONF_OVERRIDE}\n
"},{"location":"DOCKER_PORTAINER/#4-configure-environment-variables","title":"4. Configure Environment Variables","text":"

In the Environment variables section of Portainer, add the following:

"},{"location":"DOCKER_PORTAINER/#5-deploy-the-stack","title":"5. Deploy the Stack","text":"
  1. Scroll down and click Deploy the stack.
  2. Portainer will pull the image and start NetAlertX.
  3. Once running, access the app at:
http://<your-docker-host-ip>:22022\n
"},{"location":"DOCKER_PORTAINER/#6-verify-and-troubleshoot","title":"6. Verify and Troubleshoot","text":"

Once the application is running, configure it by reading the initial setup guide, or troubleshoot common issues.

"},{"location":"DOCKER_SWARM/","title":"Docker Swarm Deployment Guide (IPvlan)","text":"

This guide describes how to deploy NetAlertX in a Docker Swarm environment using an ipvlan network. This enables the container to receive a LAN IP address directly, which is ideal for network monitoring.

"},{"location":"DOCKER_SWARM/#step-1-create-an-ipvlan-config-only-network-on-all-nodes","title":"\u2699\ufe0f Step 1: Create an IPvlan Config-Only Network on All Nodes","text":"

Run this command on each node in the Swarm.

docker network create -d ipvlan \\\n  --subnet=192.168.1.0/24 \\              # \ud83d\udd27 Replace with your LAN subnet\n  --gateway=192.168.1.1 \\                # \ud83d\udd27 Replace with your LAN gateway\n  -o ipvlan_mode=l2 \\\n  -o parent=eno1 \\                       # \ud83d\udd27 Replace with your network interface (e.g., eth0, eno1)\n  --config-only \\\n  ipvlan-swarm-config\n
"},{"location":"DOCKER_SWARM/#step-2-create-the-swarm-scoped-ipvlan-network-one-time-setup","title":"\ud83d\udda5\ufe0f Step 2: Create the Swarm-Scoped IPvlan Network (One-Time Setup)","text":"

Run this on one Swarm manager node only.

docker network create -d ipvlan \\\n  --scope swarm \\\n  --config-from ipvlan-swarm-config \\\n  swarm-ipvlan\n
"},{"location":"DOCKER_SWARM/#step-3-deploy-netalertx-with-docker-compose","title":"\ud83e\uddfe Step 3: Deploy NetAlertX with Docker Compose","text":"

Use the following Compose snippet to deploy NetAlertX with a static LAN IP assigned via the swarm-ipvlan network.

services:\n  netalertx:\n    image: ghcr.io/jokob-sk/netalertx:latest\n    ports:\n      - 20211:20211\n    volumes:\n      - /mnt/YOUR_SERVER/netalertx/config:/data/config:rw\n      - /mnt/YOUR_SERVER/netalertx/db:/netalertx/data/db:rw\n      - /mnt/YOUR_SERVER/netalertx/logs:/netalertx/tmp/log:rw\n    environment:\n      - TZ=Europe/London\n      - PORT=20211\n    networks:\n      swarm-ipvlan:\n        ipv4_address: 192.168.1.240     # \u26a0\ufe0f Choose a free IP from your LAN\n    deploy:\n      mode: replicated\n      replicas: 1\n      restart_policy:\n        condition: on-failure\n      placement:\n        constraints:\n          - node.role == manager        # \ud83d\udd04 Or use: node.labels.netalertx == true\n\nnetworks:\n  swarm-ipvlan:\n    external: true\n
"},{"location":"DOCKER_SWARM/#notes","title":"\u2705 Notes","text":""},{"location":"FILE_PERMISSIONS/","title":"Managing File Permissions for NetAlertX on a Read-Only Container","text":"

Tip

NetAlertX runs in a secure, read-only Alpine-based container under a dedicated netalertx user (UID 20211, GID 20211). All writable paths are either mounted as persistent volumes or tmpfs filesystems. This ensures consistent file ownership and prevents privilege escalation.

"},{"location":"FILE_PERMISSIONS/#writable-paths","title":"Writable Paths","text":"

NetAlertX requires certain paths to be writable at runtime. These paths should be mounted either as host volumes or tmpfs in your docker-compose.yml or docker run command:

Path Purpose Notes /data/config Application configuration Persistent volume recommended /data/db Database files Persistent volume recommended /tmp/log Logs Lives under /tmp; optional host bind to retain logs /tmp/api API cache Subdirectory of /tmp /tmp/nginx/active-config Active nginx configuration override Mount /tmp (or override specific file) /tmp/run Runtime directories for nginx & PHP Subdirectory of /tmp /tmp PHP session save directory Backed by tmpfs for runtime writes

Mounting /tmp as tmpfs automatically covers all of its subdirectories (log, api, run, nginx/active-config, etc.).

All these paths will have UID 20211 / GID 20211 inside the container. Files on the host will appear owned by 20211:20211.
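To confirm this on the host, `stat` can print the numeric UID/GID of the mounted directories. A sketch, assuming the volumes live under `./netalertx` (replace with your actual paths; the `mkdir` is only so the snippet runs anywhere):

```shell
# Print numeric owner and group of each data directory on the host.
for d in ./netalertx/config ./netalertx/db; do
  mkdir -p "$d"              # demo only; your real directories already exist
  stat -c '%u:%g %n' "$d"
done
# On a correctly set up host, both lines should report 20211:20211.
```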

"},{"location":"FILE_PERMISSIONS/#fixing-permission-problems","title":"Fixing Permission Problems","text":"

Sometimes, permission issues arise if your existing host directories were created by a previous container running as root or another UID. The container will fail to start with \"Permission Denied\" errors.

"},{"location":"FILE_PERMISSIONS/#solution","title":"Solution","text":"
  1. Run the container once as root (--user \"0\") to allow it to correct permissions automatically:
docker run -it --rm --name netalertx --user \"0\" \\\n  -v local/path/config:/data/config \\\n  -v local/path/db:/data/db \\\n  ghcr.io/jokob-sk/netalertx:latest\n
  2. Wait for logs showing permissions being fixed. The container will then hang intentionally.
  3. Press Ctrl+C to stop the container.
  4. Start the container normally with your docker-compose.yml or docker run command.

The container startup script detects root and runs chown -R 20211:20211 on all volumes, fixing ownership for the secure netalertx user.

"},{"location":"FILE_PERMISSIONS/#example-docker-composeyml-with-tmpfs","title":"Example: docker-compose.yml with tmpfs","text":"
services:\n  netalertx:                                  \n    container_name: netalertx                \n    image: \"ghcr.io/jokob-sk/netalertx\"  \n    network_mode: \"host\"       \n    cap_add:\n      - NET_RAW\n      - NET_ADMIN\n      - NET_BIND_SERVICE\n    restart: unless-stopped\n    volumes:\n      - local/path/config:/data/config         \n      - local/path/db:/data/db                 \n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n    tmpfs:\n      - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n

This setup ensures all writable paths are either in tmpfs or host-mounted, and the container never writes outside of controlled volumes.

"},{"location":"FIX_OFFLINE_DETECTION/","title":"Troubleshooting: Devices Show Offline When They Are Online","text":"

In some network setups, certain devices may intermittently appear as offline in NetAlertX, even though they are connected and responsive. This issue is often more noticeable with devices that have higher IP addresses within the subnet.

Note

Network presence graph showing increased dropouts before enabling additional ICMP scans, and continuous online presence after following this guide. The sudden spike in dropouts was probably caused by a device software update.

"},{"location":"FIX_OFFLINE_DETECTION/#symptoms","title":"Symptoms","text":""},{"location":"FIX_OFFLINE_DETECTION/#cause","title":"Cause","text":"

This issue is typically related to scanning limitations:

"},{"location":"FIX_OFFLINE_DETECTION/#recommended-fixes","title":"Recommended Fixes","text":"

To improve presence accuracy and reduce false offline states:

"},{"location":"FIX_OFFLINE_DETECTION/#increase-arp-scan-timeout","title":"\u2705 Increase ARP Scan Timeout","text":"

Extend the ARP scanner timeout and DURATION to ensure full subnet coverage:

ARPSCAN_RUN_TIMEOUT=360\nARPSCAN_DURATION=30\n

Adjust based on your network size and device count.

"},{"location":"FIX_OFFLINE_DETECTION/#add-icmp-ping-scanning","title":"\u2705 Add ICMP (Ping) Scanning","text":"

Enable the ICMP scan plugin to complement ARP detection. ICMP is often more reliable for detecting active hosts, especially when ARP fails.

"},{"location":"FIX_OFFLINE_DETECTION/#use-multiple-detection-methods","title":"\u2705 Use Multiple Detection Methods","text":"

A combined approach greatly improves detection robustness:

This hybrid strategy increases reliability, especially for down detection and alerting. See other plugins that might be compatible with your setup. See benefits and drawbacks of individual scan methods in their respective docs.

"},{"location":"FIX_OFFLINE_DETECTION/#results","title":"Results","text":"

After increasing the ARP timeout and adding ICMP scanning (on select IP ranges), users typically report:

"},{"location":"FIX_OFFLINE_DETECTION/#summary","title":"Summary","text":"Setting Recommendation ARPSCAN_RUN_TIMEOUT Increase to ensure scans reach all IPs ICMP Scan Enable to detect devices ARP might miss Multi-method Scanning Use a mix of ARP, ICMP, and NMAP-based methods

Tip: Each environment is unique. Consider fine-tuning scan settings based on your network size, device behavior, and desired detection accuracy.

Let us know in the NetAlertX Discussions if you have further feedback or edge cases.

See also Remote Networks for more advanced setups.

"},{"location":"FRONTEND_DEVELOPMENT/","title":"Frontend development","text":"

This page contains tips for frontend development when extending NetAlertX. Guiding principles are:

  1. Maintainability
  2. Extendability
  3. Reusability
  4. Placing more functionality into Plugins and enhancing core Plugins functionality

That means that, when writing code, focus on reusing what's available instead of writing quick fixes, and on creating reusable functions instead of bespoke functionality.

"},{"location":"FRONTEND_DEVELOPMENT/#examples","title":"\ud83d\udd0d Examples","text":"

Some examples of how to apply the above:

Example 1

I want to implement a scan function. Options would be:

  1. To add a manual scan functionality to the deviceDetails.php page.
  2. To create a separate page that handles the execution of the scan.
  3. To create a configurable Plugin.

From the above, number 3 would be the most appropriate solution, followed by number 2. Number 1 would be approved only in special circumstances.

Example 2

I want to change the behavior of the application. Options to implement this could be:

  1. Hard-code the changes in the code.
  2. Implement the changes and add settings to influence the behavior in the initialize.py file so the user can adjust these.
  3. Implement the changes and add settings via a setting-only plugin.
  4. Implement the changes in a way so the behavior can be toggled on each plugin so the core capabilities of Plugins get extended.

From the above, number 4 would be the most appropriate solution, followed by number 3. Number 1 or 2 would be approved only in special circumstances.

"},{"location":"FRONTEND_DEVELOPMENT/#frontend-tips","title":"\ud83d\udca1 Frontend tips","text":"

Some useful frontend JavaScript functions:

Check the common.js file for more frontend functions.

"},{"location":"HELPER_SCRIPTS/","title":"NetAlertX Community Helper Scripts Overview","text":"

This page provides an overview of community-contributed scripts for NetAlertX. These scripts are not actively maintained and are provided as-is.

"},{"location":"HELPER_SCRIPTS/#community-scripts","title":"Community Scripts","text":"

You can find all scripts in this scripts GitHub folder.

Script Name Description Author Version Release Date New Devices Checkmk Script Checks for new devices in NetAlertX and reports status to Checkmk. N/A 1.0 08-Jan-2025 DB Cleanup Script Queries and removes old device-related entries from the database. laxduke 1.0 23-Dec-2024 OPNsense DHCP Lease Converter Retrieves DHCP lease data from OPNsense and converts it to dnsmasq format. im-redactd 1.0 24-Feb-2025"},{"location":"HELPER_SCRIPTS/#important-notes","title":"Important Notes","text":"

Note

These scripts are community-supplied and not actively maintained. Use at your own discretion.

For detailed usage instructions, refer to each script's documentation in each scripts GitHub folder.

"},{"location":"HOME_ASSISTANT/","title":"Home Assistant integration overview","text":"

NetAlertX comes with MQTT support, allowing you to show all detected devices as devices in Home Assistant. It also supplies a collection of stats, such as the number of online devices.

Tip

You can install NetAlertX also as a Home Assistant addon via the alexbelgium/hassio-addons repository. This is only possible if you run a supervised instance of Home Assistant. If not, you can still run NetAlertX in a separate Docker container and follow this guide to configure MQTT.

"},{"location":"HOME_ASSISTANT/#note","title":"\u26a0 Note","text":""},{"location":"HOME_ASSISTANT/#guide","title":"\ud83e\udded Guide","text":"

\ud83d\udca1 This guide was tested only with the Mosquitto MQTT broker

  1. Enable Mosquitto MQTT in Home Assistant by following the documentation

  2. Configure a user name and password on your broker.

  3. Note down the following details that you will need to configure NetAlertX:

  4. Open the NetAlertX > Settings > MQTT settings group

"},{"location":"HOME_ASSISTANT/#screenshots","title":"\ud83d\udcf7 Screenshots","text":""},{"location":"HOME_ASSISTANT/#troubleshooting","title":"Troubleshooting","text":"

If you can't see all devices detected, run sudo arp-scan --interface=eth0 192.168.1.0/24 (change these based on your setup, read Subnets docs for details). This command has to be executed in the NetAlertX container, not in the Home Assistant container.

You can access the NetAlertX container via Portainer on your host or via ssh. The container name will be something like addon_db21ed7f_netalertx (you can copy the db21ed7f_netalertx part from the browser when accessing the UI of NetAlertX).

"},{"location":"HOME_ASSISTANT/#accessing-the-netalertx-container-via-ssh","title":"Accessing the NetAlertX container via SSH","text":"
  1. Log into your Home Assistant host via SSH
local@local:~ $ ssh pi@192.168.1.9\n
  2. Find the NetAlertX container name, in this case addon_db21ed7f_netalertx
pi@raspberrypi:~ $ sudo docker container ls | grep netalertx\n06c540d97f67   ghcr.io/alexbelgium/netalertx-armv7:25.3.1                   \"/init\"               6 days ago      Up 6 days (healthy)    addon_db21ed7f_netalertx\n
  3. SSH into the NetAlertX container
pi@raspberrypi:~ $ sudo docker exec -it addon_db21ed7f_netalertx  /bin/sh\n/ #\n
  4. Execute a test arp-scan
/ # sudo arp-scan --ignoredups --retry=6 192.168.1.0/24 --interface=eth0\nInterface: eth0, type: EN10MB, MAC: dc:a6:32:73:8a:b1, IPv4: 192.168.1.9\nStarting arp-scan 1.10.0 with 256 hosts (https://github.com/royhills/arp-scan)\n192.168.1.1     74:ac:b9:54:09:fb       Ubiquiti Networks Inc.\n192.168.1.21    74:ac:b9:ad:c3:30       Ubiquiti Networks Inc.\n192.168.1.58    1c:69:7a:a2:34:7b       EliteGroup Computer Systems Co., LTD\n192.168.1.57    f4:92:bf:a3:f3:56       Ubiquiti Networks Inc.\n...\n

If your output doesn't contain entries similar to the above, double-check your subnet and interface, and if you are dealing with an inaccessible network segment, read the Remote networks documentation.

"},{"location":"HW_INSTALL/","title":"How to install NetAlertX on the server hardware","text":"

To download and install NetAlertX directly on the hardware/server, use the curl or wget commands at the bottom of this page.

Note

This is an Experimental feature \ud83e\uddea and it relies on community support.

\ud83d\ude4f Looking for maintainers for this installation method \ud83d\ude42 Current community volunteers: - slammingprogramming - ingoratsdorf

There is no guarantee that the install script or any other script will gracefully handle other installed software. Data loss is a possibility, so it is recommended to install NetAlertX using the supplied Docker image.

Warning

A warning about the installation method below: piping to bash is controversial and may be dangerous, as you cannot see the code that's about to be executed on your system.

If you trust this repo, you can download the install script via one of the methods (curl/wget) below and it will do its best to install NetAlertX on your system.

Alternatively you can download the installation script from the repository and check the code yourself.

NetAlertX will be installed in /app and run on port number 20211.

Some facts about what will be changed/installed, and where, by the HW install setup (the list may not be exhaustive):

"},{"location":"HW_INSTALL/#limitations","title":"Limitations","text":"

Tip

If the below fails try grabbing and installing one of the previous releases and run the installation from the zip package.

These commands will download the install.debian12.sh script from the GitHub repository, make it executable with chmod, and then run it using ./install.debian12.sh.

Make sure you have the necessary permissions to execute the script.

"},{"location":"HW_INSTALL/#debian-12-bookworm","title":"\ud83d\udce5 Debian 12 (Bookworm)","text":""},{"location":"HW_INSTALL/#installation-via-curl","title":"Installation via curl","text":"
curl -o install.debian12.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/debian12/install.debian12.sh && sudo chmod +x install.debian12.sh && sudo ./install.debian12.sh\n
"},{"location":"HW_INSTALL/#installation-via-wget","title":"Installation via wget","text":"
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/debian12/install.debian12.sh -O install.debian12.sh && sudo chmod +x install.debian12.sh && sudo ./install.debian12.sh\n
"},{"location":"HW_INSTALL/#ubuntu-24-noble-numbat","title":"\ud83d\udce5 Ubuntu 24 (Noble Numbat)","text":"

Note

Maintained by ingoratsdorf

"},{"location":"HW_INSTALL/#installation-via-curl_1","title":"Installation via curl","text":"
curl -o install.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.sh && sudo chmod +x install.sh && sudo ./install.sh\n
"},{"location":"HW_INSTALL/#installation-via-wget_1","title":"Installation via wget","text":"
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.sh -O install.sh && sudo chmod +x install.sh && sudo ./install.sh\n
"},{"location":"HW_INSTALL/#bare-metal-proxmox","title":"\ud83d\udce5 Bare Metal - Proxmox","text":"

Note

Use this on a clean LXC/VM for Debian 13 OR Ubuntu 24. The script will detect the OS and build accordingly. Maintained by JVKeller

"},{"location":"HW_INSTALL/#installation-via-wget_2","title":"Installation via wget","text":"
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/proxmox/proxmox-install-netalertx.sh -O proxmox-install-netalertx.sh && chmod +x proxmox-install-netalertx.sh && ./proxmox-install-netalertx.sh\n
"},{"location":"ICONS/","title":"Icons","text":""},{"location":"ICONS/#icons-overview","title":"Icons overview","text":"

Icons are used to visually distinguish devices in the app in most of the device listing tables and the network tree.

"},{"location":"ICONS/#icons-support","title":"Icons Support","text":"

Two types of icons are supported:

You can assign icons individually on each device in the Details tab.

"},{"location":"ICONS/#adding-new-icons","title":"Adding new icons","text":"
  1. You can get an SVG or a Font Awesome HTML code

Copying the SVG (for example from iconify.design):

Copying the HTML code from Font Awesome.

  2. Navigate to the device you want to use the icon on and click the \"+\" icon:

  3. Paste in the copied HTML or SVG code and click \"OK\":

  1. \"Save\" the device

Note

If you want to mass-apply an icon to all devices of the same device type (Field: Type), you can click the mass-copy button (next to the \"+\" button). A confirmation prompt is displayed. If you proceed, icons of all devices set to the same device type as the current device, will be overwritten with the current device's icon.

"},{"location":"ICONS/#font-awesome-pro-icons","title":"Font Awesome Pro icons","text":"

If you own the premium package of Font Awesome icons you can mount it in your Docker container the following way:

/font-awesome:/app/front/lib/font-awesome:ro\n

You can use the full range of Font Awesome icons afterwards.

"},{"location":"INITIAL_SETUP/","title":"\u26a1 Quick Start Guide","text":"

Get NetAlertX up and running in a few simple steps.

"},{"location":"INITIAL_SETUP/#1-configure-scanner-plugins","title":"1. Configure Scanner Plugin(s)","text":"

Tip

Enable additional plugins under Settings \u2192 LOADED_PLUGINS. Make sure to save your changes and reload the page to activate them.

Initial configuration: ARPSCAN, INTRNT

Note

ARPSCAN and INTRNT scan the current network. You can complement them with other \ud83d\udd0d dev scanner plugins like NMAPDEV, or import devices using \ud83d\udce5 importer plugins. See the Subnet & VLAN Setup Guide and Remote Networks for advanced configurations.

"},{"location":"INITIAL_SETUP/#2-choose-a-publisher-plugin","title":"2. Choose a Publisher Plugin","text":"

Initial configuration: SMTP

Note

Configure your SMTP settings or enable additional \u25b6\ufe0f publisher plugins to send alerts. For more flexibility, try \ud83d\udcda _publisher_apprise, which supports over 80 notification services.

"},{"location":"INITIAL_SETUP/#3-set-up-a-network-topology-diagram","title":"3. Set Up a Network Topology Diagram","text":"

Initial configuration: The app auto-selects a root node (MAC internet) and attempts to identify other network devices by vendor or name.

Note

Visualize and manage your network using the Network Guide. Some plugins (e.g., UNFIMP) build the topology automatically, or you can use Custom Workflows to generate it based on your own rules.

"},{"location":"INITIAL_SETUP/#4-configure-notifications","title":"4. Configure Notifications","text":"

Initial configuration: Notifies on new_devices, down_devices, and events as defined in NTFPRCS_INCLUDED_SECTIONS.

Note

Notification settings support global, plugin-specific, and per-device rules. For fine-tuning, refer to the Notification Guide.

"},{"location":"INITIAL_SETUP/#5-set-up-workflows","title":"5. Set Up Workflows","text":"

Initial configuration: N/A

Note

Automate responses to device status changes, group management, topology updates, and more. See the Workflows Guide to simplify your network operations.

"},{"location":"INITIAL_SETUP/#6-backup-your-configuration","title":"6. Backup Your Configuration","text":"

Initial configuration: The CSVBCKP plugin creates a daily backup to /config/devices.csv.

Note

For a complete backup strategy, follow the Backup Guide.

"},{"location":"INITIAL_SETUP/#7-optional-create-custom-plugins","title":"7. (Optional) Create Custom Plugins","text":"

Initial configuration: N/A

Note

Build your own scanner, importer, or publisher plugin. See the Plugin Development Guide and included video tutorials.

"},{"location":"INITIAL_SETUP/#recommended-guides","title":"\ud83d\udcc1 Recommended Guides","text":""},{"location":"INITIAL_SETUP/#troubleshooting-help","title":"\ud83d\udee0\ufe0f Troubleshooting & Help","text":"

Before opening a new issue:


"},{"location":"INSTALLATION/","title":"Installation","text":""},{"location":"INSTALLATION/#installation-options","title":"Installation options","text":"

NetAlertX can be installed in several ways. The best supported option is Docker, followed by a supervised Home Assistant instance, then as an Unraid app, and lastly on bare metal.

"},{"location":"INSTALLATION/#help","title":"Help","text":"

If you are facing issues, please spend a few minutes searching first.

Note

If you can't find a solution anywhere, ask in Discord if you think it's a quick question, otherwise open a new issue. Please fill in as much as possible to speed up the help process.

"},{"location":"LOGGING/","title":"Logging","text":"

NetAlertX comes with several logs that help to identify application issues. These include nginx, app, and plugin logs. For plugin-specific log debugging, please read the Debug Plugins guide.

Note

When debugging any issue, increase the LOG_LEVEL Setting as per the Debug tips documentation.

"},{"location":"LOGGING/#main-logs","title":"Main logs","text":"

You can find most of the logs exposed in the UI under Maintenance -> Logs.

If the UI is inaccessible, you can access them under /tmp/log.

In Maintenance -> Logs you can purge logs, download the full log file, or filter lines by a substring to narrow down your search.

"},{"location":"LOGGING/#plugin-logging","title":"Plugin logging","text":"

If a Plugin supplies data to the main app, it's done either via a SQL query or via a script that updates the last_result.log file in the plugin log folder (app/log/plugins/). These files are processed at the end of the scan and deleted on successful processing.

In most cases the data is then displayed in the application under Integrations -> Plugins (or Device -> Plugins if the plugin is supplying device-specific data).

"},{"location":"LOGGING/#viewing-logs-on-the-file-system","title":"Viewing Logs on the File System","text":"

By default, you will not find any log files on the filesystem. The container is read-only and writes logs to a temporary in-memory filesystem (tmpfs) for security and performance. The application follows container best practices by writing all logs to the standard output (stdout) and standard error (stderr) streams. Docker's logging driver (set in docker-compose.yml) captures this stream automatically, allowing you to access it with the docker logs <container_name> command.

To view the logs:

bash docker logs netalertx

To watch the logs live (live feed):

bash docker logs -f netalertx

"},{"location":"LOGGING/#enabling-persistent-file-based-logs","title":"Enabling Persistent File-Based Logs","text":"

The default logs are erased every time the container restarts because they are stored in temporary in-memory storage (tmpfs). If you need to keep a persistent, file-based log history, follow the steps below.

Note

This might lead to performance degradation, so this approach is only suggested when actively debugging issues. See the Performance optimization documentation for details.

  1. Stop the container:

bash docker-compose down

  1. Edit your docker-compose.yml file:

  2. Comment out the /tmp/log line under the tmpfs: section.

  3. Uncomment the \"Retain logs\" line under the volumes: section and set your desired host path.

yaml ... tmpfs: # - \"/tmp/log:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\" ... volumes: ... # Retain logs - comment out tmpfs /tmp/log if you want to retain logs between container restarts - /home/adam/netalertx_logs:/tmp/log ...

  3. Restart the container:

bash docker-compose up -d

This change stops Docker from mounting a temporary in-memory volume at /tmp/log. Instead, it \"bind mounts\" a persistent folder from your host computer (e.g., /home/adam/netalertx_logs) to that same location inside the container.

"},{"location":"MIGRATION/","title":"Migration","text":"

Warning

\u26a0\ufe0f Important: The documentation has been recently updated and some instructions may have changed. If you are using the currently live production image, please follow the instructions on Docker Hub for building and running the container. These docs reflect the latest development version and may differ from the production image.

When upgrading from older versions of NetAlertX (or PiAlert by jokob-sk), follow the migration steps below to ensure your data and configuration are properly transferred.

Tip

It's always important to have a backup strategy in place.

"},{"location":"MIGRATION/#migration-scenarios","title":"Migration scenarios","text":""},{"location":"MIGRATION/#10-manual-migration","title":"1.0 Manual Migration","text":"

You can migrate data manually, for example by exporting and importing devices using the CSV import method.

"},{"location":"MIGRATION/#11-migration-from-pialert-to-netalertx-v25524","title":"1.1 Migration from PiAlert to NetAlertX v25.5.24","text":""},{"location":"MIGRATION/#steps","title":"STEPS:","text":"

The application will automatically migrate the database, configuration, and all device information. A banner message will appear at the top of the web UI reminding you to update your Docker mount points.

  1. Stop the container
  2. Back up your setup
  3. Update the Docker file mount locations in your docker-compose.yml or docker run command (See below New Docker mount locations).
  4. Rename the DB and conf files to app.db and app.conf and place them in the appropriate location.
  5. Start the container

Tip

If you have trouble accessing past backups, config or database files you can copy them into the newly mapped directories, for example by running this command in the container: cp -r /data/config /home/pi/pialert/config/old_backup_files. This should create a folder in the config directory called old_backup_files containing all the files in that location. Another approach is to map the old location and the new one at the same time to copy things over.

"},{"location":"MIGRATION/#new-docker-mount-locations","title":"New Docker mount locations","text":"

The internal application path in the container has changed from /home/pi/pialert to /app. Update your volume mounts as follows:

Old mount point New mount point /home/pi/pialert/config /data/config /home/pi/pialert/db /data/db

If you were mounting files directly, please note the file names have changed:

Old file name New file name pialert.conf app.conf pialert.db app.db

Note

The application automatically creates symlinks from the old database and config locations to the new ones, so data loss should not occur. Read the backup strategies guide to backup your setup.

"},{"location":"MIGRATION/#examples","title":"Examples","text":"

Examples of docker files with the new mount points.

"},{"location":"MIGRATION/#example-1-mapping-folders","title":"Example 1: Mapping folders","text":""},{"location":"MIGRATION/#old-docker-composeyml","title":"Old docker-compose.yml","text":"
services:\n  pialert:\n    container_name: pialert\n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\" \n    image: \"jokobsk/pialert:latest\"      \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - /local/path/config:/home/pi/pialert/config  \n      - /local/path/db:/home/pi/pialert/db         \n      # (optional) useful for debugging if you have issues setting up the container\n      - /local/path/logs:/home/pi/pialert/front/log\n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n
"},{"location":"MIGRATION/#new-docker-composeyml","title":"New docker-compose.yml","text":"
services:\n  netalertx:                                  # \ud83c\udd95 This has changed  \n    container_name: netalertx                 # \ud83c\udd95 This has changed  \n    image: \"ghcr.io/jokob-sk/netalertx:25.5.24\"         # \ud83c\udd95 This has changed  \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - /local/path/config:/data/config         # \ud83c\udd95 This has changed  \n      - /local/path/db:/data/db                 # \ud83c\udd95 This has changed  \n      # (optional) useful for debugging if you have issues setting up the container\n      - /local/path/logs:/tmp/log        # \ud83c\udd95 This has changed  \n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n
"},{"location":"MIGRATION/#example-2-mapping-files","title":"Example 2: Mapping files","text":"

Note

The recommendation is to map folders as in Example 1; map files directly only when needed.

"},{"location":"MIGRATION/#old-docker-composeyml_1","title":"Old docker-compose.yml","text":"
services:\n  pialert:\n    container_name: pialert\n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\" \n    image: \"jokobsk/pialert:latest\"      \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - /local/path/config/pialert.conf:/home/pi/pialert/config/pialert.conf  \n      - /local/path/db/pialert.db:/home/pi/pialert/db/pialert.db         \n      # (optional) useful for debugging if you have issues setting up the container\n      - /local/path/logs:/home/pi/pialert/front/log\n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n
"},{"location":"MIGRATION/#new-docker-composeyml_1","title":"New docker-compose.yml","text":"
services:\n  netalertx:                                  # \ud83c\udd95 This has changed  \n    container_name: netalertx                 # \ud83c\udd95 This has changed  \n    image: \"ghcr.io/jokob-sk/netalertx:25.5.24\"         # \ud83c\udd95 This has changed  \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - /local/path/config/app.conf:/data/config/app.conf # \ud83c\udd95 This has changed  \n      - /local/path/db/app.db:/data/db/app.db             # \ud83c\udd95 This has changed  \n      # (optional) useful for debugging if you have issues setting up the container\n      - /local/path/logs:/tmp/log                  # \ud83c\udd95 This has changed  \n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n
"},{"location":"MIGRATION/#12-migration-from-netalertx-v25524","title":"1.2 Migration from NetAlertX v25.5.24","text":"

Versions before v25.10.1 require an intermediate migration through v25.5.24 to ensure database compatibility. Skipping this step may cause compatibility issues due to database schema changes introduced after v25.5.24.

"},{"location":"MIGRATION/#steps_1","title":"STEPS:","text":"
  1. Stop the container
  2. Back up your setup
  3. Upgrade to v25.5.24 by pinning the release version (See Examples below)
  4. Start the container and verify everything works as expected.
  5. Stop the container
  6. Upgrade to v25.10.1 by pinning the release version (See Examples below)
  7. Start the container and verify everything works as expected.
"},{"location":"MIGRATION/#examples_1","title":"Examples","text":"

Examples of docker files with the tagged version.

"},{"location":"MIGRATION/#example-1-mapping-folders_1","title":"Example 1: Mapping folders","text":""},{"location":"MIGRATION/#docker-composeyml-changes","title":"docker-compose.yml changes","text":"
services:\n  netalertx:                                  \n    container_name: netalertx                \n    image: \"ghcr.io/jokob-sk/netalertx:25.5.24\"         # \ud83c\udd95 This is important  \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - /local/path/config:/data/config         \n      - /local/path/db:/data/db                 \n      # (optional) useful for debugging if you have issues setting up the container\n      - /local/path/logs:/tmp/log        \n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n
services:\n  netalertx:                                  \n    container_name: netalertx                \n    image: \"ghcr.io/jokob-sk/netalertx:25.10.1\"         # \ud83c\udd95 This is important  \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - /local/path/config:/data/config         \n      - /local/path/db:/data/db                 \n      # (optional) useful for debugging if you have issues setting up the container\n      - /local/path/logs:/tmp/log        \n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n
"},{"location":"MIGRATION/#13-migration-from-netalertx-v25101","title":"1.3 Migration from NetAlertX v25.10.1","text":"

Starting from v25.10.1, the container uses a more secure, read-only runtime environment, which requires all writable paths (e.g., logs, API cache, temporary data) to be mounted as tmpfs or permanent writable volumes, with sufficient access permissions.

"},{"location":"MIGRATION/#steps_2","title":"STEPS:","text":"
  1. Stop the container
  2. Back up your setup
  3. Upgrade to v25.10.1 by pinning the release version (See the example below)
services:\n  netalertx:                                  \n    container_name: netalertx                \n    image: \"ghcr.io/jokob-sk/netalertx:25.10.1\"         # \ud83c\udd95 This is important  \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - /local/path/config:/data/config         \n      - /local/path/db:/data/db                 \n      # (optional) useful for debugging if you have issues setting up the container\n      - /local/path/logs:/tmp/log        \n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n
  1. Start the container and verify everything works as expected.
  2. Stop the container.
  3. Perform a one-off migration to the latest netalertx image and 20211 user:

Note

The example below assumes your /config and /db folders are stored in /local/path. Replace this path with your actual configuration directory. netalertx is the container name, which might differ from your setup.

docker run -it --rm --name netalertx --user \"0\" \\\n  -v /local/path/config:/data/config \\\n  -v /local/path/db:/data/db \\\n  ghcr.io/jokob-sk/netalertx:latest\n

...or alternatively execute:

sudo chown -R 20211:20211 /local/path/config\nsudo chown -R 20211:20211 /local/path/db\nsudo chmod -R a+rwx /local/path/\n
  1. Stop the container
  2. Update the docker-compose.yml as per example below.
services:\n  netalertx:                                  \n    container_name: netalertx                \n    image: \"ghcr.io/jokob-sk/netalertx\"         # \ud83c\udd95 This is important  \n    network_mode: \"host\"       \n    cap_add:                          # \ud83c\udd95 New line\n      - NET_RAW                       # \ud83c\udd95 New line \n      - NET_ADMIN                     # \ud83c\udd95 New line\n      - NET_BIND_SERVICE              # \ud83c\udd95 New line \n    restart: unless-stopped\n    volumes:\n      - /local/path/config:/data/config         \n      - /local/path/db:/data/db                 \n      # (optional) useful for debugging if you have issues setting up the container\n      #- /local/path/logs:/tmp/log        \n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n    # \ud83c\udd95 New \"tmpfs\" section START \ud83d\udd3d\n    tmpfs:\n      # All writable runtime state resides under /tmp; comment out to persist logs between restarts\n      - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n    # \ud83c\udd95 New \"tmpfs\" section END  \ud83d\udd3c\n
  1. Start the container and verify everything works as expected.
"},{"location":"NAME_RESOLUTION/","title":"Device Name Resolution","text":"

Name resolution in NetAlertX relies on multiple plugins to resolve device names from IP addresses. If you are seeing (name not found) as device names, follow these steps to diagnose and fix the issue.

Tip

Before proceeding, make sure Reverse DNS is enabled on your network. You can control how names are handled and cleaned using the NEWDEV_NAME_CLEANUP_REGEX setting. To auto-update Fully Qualified Domain Names (FQDN), enable the REFRESH_FQDN setting.
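As an illustration of how a cleanup regex like NEWDEV_NAME_CLEANUP_REGEX might work, the sketch below strips common DNS suffixes from resolved names. The pattern and the `clean_name` helper are hypothetical examples, not the app's defaults:

```python
import re

# Hypothetical cleanup pattern: strip a trailing ".local" or ".lan" suffix
# from resolved device names (not the app's default regex).
CLEANUP_REGEX = r"\.(local|lan)$"

def clean_name(raw_name: str) -> str:
    """Apply the cleanup regex to a resolved device name."""
    return re.sub(CLEANUP_REGEX, "", raw_name.strip())

print(clean_name("printer.local"))  # printer
print(clean_name("desktop"))        # desktop (unchanged)
```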

"},{"location":"NAME_RESOLUTION/#required-plugins","title":"Required Plugins","text":"

For best results, ensure the following name resolution plugins are enabled:

You can check which plugins are active in your Settings section and enable any that are missing.

There are other plugins that can supply device names as well, but they rely on bespoke hardware and services. See Plugins overview for details and look for plugins with name discovery (\ud83c\udd8e) features.

"},{"location":"NAME_RESOLUTION/#checking-logs","title":"Checking Logs","text":"

If names are not resolving, check the logs for errors or timeouts.

See how to explore logs in the Logging guide.

Logs will show which plugins attempted resolution and any failures encountered.

"},{"location":"NAME_RESOLUTION/#adjusting-timeout-settings","title":"Adjusting Timeout Settings","text":"

If resolution is slow or failing due to timeouts, increase the timeout settings in your configuration, for example:

NSLOOKUP_RUN_TIMEOUT = 30\n

Raising the timeout may help if your network has high latency or slow DNS responses.

"},{"location":"NAME_RESOLUTION/#checking-plugin-objects","title":"Checking Plugin Objects","text":"

Each plugin stores results in its respective object. You can inspect these objects to see if they contain valid name resolution data.

See Logging guide and Debug plugins guides for details.

If the object contains no results, the issue may be with DNS settings or network access.

"},{"location":"NAME_RESOLUTION/#improving-name-resolution","title":"Improving name resolution","text":"

For more details on how to improve name resolution, refer to the Reverse DNS Documentation.

"},{"location":"NETWORK_TREE/","title":"Network Topology","text":""},{"location":"NETWORK_TREE/#how-to-set-up-your-network-page","title":"How to Set Up Your Network Page","text":"

The Network page lets you map how devices connect \u2014 visually and logically. It\u2019s especially useful for planning infrastructure, assigning parent-child relationships, and spotting gaps.

To get started, you\u2019ll need to define at least one root node and mark certain devices as network nodes (like Switches or Routers).

Start by creating a root device with the MAC address Internet, if the application didn\u2019t create one already. This special MAC address (Internet) is required for the root network node \u2014 no other value is currently supported. Set its Type to a valid network type \u2014 such as Router or Gateway.

Tip

If you don\u2019t have one, use the Create new device button on the Devices page to add a root device.

"},{"location":"NETWORK_TREE/#quick-setup","title":"\u26a1 Quick Setup","text":"
  1. Open the device you want to use as a network node (e.g. a Switch).
  2. Set its Type to one of the following: AP, Firewall, Gateway, PLC, Powerline, Router, Switch, USB LAN Adapter, USB WIFI Adapter, WLAN (Or add custom types under Settings \u2192 General \u2192 NETWORK_DEVICE_TYPES.)
  3. Save the device.
  4. Go to the Network page \u2014 supported device types will appear as tabs.
  5. Use the Assign button to connect unassigned devices to a network node.
  6. If the Port is 0 or empty, a Wi-Fi icon is shown. Otherwise, an Ethernet icon appears.
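The Wi-Fi/Ethernet rule from step 6 can be sketched as a tiny predicate (illustrative only — the app applies this logic internally; `connection_icon` is a hypothetical name):

```python
def connection_icon(port) -> str:
    """Port 0 or empty -> Wi-Fi icon; any other port -> Ethernet icon."""
    return "wifi" if port in (0, "", None) else "ethernet"

print(connection_icon(0))   # wifi
print(connection_icon(8))   # ethernet
```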

Note

Use bulk editing with CSV Export to fix Internet root assignments or update many devices at once.

"},{"location":"NETWORK_TREE/#example-setting-up-a-raspberrypi-as-a-switch","title":"Example: Setting up a raspberrypi as a Switch","text":"

Let\u2019s walk through setting up a device named raspberrypi to act as a network Switch that other devices connect through.

"},{"location":"NETWORK_TREE/#1-set-device-type-and-parent","title":"1. Set Device Type and Parent","text":"

Note

Only certain device types can act as network nodes: AP, Firewall, Gateway, Hypervisor, PLC, Powerline, Router, Switch, USB LAN Adapter, USB WIFI Adapter, WLAN You can add custom types via the NETWORK_DEVICE_TYPES setting.

"},{"location":"NETWORK_TREE/#2-confirm-the-device-appears-as-a-network-node","title":"2. Confirm The Device Appears as a Network Node","text":"

You can confirm that raspberrypi now acts as a network device in two places:

"},{"location":"NETWORK_TREE/#3-assign-connected-devices","title":"3. Assign Connected Devices","text":"

Hovering over devices in the tree reveals connection details and tooltips for quick inspection.

Note

Selecting certain relationship types hides the device in the default device views. You can change this behavior by adjusting the UI_hide_rel_types setting, which by default is set to [\"nic\",\"virtual\"]. This means devices with devParentRelType set to nic or virtual will not be shown. All devices, regardless of relationship type, are always accessible in the All devices view.

"},{"location":"NETWORK_TREE/#summary","title":"\u2705 Summary","text":"

To configure devices on the Network page:

Need to reset or undo changes? Use backups or bulk editing to manage devices at scale. You can also automate device assignment with Workflows.

"},{"location":"NOTIFICATIONS/","title":"Notifications \ud83d\udce7","text":"

There are four ways to influence notifications:

  1. On the device itself
  2. On the settings of the plugin
  3. Globally
  4. Ignoring devices

Note

It's recommended to use the same schedule interval for all plugins responsible for scanning devices, otherwise false positives might be reported if different devices are discovered by different plugins. Check the Settings > Enabled settings section for a warning:

"},{"location":"NOTIFICATIONS/#device-settings","title":"Device settings \ud83d\udcbb","text":"

The following device properties influence notifications:

  1. Alert Events - Enables alerts for connections, disconnections, and IP changes (down and down reconnected notifications are still sent even if this is disabled).
  2. Alert Down - Alerts when a device goes down. This setting overrides a disabled Alert Events setting, so you will get a notification of a device going down even if you don't have Alert Events ticked. Disabling this will disable down and down reconnected notifications on the device.
  3. Skip repeated notifications - Useful if, for example, you know there is a temporary issue and want to pause the same notification for this device for a given time.
  4. Require NICs Online - Indicates whether this device should be considered online only if all associated NICs (devices with the nic relationship type) are online. If disabled, the device is considered online if any NIC is online. An online NIC sets the parent (this) device's status to online irrespective of the detected device's own status. The relationship type is set on the child device.
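The Require NICs Online rule can be sketched as follows (a minimal illustration, not the app's actual code; `device_online` and its arguments are hypothetical names):

```python
def device_online(nic_states: list, require_all_nics: bool) -> bool:
    """With the flag set, the parent device counts as online only if ALL child
    NICs are online; otherwise ANY online NIC marks the parent online."""
    if not nic_states:  # no NICs linked: the device's own status applies elsewhere
        return False
    return all(nic_states) if require_all_nics else any(nic_states)

print(device_online([True, False], require_all_nics=True))   # False
print(device_online([True, False], require_all_nics=False))  # True
```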

Note

Please read through the NTFPRCS plugin documentation to understand how device and global settings influence the notification processing.

"},{"location":"NOTIFICATIONS/#plugin-settings","title":"Plugin settings \ud83d\udd0c","text":"

Almost all plugins have two core settings: <plugin>_WATCH and <plugin>_REPORT_ON.

  1. <plugin>_WATCH specifies the columns which the app should watch. If watched columns change, the device state is considered changed. This changed status is then used to decide whether to send out notifications, based on the <plugin>_REPORT_ON setting.
  2. <plugin>_REPORT_ON lets you specify which events the app should notify you about. This is related to the <plugin>_WATCH setting: if you select watched-changed and in <plugin>_WATCH you only select Watched_Value1, then a notification is triggered when Watched_Value1 changes from its previous value, but no notification is sent if Watched_Value2 changes.
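The interplay of the two settings can be sketched like this (an illustrative model, not the app's implementation; `should_notify` is a hypothetical helper):

```python
def should_notify(old: dict, new: dict, watched: list, report_on: set) -> bool:
    """A change in any watched column marks the entry as changed; a notification
    fires only when 'watched-changed' is among the selected report events."""
    changed = any(old.get(col) != new.get(col) for col in watched)
    return changed and "watched-changed" in report_on

old = {"Watched_Value1": "up", "Watched_Value2": "80"}
new = {"Watched_Value1": "up", "Watched_Value2": "443"}
# Watched_Value2 changed, but only Watched_Value1 is watched -> no notification
print(should_notify(old, new, ["Watched_Value1"], {"watched-changed"}))  # False
```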

Click the Read more in the docs link at the top of each plugin to get more details on how the given plugin works.

"},{"location":"NOTIFICATIONS/#global-settings","title":"Global settings \u2699","text":"

In Notification Processing settings, you can specify blanket rules. These allow you to specify exceptions to the Plugin and Device settings and will override those.

  1. Notify on (NTFPRCS_INCLUDED_SECTIONS) allows you to specify which events trigger notifications. Usual setups will have new_devices, down_devices, and possibly down_reconnected set. Including plugin (dependent on the plugin <plugin>_WATCH and <plugin>_REPORT_ON settings) and events (dependent on the on-device Alert Events setting) might be too noisy for most setups. See the NTFPRCS plugin for more info on which events these selections include.
  2. Alert down after (NTFPRCS_alert_down_time) is useful if you want to wait for some time before the system sends out a down notification for a device. This is related to the on-device Alert down setting and only devices with this checked will trigger a down notification.

You can filter out unwanted notifications globally. This could be because of a misbehaving device that flips between IP addresses (e.g., Google Nest/Google Hub; see also the ARPSCAN docs and the --exclude-broadcast flag), or because you want to ignore new device notifications matching a certain pattern.

  1. Events Filter (NTFPRCS_event_condition) - Filter out Events from notifications.
  2. New Devices Filter (NTFPRCS_new_dev_condition) - Filter out New Devices from notifications, but log and keep a new device in the system.
"},{"location":"NOTIFICATIONS/#ignoring-devices","title":"Ignoring devices \ud83d\udcbb","text":"

You can completely ignore detected devices globally. This could be because your instance detects Docker containers, because you want to ignore devices from a specific manufacturer via MAC rules, or because you want to ignore devices on a specific IP range.

  1. Ignored MACs (NEWDEV_ignored_MACs) - List of MACs to ignore.
  2. Ignored IPs (NEWDEV_ignored_IPs) - List of IPs to ignore.
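The effect of these two lists can be sketched as follows (illustrative only; the matching granularity — exact MACs vs. prefixes — may differ from the app's actual behavior, and the sample values are hypothetical):

```python
import ipaddress

# Hypothetical ignore lists: a MAC prefix commonly used by Docker-generated
# interfaces, and the default Docker bridge subnet.
IGNORED_MAC_PREFIXES = {"02:42"}
IGNORED_NETS = [ipaddress.ip_network("172.17.0.0/16")]

def is_ignored(mac: str, ip: str) -> bool:
    """Return True if the device matches an ignored MAC prefix or IP range."""
    if any(mac.lower().startswith(p) for p in IGNORED_MAC_PREFIXES):
        return True
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in IGNORED_NETS)

print(is_ignored("02:42:ac:11:00:02", "172.17.0.2"))    # True (Docker container)
print(is_ignored("dc:a6:32:01:02:03", "192.168.1.50"))  # False
```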
"},{"location":"PERFORMANCE/","title":"Performance Optimization Guide","text":"

There are several ways to improve the application's performance. The application has been tested on a range of devices, from a Raspberry Pi 4 to NAS and NUC systems. If you are running the application on a lower-end device, carefully fine-tune the performance settings to ensure an optimal user experience.

"},{"location":"PERFORMANCE/#common-causes-of-slowness","title":"Common Causes of Slowness","text":"

Performance issues are usually caused by:

The application performs regular maintenance and database cleanup. If these tasks fail, performance may degrade.

"},{"location":"PERFORMANCE/#database-and-log-file-size","title":"Database and Log File Size","text":"

A large database or oversized log files can slow down performance. You can check database and table sizes on the Maintenance page.

Note

"},{"location":"PERFORMANCE/#maintenance-plugins","title":"Maintenance Plugins","text":"

Two plugins help maintain the application\u2019s performance:

"},{"location":"PERFORMANCE/#1-database-cleanup-dbclnp","title":"1. Database Cleanup (DBCLNP)","text":""},{"location":"PERFORMANCE/#2-maintenance-maint","title":"2. Maintenance (MAINT)","text":""},{"location":"PERFORMANCE/#scan-frequency-and-coverage","title":"Scan Frequency and Coverage","text":"

Frequent scans increase resource usage, network traffic, and database read/write cycles.

"},{"location":"PERFORMANCE/#optimizations","title":"Optimizations","text":"

Some plugins have additional options to limit the number of scanned devices. If certain plugins take too long to complete, check if you can optimize scan times by selecting a scan range.

For example, the ICMP plugin allows you to specify a regular expression to scan only IPs that match a specific pattern.
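For instance, a pattern like the hypothetical one below would restrict pings to two /24 subnets (check the ICMP plugin docs for the exact setting name and syntax):

```python
import re

# Hypothetical pattern: only ping IPs in 192.168.1.0/24 and 192.168.2.0/24
SCAN_PATTERN = r"^192\.168\.[12]\."

def in_scan_range(ip: str) -> bool:
    """Return True if the IP matches the scan-range regular expression."""
    return re.match(SCAN_PATTERN, ip) is not None

ips = ["192.168.1.10", "192.168.2.5", "10.0.0.3"]
print([ip for ip in ips if in_scan_range(ip)])  # ['192.168.1.10', '192.168.2.5']
```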

"},{"location":"PERFORMANCE/#storing-temporary-files-in-memory","title":"Storing Temporary Files in Memory","text":"

On systems with slower I/O speeds, you can optimize performance by storing temporary files in memory. This primarily applies to the API directory (default: /tmp/api, configurable via NETALERTX_API) and /tmp/log folders.

Using tmpfs reduces disk writes and improves performance. However, it should be disabled if persistent logs or API data storage are required.

Below is an optimized docker-compose.yml snippet:

version: \"3\"\nservices:\n  netalertx:\n    container_name: netalertx\n    # Uncomment the line below to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\"\n    image: \"ghcr.io/jokob-sk/netalertx:latest\"      \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - local/path/config:/data/config\n      - local/path/db:/data/db      \n      # (Optional) Useful for debugging setup issues\n      - local/path/logs:/tmp/log\n      # (API: OPTION 1) Store temporary files in memory (recommended for performance)\n      - type: tmpfs              # \u25c0 \ud83d\udd3a\n        target: /tmp/api         # \u25c0 \ud83d\udd3a\n      # (API: OPTION 2) Store API data on disk (useful for debugging)\n      # - local/path/api:/tmp/api\n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n\n
"},{"location":"PIHOLE_GUIDE/","title":"Integration with PiHole","text":"

NetAlertX comes with two plugins for integrating with your existing PiHole instance. One uses a direct SQLite DB connection; the other reads the dhcp.leases file generated by PiHole. You can combine both approaches and supplement them with other plugins.

"},{"location":"PIHOLE_GUIDE/#approach-1-dhcplss-plugin-import-devices-from-the-pihole-dhcp-leases-file","title":"Approach 1: DHCPLSS Plugin - Import devices from the PiHole DHCP leases file","text":""},{"location":"PIHOLE_GUIDE/#settings","title":"Settings","text":"Setting Description Recommended value DHCPLSS_RUN When the plugin should run. schedule DHCPLSS_RUN_SCHD If you run multiple device scanner plugins, align the schedules of all plugins to the same value. */5 * * * * DHCPLSS_paths_to_check You need to map the value in this setting in the docker-compose.yml file. The in-container path must contain pihole so it's parsed correctly. ['/etc/pihole/dhcp.leases']

Check the DHCPLSS plugin readme for details

"},{"location":"PIHOLE_GUIDE/#docker-compose-changes","title":"docker-compose changes","text":"Path Description :/etc/pihole/dhcp.leases PiHole's dhcp.leases file. Required if you want to use PiHole dhcp.leases file. This has to be matched with a corresponding DHCPLSS_paths_to_check setting entry (the path in the container must contain pihole)"},{"location":"PIHOLE_GUIDE/#approach-2-pihole-plugin-import-devices-directly-from-the-pihole-database","title":"Approach 2: PIHOLE Plugin - Import devices directly from the PiHole database","text":"Setting Description Recommended value PIHOLE_RUN When the plugin should run. schedule PIHOLE_RUN_SCHD If you run multiple device scanner plugins, align the schedules of all plugins to the same value. */5 * * * * PIHOLE_DB_PATH You need to map the value in this setting in the docker-compose.yml file. /etc/pihole/pihole-FTL.db

Check the PiHole plugin readme for details

"},{"location":"PIHOLE_GUIDE/#docker-compose-changes_1","title":"docker-compose changes","text":"Path Description :/etc/pihole/pihole-FTL.db PiHole's pihole-FTL.db database file.

Check out other plugins that can help you discover more about your network or check how to scan Remote networks.

"},{"location":"PLUGINS/","title":"\ud83d\udd0c Plugins","text":"

NetAlertX supports additional plugins to extend its functionality, each with its own settings and options. Plugins can be loaded via the General -> LOADED_PLUGINS setting. For custom plugin development, refer to the Plugin development guide.

Note

Please check this Plugins debugging guide and the corresponding Plugin documentation in the table below if you are facing issues.

"},{"location":"PLUGINS/#quick-start","title":"\u26a1 Quick start","text":"

Tip

You can load additional Plugins via the General -> LOADED_PLUGINS setting. You need to save the settings for the new plugins to load (cache/page reload may be necessary).

  1. Pick your \ud83d\udd0d dev scanner plugin (e.g. ARPSCAN or NMAPDEV), or import devices into the application with an \ud83d\udce5 importer plugin. (See Enabling plugins below)
  2. Pick a \u25b6\ufe0f publisher plugin, if you want to send notifications. If you don't see a publisher you'd like to use, look at the \ud83d\udcda_publisher_apprise plugin which is a proxy for over 80 notification services.
  3. Setup your Network topology diagram
  4. Fine-tune Notifications
  5. Setup Workflows
  6. Backup your setup
  7. Contribute and Create custom plugins
"},{"location":"PLUGINS/#plugin-types","title":"Plugin types","text":"Plugin type Icon Description When to run Required Data source ? publisher \u25b6\ufe0f Sending notifications to services. on_notification \u2716 Script dev scanner \ud83d\udd0d Create devices in the app, manages online/offline device status. schedule \u2716 Script / SQLite DB name discovery \ud83c\udd8e Discovers names of devices via various protocols. before_name_updates, schedule \u2716 Script importer \ud83d\udce5 Importing devices from another service. schedule \u2716 Script / SQLite DB system \u2699 Providing core system functionality. schedule / always on \u2716/\u2714 Script / Template other \u267b Other plugins misc \u2716 Script / Template"},{"location":"PLUGINS/#features","title":"Features","text":"Icon Description \ud83d\udda7 Auto-imports the network topology diagram \ud83d\udd04 Has the option to sync some data back into the plugin source"},{"location":"PLUGINS/#available-plugins","title":"Available Plugins","text":"

Device-detecting plugins insert values into the CurrentScan database table. Plugins that are not required are safe to ignore; however, it makes sense to have at least some device-detecting plugins enabled, such as ARPSCAN or NMAPDEV.

ID Plugin docs Type Description Features Required APPRISE _publisher_apprise \u25b6\ufe0f Apprise notification proxy ARPSCAN arp_scan \ud83d\udd0d ARP-scan on current network AVAHISCAN avahi_scan \ud83c\udd8e Avahi (mDNS-based) name resolution ASUSWRT asuswrt_import \ud83d\udd0d Import connected devices from AsusWRT CSVBCKP csv_backup \u2699 CSV devices backup CUSTPROP custom_props \u2699 Managing custom device properties values Yes DBCLNP db_cleanup \u2699 Database cleanup Yes* DDNS ddns_update \u2699 DDNS update DHCPLSS dhcp_leases \ud83d\udd0d/\ud83d\udce5/\ud83c\udd8e Import devices from DHCP leases DHCPSRVS dhcp_servers \u267b DHCP servers DIGSCAN dig_scan \ud83c\udd8e Dig (DNS) Name resolution FREEBOX freebox \ud83d\udd0d/\u267b/\ud83c\udd8e Pull data and names from Freebox/Iliadbox ICMP icmp_scan \u267b ICMP (ping) status checker INTRNT internet_ip \ud83d\udd0d Internet IP scanner INTRSPD internet_speedtest \u267b Internet speed test IPNEIGH ipneigh \ud83d\udd0d Scan ARP (IPv4) and NDP (IPv6) tables LUCIRPC luci_import \ud83d\udd0d Import connected devices from OpenWRT MAINT maintenance \u2699 Maintenance of logs, etc. 
MQTT _publisher_mqtt \u25b6\ufe0f MQTT for synching to Home Assistant NBTSCAN nbtscan_scan \ud83c\udd8e Nbtscan (NetBIOS-based) name resolution NEWDEV newdev_template \u2699 New device template Yes NMAP nmap_scan \u267b Nmap port scanning & discovery NMAPDEV nmap_dev_scan \ud83d\udd0d Nmap dev scan on current network NSLOOKUP nslookup_scan \ud83c\udd8e NSLookup (DNS-based) name resolution NTFPRCS notification_processing \u2699 Notification processing Yes NTFY _publisher_ntfy \u25b6\ufe0f NTFY notifications OMDSDN omada_sdn_imp \ud83d\udce5/\ud83c\udd8e \u274c UNMAINTAINED use OMDSDNOPENAPI \ud83d\udda7 \ud83d\udd04 OMDSDNOPENAPI omada_sdn_openapi \ud83d\udce5/\ud83c\udd8e OMADA TP-Link import via OpenAPI \ud83d\udda7 PIHOLE pihole_scan \ud83d\udd0d/\ud83c\udd8e/\ud83d\udce5 Pi-hole device import & sync PUSHSAFER _publisher_pushsafer \u25b6\ufe0f Pushsafer notifications PUSHOVER _publisher_pushover \u25b6\ufe0f Pushover notifications SETPWD set_password \u2699 Set password Yes SMTP _publisher_email \u25b6\ufe0f Email notifications SNMPDSC snmp_discovery \ud83d\udd0d/\ud83d\udce5 SNMP device import & sync SYNC sync \ud83d\udd0d/\u2699/\ud83d\udce5 Sync & import from NetAlertX instances \ud83d\udda7 \ud83d\udd04 Yes TELEGRAM _publisher_telegram \u25b6\ufe0f Telegram notifications UI ui_settings \u267b UI specific settings Yes UNFIMP unifi_import \ud83d\udd0d/\ud83d\udce5/\ud83c\udd8e UniFi device import & sync \ud83d\udda7 UNIFIAPI unifi_api_import \ud83d\udd0d/\ud83d\udce5/\ud83c\udd8e UniFi device import (SM API, multi-site) VNDRPDT vendor_update \u2699 Vendor database update WEBHOOK _publisher_webhook \u25b6\ufe0f Webhook notifications WEBMON website_monitor \u267b Website down monitoring WOL wake_on_lan \u267b Automatic wake-on-lan

* The database cleanup plugin (DBCLNP) is not strictly required, but the app will become unusable after a while if it is never executed. \u274c = marked for removal/unmaintained - looking for maintainers. \u231a It's recommended to use the same schedule interval for all plugins responsible for discovering new devices.

"},{"location":"PLUGINS/#enabling-plugins","title":"Enabling plugins","text":"

Plugins can be enabled via Settings, and can be disabled as needed.

  1. Research which plugin you'd like to use, enable DISCOVER_PLUGINS and load the required plugins in Settings via the LOADED_PLUGINS setting.
  2. Save the changes and review the Settings of the newly loaded plugins.
  3. Change the <prefix>_RUN Setting to the recommended or custom value as per the documentation of the given setting
"},{"location":"PLUGINS/#disabling-unloading-and-ignoring-plugins","title":"Disabling, Unloading and Ignoring plugins","text":"
  1. Change the <prefix>_RUN Setting to disabled if you want to disable the plugin, but keep the settings
  2. If you want to speed up the application, you can unload the plugin by unselecting it in the LOADED_PLUGINS setting.
  3. You can completely ignore plugins by placing an ignore_plugin file into the plugin directory. Ignored plugins won't show up in the LOADED_PLUGINS setting.
"},{"location":"PLUGINS/#developing-new-custom-plugins","title":"\ud83c\udd95 Developing new custom plugins","text":"

If you want to develop a custom plugin, please read this Plugin development guide.

"},{"location":"PLUGINS_DEV/","title":"Creating a custom plugin","text":"

NetAlertX comes with a plugin system that feeds events from third-party scripts into the UI and can send notifications, if desired. The core functionality this plugin system supports is:

(Currently, update/overwriting of existing objects is only supported for devices via the CurrentScan table.)

Note

For a high-level overview of how config.json is used and its lifecycle, check the config.json Lifecycle in NetAlertX Guide.

"},{"location":"PLUGINS_DEV/#watch-the-video","title":"\ud83c\udfa5 Watch the video:","text":"

Tip

Read this guide Development environment setup guide to set up your local environment for development. \ud83d\udc69\u200d\ud83d\udcbb

"},{"location":"PLUGINS_DEV/#screenshots","title":"\ud83d\udcf8 Screenshots","text":""},{"location":"PLUGINS_DEV/#use-cases","title":"Use cases","text":"

Example use cases for plugins could be:

If you wish to develop a plugin, please check the existing plugin structure. Once settings are saved by the user, they have to be removed from the app.conf file manually if you want to re-initialize them from the plugin's config.json.

"},{"location":"PLUGINS_DEV/#disclaimer","title":"\u26a0 Disclaimer","text":"

Please read the below carefully if you'd like to contribute with a plugin yourself. This documentation file might be outdated, so double-check the sample plugins as well.

"},{"location":"PLUGINS_DEV/#plugin-file-structure-overview","title":"Plugin file structure overview","text":"

\u26a0\ufe0f The folder name must be the same as the code name value in \"code_name\": \"<value>\". The unique prefix must not collide with other plugins' settings prefixes; e.g., the prefix APPRISE is already in use.

File Required (plugin type) Description config.json yes Contains the plugin configuration (manifest) including the settings available to the user. script.py no The Python script itself. You may call any valid linux command. last_result.<prefix>.log no The file used to interface between NetAlertX and the plugin. Required for a script plugin if you want to feed data into the app. Stored in the /api/log/plugins/ script.log no Logging output (recommended) README.md yes Any setup considerations or overview

More on specifics below.

"},{"location":"PLUGINS_DEV/#column-order-and-values-plugins-interface-contract","title":"Column order and values (plugins interface contract)","text":"

Important

Spend some time reading and trying to understand the table below. It is the interface between the Plugins and the core application. The application expects 9 or 13 values. The first 9 values are mandatory. The next 4 values (HelpVal1 to HelpVal4) are optional. However, if you use any of these optional values (e.g., HelpVal1), you need to supply all optional values (e.g., HelpVal2, HelpVal3, and HelpVal4). If a value is not used, it should be padded with null.

Order Represented Column Value Required Description 0 Object_PrimaryID yes The primary ID used to group Events under. 1 Object_SecondaryID no Optional secondary ID to create a relationship between other entities, such as a MAC address 2 DateTime yes When the event occurred, in the format 2023-01-02 15:56:30 3 Watched_Value1 yes A value that is watched and users can receive notifications if it changed compared to the previously saved entry, for example an IP address 4 Watched_Value2 no As above 5 Watched_Value3 no As above 6 Watched_Value4 no As above 7 Extra no Any other data you want to pass and display in NetAlertX and the notifications 8 ForeignKey no A foreign key that can be used to link to the parent object (usually a MAC address) 9 HelpVal1 no (optional) A helper value 10 HelpVal2 no (optional) A helper value 11 HelpVal3 no (optional) A helper value 12 HelpVal4 no (optional) A helper value
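To make the contract concrete, here is a minimal sketch (plain Python, not part of the app) that builds one valid 9-value row; the URL and values are hypothetical:

```python
# Build one last_result row following the 9-value contract above.
# Unused optional fields are padded with the literal string "null".
fields = [
    "https://example.com",     # 0 Object_PrimaryID (required; hypothetical)
    "null",                    # 1 Object_SecondaryID
    "2023-01-02 15:56:30",     # 2 DateTime (required)
    "200",                     # 3 Watched_Value1 (required)
    "null",                    # 4 Watched_Value2
    "null",                    # 5 Watched_Value3
    "null",                    # 6 Watched_Value4
    "null",                    # 7 Extra
    "null",                    # 8 ForeignKey
]
row = "|".join(fields)
print(row)
```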

Note

De-duplication is run once an hour on the Plugins_Objects database table and duplicate entries with the same value in columns Object_PrimaryID, Object_SecondaryID, Plugin (auto-filled based on unique_prefix of the plugin), UserData (can be populated with the \"type\": \"textbox_save\" column type) are removed.

"},{"location":"PLUGINS_DEV/#configjson-structure","title":"config.json structure","text":"

The config.json file is the manifest of the plugin. It contains mainly settings definitions and the mapping of Plugin objects to NetAlertX objects.

"},{"location":"PLUGINS_DEV/#execution-order","title":"Execution order","text":"

The execution order is used to specify when a plugin is executed. This is useful if a plugin has access to, and surfaces, more information than others. If a device is detected by 2 plugins and inserted into the CurrentScan table, the plugin with the higher priority (e.g., Layer_0 is a higher priority than Layer_1) will insert its values first. These values (devices) are then prioritized over any values inserted later.

{\n    \"execution_order\" : \"Layer_0\"\n}\n
"},{"location":"PLUGINS_DEV/#supported-data-sources","title":"Supported data sources","text":"

Currently, these data sources are supported (valid data_source value).

Name data_source value Needs to return a \"table\"* Overview (more details on this page below) Script script no Executes any linux command in the CMD setting. NetAlertX DB query app-db-query yes Executes a SQL query on the NetAlertX database in the CMD setting. Template template no Used to generate internal settings, such as default values. External SQLite DB query sqlite-db-query yes Executes a SQL query from the CMD setting on an external SQLite database mapped in the DB_PATH setting. Plugin type plugin_type no Specifies the type of the plugin and in which section the Plugin settings are displayed ( one of general/system/scanner/other/publisher ).

\ud83d\udd0eExample json \"data_source\": \"app-db-query\" If you want to display plugin objects or import devices into the app, data sources have to return a \"table\" of the exact structure as outlined above.

You can show or hide the UI on the \"Plugins\" page and \"Plugins\" tab for a plugin on devices via the show_ui property:

\ud83d\udd0eExample json \"show_ui\": true,

"},{"location":"PLUGINS_DEV/#data_source-script","title":"\"data_source\": \"script\"","text":"

If the data_source is set to script, the CMD setting (that you specify in the settings array section of the config.json) contains an executable Linux command that usually generates a last_result.<prefix>.log file (not required if you don't import any data into the app). The last_result.<prefix>.log file needs to be saved in /api/log/plugins.

Important

A lot of the work is taken care of by the plugin_helper.py library. You don't need to manage the last_result.<prefix>.log file if using the helper objects. Check the script.py files of other plugins for details.

The content of the last_result.<prefix>.log file needs to contain the columns as defined in the \"Column order and values\" section above. The order of columns can't be changed. After every scan it should contain only the results from the latest scan/execution.

"},{"location":"PLUGINS_DEV/#last_resultprefixlog-examples","title":"\ud83d\udd0e last_result.prefix.log examples","text":"

Valid CSV:

\nhttps://www.google.com|null|2023-01-02 15:56:30|200|0.7898|null|null|null|null\nhttps://www.duckduckgo.com|192.168.0.1|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine|ff:ee:ff:11:ff:11\n\n

Invalid CSV with different errors on each line:

\nhttps://www.google.com|null|2023-01-02 15:56:30|200|0.7898||null|null|null\nhttps://www.duckduckgo.com|null|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine|\n|https://www.duckduckgo.com|null|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine|null\nnull|192.168.1.1|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine\nhttps://www.duckduckgo.com|192.168.1.1|2023-01-02 15:56:30|null|0.9898|null|null|Best search engine\nhttps://www.google.com|null|2023-01-02 15:56:30|200|0.7898|||\nhttps://www.google.com|null|2023-01-02 15:56:30|200|0.7898|\n\n
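A quick way to catch such malformed lines is to count and check the fields before writing them out. This validator is a sketch based on the contract above (9 or 13 fields, no empty fields), not part of NetAlertX itself:

```python
def is_valid_row(line: str) -> bool:
    """Return True if a last_result line has 9 or 13 fields and no empty fields."""
    fields = line.split("|")
    if len(fields) not in (9, 13):
        return False
    # absent values must be the literal "null", never an empty string
    return all(f != "" for f in fields)

print(is_valid_row("https://www.google.com|null|2023-01-02 15:56:30|200|0.7898|null|null|null|null"))
```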
"},{"location":"PLUGINS_DEV/#data_source-app-db-query","title":"\"data_source\": \"app-db-query\"","text":"

If the data_source is set to app-db-query, the CMD setting needs to contain a SQL query rendering the columns as defined in the \"Column order and values\" section above. The order of columns is important.

This SQL query is executed on the app.db SQLite database file.

\ud83d\udd0eExample

SQL query example:

SQL SELECT dv.devName as Object_PrimaryID, cast(dv.devLastIP as VARCHAR(100)) || ':' || cast( SUBSTR(ns.Port ,0, INSTR(ns.Port , '/')) as VARCHAR(100)) as Object_SecondaryID, datetime() as DateTime, ns.Service as Watched_Value1, ns.State as Watched_Value2, 'null' as Watched_Value3, 'null' as Watched_Value4, ns.Extra as Extra, dv.devMac as ForeignKey FROM (SELECT * FROM Nmap_Scan) ns LEFT JOIN (SELECT devName, devMac, devLastIP FROM Devices) dv ON ns.MAC = dv.devMac

Required CMD setting example with the above query (you can set \"type\": \"label\" if you want to make it uneditable in the UI):

json { \"function\": \"CMD\", \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"input\", \"elementOptions\" : [] ,\"transformers\": []}]}, \"default_value\":\"SELECT dv.devName as Object_PrimaryID, cast(dv.devLastIP as VARCHAR(100)) || ':' || cast( SUBSTR(ns.Port ,0, INSTR(ns.Port , '/')) as VARCHAR(100)) as Object_SecondaryID, datetime() as DateTime, ns.Service as Watched_Value1, ns.State as Watched_Value2, 'null' as Watched_Value3, 'null' as Watched_Value4, ns.Extra as Extra FROM (SELECT * FROM Nmap_Scan) ns LEFT JOIN (SELECT devName, devMac, devLastIP FROM Devices) dv ON ns.MAC = dv.devMac\", \"options\": [], \"localized\": [\"name\", \"description\"], \"name\" : [{ \"language_code\":\"en_us\", \"string\" : \"SQL to run\" }], \"description\": [{ \"language_code\":\"en_us\", \"string\" : \"This SQL query is used to populate the corresponding UI tables under the Plugins section.\" }] }

"},{"location":"PLUGINS_DEV/#data_source-template","title":"\"data_source\": \"template\"","text":"

In most cases, it is used to initialize settings. Check the newdev_template plugin for details.

"},{"location":"PLUGINS_DEV/#data_source-sqlite-db-query","title":"\"data_source\": \"sqlite-db-query\"","text":"

You can execute a SQL query on an external database connected to the current NetAlertX database via a temporary EXTERNAL_<unique prefix>. prefix.

For example for PIHOLE (\"unique_prefix\": \"PIHOLE\") it is EXTERNAL_PIHOLE.. The external SQLite database file has to be mapped in the container to the path specified in the DB_PATH setting:

\ud83d\udd0eExample

json ... { \"function\": \"DB_PATH\", \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"input\", \"elementOptions\" : [{\"readonly\": \"true\"}] ,\"transformers\": []}]}, \"default_value\":\"/etc/pihole/pihole-FTL.db\", \"options\": [], \"localized\": [\"name\", \"description\"], \"name\" : [{ \"language_code\":\"en_us\", \"string\" : \"DB Path\" }], \"description\": [{ \"language_code\":\"en_us\", \"string\" : \"Required setting for the <code>sqlite-db-query</code> plugin type. Is used to mount an external SQLite database and execute the SQL query stored in the <code>CMD</code> setting.\" }] } ...

The actual SQL query you want to execute is then stored as a CMD setting, similar to a Plugin of the app-db-query plugin type. The format has to adhere to the format outlined in the \"Column order and values\" section above.

\ud83d\udd0eExample

Notice the EXTERNAL_PIHOLE. prefix.

json { \"function\": \"CMD\", \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"input\", \"elementOptions\" : [] ,\"transformers\": []}]}, \"default_value\":\"SELECT hwaddr as Object_PrimaryID, cast('http://' || (SELECT ip FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1) as VARCHAR(100)) || ':' || cast( SUBSTR((SELECT name FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1), 0, INSTR((SELECT name FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1), '/')) as VARCHAR(100)) as Object_SecondaryID, datetime() as DateTime, macVendor as Watched_Value1, lastQuery as Watched_Value2, (SELECT name FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1) as Watched_Value3, 'null' as Watched_Value4, '' as Extra, hwaddr as ForeignKey FROM EXTERNAL_PIHOLE.network WHERE hwaddr NOT LIKE 'ip-%' AND hwaddr <> '00:00:00:00:00:00'; \", \"options\": [], \"localized\": [\"name\", \"description\"], \"name\" : [{ \"language_code\":\"en_us\", \"string\" : \"SQL to run\" }], \"description\": [{ \"language_code\":\"en_us\", \"string\" : \"This SQL query is used to populate the corresponding UI tables under the Plugins section. This particular one selects data from a mapped PiHole SQLite database and maps it to the corresponding Plugin columns.\" }] }
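Under the hood this corresponds to SQLite's ATTACH mechanism. The following standalone sketch (using in-memory databases in place of app.db and pihole-FTL.db, with a minimal stand-in table) illustrates the aliasing; it is not NetAlertX code:

```python
import sqlite3

# Sketch of the EXTERNAL_<prefix> aliasing used by sqlite-db-query plugins.
con = sqlite3.connect(":memory:")  # stands in for the NetAlertX app.db
con.execute("ATTACH DATABASE ':memory:' AS EXTERNAL_PIHOLE")

# Minimal stand-in for PiHole's `network` table
con.execute("CREATE TABLE EXTERNAL_PIHOLE.network (hwaddr TEXT)")
con.execute("INSERT INTO EXTERNAL_PIHOLE.network VALUES ('aa:bb:cc:dd:ee:ff')")

# Queries can now reference the external DB via the EXTERNAL_PIHOLE. prefix
rows = con.execute("SELECT hwaddr FROM EXTERNAL_PIHOLE.network").fetchall()
print(rows)
```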

"},{"location":"PLUGINS_DEV/#filters","title":"\ud83d\udd73 Filters","text":"

Plugin entries can be filtered in the UI based on values entered into filter fields. The txtMacFilter textbox/field contains the MAC address of the currently viewed device, or simply a MAC address that's available in the mac query string (<url>?mac=aa:22:aa:22:aa:22).

Property Required Description compare_column yes Plugin column name whose value is used for comparison (left side of the equation) compare_operator yes JavaScript comparison operator compare_field_id yes The id of an input text field whose value is used for comparison (right side of the equation) compare_js_template yes JavaScript code used to convert the left and right side of the equation. {value} is replaced with the input values. compare_use_quotes yes If true, the end result of the compare_js_template is wrapped in \" quotes. Used to compare strings.

Filters are only applied if a filter is specified and txtMacFilter is not undefined or empty (--).

\ud83d\udd0eExample:

json \"data_filters\": [ { \"compare_column\" : \"Object_PrimaryID\", \"compare_operator\" : \"==\", \"compare_field_id\": \"txtMacFilter\", \"compare_js_template\": \"'{value}'.toString()\", \"compare_use_quotes\": true } ],

  1. On the pluginsCore.php page is an input field with the id txtMacFilter:

html <input class=\"form-control\" id=\"txtMacFilter\" type=\"text\" value=\"--\">

  2. This input field is initialized via the &mac= query string.

  3. The app then proceeds to use the MAC value from this field and compares it to the value of the Object_PrimaryID database field. The compare_operator is ==.

  4. Both values, from the database field Object_PrimaryID and from the txtMacFilter field, are wrapped and evaluated with the compare_js_template, that is '{value}'.toString().

  5. compare_use_quotes is set to true, so '{value}'.toString() is wrapped in \" quotes.

  6. This results in, for example, the following code:

javascript // the left part of the expression comes from compare_column, the right from the input field // notice the added quotes (\") around the left and right part of the expression eval(\"'ac:82:ac:82:ac:82'.toString()\" == \"'ac:82:ac:82:ac:82'.toString()\")
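In plain Python terms, the template substitution and quoting described above amount to roughly the following sketch (not app code):

```python
def build_side(template: str, value: str, use_quotes: bool) -> str:
    # Substitute {value} into compare_js_template; if compare_use_quotes is
    # true, wrap the result in double quotes so strings compare correctly.
    expr = template.replace("{value}", value)
    return f'"{expr}"' if use_quotes else expr

left = build_side("'{value}'.toString()", "ac:82:ac:82:ac:82", True)
right = build_side("'{value}'.toString()", "ac:82:ac:82:ac:82", True)
print(left + " == " + right)
```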

"},{"location":"PLUGINS_DEV/#mapping-the-plugin-results-into-a-database-table","title":"\ud83d\uddfa Mapping the plugin results into a database table","text":"

Plugin results are always inserted into the standard Plugin_Objects database table. Optionally, NetAlertX can take the results of the plugin execution and insert them into an additional database table. This is enabled with the \"mapped_to_table\" property in the config.json file. The mapping of the columns is defined in the database_column_definitions array.

Note

If results are mapped to the CurrentScan table, the data is included in the regular scan loop, so, for example, notifications for devices are sent out.

\ud83d\udd0d Example:

For example, this approach is used to implement the DHCPLSS plugin. The script parses all supplied \"dhcp.leases\" files, gets the results in the generic table format outlined in the \"Column order and values\" section above, takes individual values, and inserts them into the CurrentScan database table in the NetAlertX database. All this is achieved by:

  1. Specifying the database table into which the results are inserted by defining \"mapped_to_table\": \"CurrentScan\" in the root of the config.json file as shown below:

json { \"code_name\": \"dhcp_leases\", \"unique_prefix\": \"DHCPLSS\", ... \"data_source\": \"script\", \"localized\": [\"display_name\", \"description\", \"icon\"], \"mapped_to_table\": \"CurrentScan\", ... } 2. Defining the target column with the mapped_to_column property for individual columns in the database_column_definitions array of the config.json file. For example in the DHCPLSS plugin, I needed to map the value of the Object_PrimaryID column returned by the plugin, to the cur_MAC column in the NetAlertX database table CurrentScan. Notice the \"mapped_to_column\": \"cur_MAC\" key-value pair in the sample below.

json { \"column\": \"Object_PrimaryID\", \"mapped_to_column\": \"cur_MAC\", \"css_classes\": \"col-sm-2\", \"show\": true, \"type\": \"device_mac\", \"default_value\":\"\", \"options\": [], \"localized\": [\"name\"], \"name\":[{ \"language_code\":\"en_us\", \"string\" : \"MAC address\" }] }

  3. That's it. The app takes care of the rest. It loops through the objects discovered by the plugin, takes the results line-by-line, and inserts them into the database table specified in \"mapped_to_table\". The columns are translated from the generic plugin columns to the target table columns via the \"mapped_to_column\" property in the column definitions.
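The column translation described above can be sketched as a simple dictionary mapping (cur_MAC comes from the example above; cur_IP and the sample values are assumed here for illustration):

```python
# Sketch: translate one generic plugin result row into target-table columns
# via the mapped_to_column definitions from config.json.
column_definitions = [
    {"column": "Object_PrimaryID", "mapped_to_column": "cur_MAC"},
    {"column": "Watched_Value1",   "mapped_to_column": "cur_IP"},  # assumed column
]
plugin_row = {"Object_PrimaryID": "aa:bb:cc:dd:ee:ff", "Watched_Value1": "192.168.1.10"}

mapped = {d["mapped_to_column"]: plugin_row[d["column"]] for d in column_definitions}
print(mapped)
```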

Note

You can create a column mapping with a default value via the mapped_to_column_data property. This means that the value of the given column will always be this value. That also means that the \"column\": \"NameDoesntMatter\" is not important as there is no database source column.

\ud83d\udd0d Example:

json { \"column\": \"NameDoesntMatter\", \"mapped_to_column\": \"cur_ScanMethod\", \"mapped_to_column_data\": { \"value\": \"DHCPLSS\" }, \"css_classes\": \"col-sm-2\", \"show\": true, \"type\": \"device_mac\", \"default_value\":\"\", \"options\": [], \"localized\": [\"name\"], \"name\":[{ \"language_code\":\"en_us\", \"string\" : \"MAC address\" }] }

"},{"location":"PLUGINS_DEV/#params","title":"params","text":"

Important

An easier way to access settings in scripts is the get_setting_value method:

python from helper import get_setting_value

... NTFY_TOPIC = get_setting_value('NTFY_TOPIC') ...

The params array in the config.json is used to enable the user to change the parameters of the executed script. For example, the user wants to monitor a specific URL.

\ud83d\udd0e Example: Passing user-defined settings to a command. Let's say you want a script that is called with a user-defined parameter called urls:

bash root@server# python3 /app/front/plugins/website_monitor/script.py urls=https://google.com,https://duck.com

{\n    \"params\" : [\n        {\n            \"name\"  : \"urls\",\n            \"type\"  : \"setting\",\n            \"value\" : \"WEBMON_urls_to_check\"\n        }]\n}\n
 {\n            \"function\": \"CMD\",\n            \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"input\", \"elementOptions\" : [] ,\"transformers\": []}]},\n            \"default_value\":\"python3 /app/front/plugins/website_monitor/script.py urls={urls}\",\n            \"options\": [],\n            \"localized\": [\"name\", \"description\"],\n            \"name\" : [{\n                \"language_code\":\"en_us\",\n                \"string\" : \"Command\"\n            }],\n            \"description\": [{\n                \"language_code\":\"en_us\",\n                \"string\" : \"Command to run\"\n            }]\n        }\n

During script execution, the app will take the command \"python3 /app/front/plugins/website_monitor/script.py urls={urls}\", take the {urls} wildcard and replace it with the value from the WEBMON_urls_to_check setting. This is because:

  1. The app checks the params entries
  2. It finds \"name\" : \"urls\"
  3. Checks the type of the urls params and finds \"type\" : \"setting\"
  4. Gets the setting name from \"value\" : \"WEBMON_urls_to_check\"
  5. IMPORTANT: in the config.json this setting is identified by \"function\":\"urls_to_check\", not \"function\":\"WEBMON_urls_to_check\"
  6. You can also use a global setting, or a setting from a different plugin
  7. The app gets the user defined value from the setting with the code name WEBMON_urls_to_check
  8. let's say the setting with the code name WEBMON_urls_to_check contains 2 values entered by the user:
  9. WEBMON_urls_to_check=['https://google.com','https://duck.com']
  10. The app takes the value from WEBMON_urls_to_check and replaces the {urls} wildcard in the setting where \"function\":\"CMD\", so you go from:
  11. python3 /app/front/plugins/website_monitor/script.py urls={urls}
  12. to
  13. python3 /app/front/plugins/website_monitor/script.py urls=https://google.com,https://duck.com

Below are some additional notes on defining params:

\ud83d\udd0eExample:

json { \"params\" : [{ \"name\" : \"ips\", \"type\" : \"sql\", \"value\" : \"SELECT devLastIP from DEVICES\", \"timeoutMultiplier\" : true }, { \"name\" : \"macs\", \"type\" : \"sql\", \"value\" : \"SELECT devMac from DEVICES\" }, { \"name\" : \"timeout\", \"type\" : \"setting\", \"value\" : \"NMAP_RUN_TIMEOUT\" }, { \"name\" : \"args\", \"type\" : \"setting\", \"value\" : \"NMAP_ARGS\", \"base64\" : true }] }

"},{"location":"PLUGINS_DEV/#setting-object-structure","title":"\u2699 Setting object structure","text":"

Note

The settings flow and when Plugin specific settings are applied is described under the Settings system.

Required attributes are:

Property Description \"function\" Specifies the function the setting drives or a simple unique code name. See Supported settings function values for options. \"type\" Specifies the form control used for the setting displayed in the Settings page and what values are accepted. Supported options include: - {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"input\", \"elementOptions\" : [{\"type\":\"password\"}] ,\"transformers\": [\"sha256\"]}]} \"localized\" A list of properties on the current JSON level that need to be localized. \"name\" Displayed on the Settings page. An array of localized strings. See Localized strings below. \"description\" Displayed on the Settings page. An array of localized strings. See Localized strings below. (optional) \"events\" Specifies whether to generate an execution button next to the input field of the setting. Supported values: - \"test\" - For notification plugins testing - \"run\" - Regular plugins testing (optional) \"override_value\" Used to determine a user-defined override for the setting. Useful for template-based plugins, where you can choose to leave the current value or override it with the value defined in the setting. (Work in progress) (optional) \"events\" Used to trigger the plugin. Usually used on the RUN setting. Not fully tested in all scenarios. Will show a play button next to the setting. After clicking, an event is generated for the backend in the Parameters database table to process the front-end event on the next run."},{"location":"PLUGINS_DEV/#ui-component-types-documentation","title":"UI Component Types Documentation","text":"

This section outlines the structure and types of UI components, primarily used to build HTML forms or interactive elements dynamically. Each UI component has a \"type\" which defines its structure, behavior, and rendering options.

"},{"location":"PLUGINS_DEV/#ui-component-json-structure","title":"UI Component JSON Structure","text":"

The UI component is defined as a JSON object containing a list of elements. Each element specifies how it should behave, with properties like elementType, elementOptions, and any associated transformers to modify the data. The example below demonstrates how a component with two elements (span and select) is structured:

{\n      \"function\": \"devIcon\",\n      \"type\": {\n        \"dataType\": \"string\",\n        \"elements\": [\n          {\n            \"elementType\": \"span\",\n            \"elementOptions\": [\n              { \"cssClasses\": \"input-group-addon iconPreview\" },\n              { \"getStringKey\": \"Gen_SelectToPreview\" },\n              { \"customId\": \"NEWDEV_devIcon_preview\" }\n            ],\n            \"transformers\": []\n          },\n          {\n            \"elementType\": \"select\",\n            \"elementHasInputValue\": 1,\n            \"elementOptions\": [\n              { \"cssClasses\": \"col-xs-12\" },\n              {\n                \"onChange\": \"updateIconPreview(this)\"\n              },\n              { \"customParams\": \"NEWDEV_devIcon,NEWDEV_devIcon_preview\" }\n            ],\n            \"transformers\": []\n          }          \n        ]\n      }\n}\n\n
"},{"location":"PLUGINS_DEV/#rendering-logic","title":"Rendering Logic","text":"

The code snippet provided demonstrates how the elements are iterated over to generate their corresponding HTML. Depending on the elementType, different HTML tags (like <select>, <input>, <textarea>, <button>, etc.) are created with the respective attributes such as onChange, my-data-type, and class based on the provided elementOptions. Events can also be attached to elements like buttons or select inputs.
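As a simplified illustration of that iteration (the actual renderer is front-end JavaScript; the Python sketch below mirrors the JSON element structure from the example above, and render_element is a hypothetical helper, not part of the app):

```python
def render_element(el):
    """Build an HTML tag from elementType and flatten elementOptions into
    attributes. Simplified: the real renderer also wires up events,
    transformers, and value binding."""
    attrs = {}
    for opt in el.get("elementOptions", []):
        attrs.update(opt)
    # "cssClasses" maps to the HTML class attribute; other keys pass through
    rendered = " ".join(
        f'{"class" if k == "cssClasses" else k}="{v}"' for k, v in attrs.items()
    )
    tag = el["elementType"]
    return f"<{tag} {rendered}></{tag}>"

span = {
    "elementType": "span",
    "elementOptions": [{"cssClasses": "input-group-addon iconPreview"}],
}
print(render_element(span))  # <span class="input-group-addon iconPreview"></span>
```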

"},{"location":"PLUGINS_DEV/#key-element-types","title":"Key Element Types","text":"

Each element may also have associated events (e.g., running a scan or triggering a notification) defined under Events.

"},{"location":"PLUGINS_DEV/#supported-settings-function-values","title":"Supported settings function values","text":"

You can use any custom name for \"function\" (e.g. \"function\": \"my_custom_name\"); however, the names listed below have specific functionality attached to them.

Setting Description RUN (required) Specifies when the service is executed. Supported Options: - \"disabled\" - do not run - \"once\" - run on app start or on settings saved - \"schedule\" - if included, then a RUN_SCHD setting needs to be specified to determine the schedule - \"always_after_scan\" - run always after a scan is finished - \"before_name_updates\" - run before device names are updated (for name discovery plugins) - \"on_new_device\" - run when a new device is detected - \"before_config_save\" - run before the config is marked as saved. Useful if your plugin needs to modify the app.conf file. RUN_SCHD (required if you include \"schedule\" in the above RUN function) Cron-like scheduling is used if the RUN setting is set to schedule. CMD (required) Specifies the command that should be executed. API_SQL (not implemented) Generates a table_ + code_name + .json file as per API docs. RUN_TIMEOUT (optional) Specifies the maximum execution time of the script. If not specified, a default value of 10 seconds is used to prevent hanging. WATCH (optional) Specifies which database columns are watched for changes for this particular plugin. If not specified, no notifications are sent. REPORT_ON (optional) Specifies when to send a notification. Supported options are: - new means a new unique (unique combination of PrimaryId and SecondaryId) object was discovered. - watched-changed - means that selected Watched_ValueN columns changed - watched-not-changed - reports even on events where selected Watched_ValueN did not change - missing-in-last-scan - if the object is missing compared to previous scans

\ud83d\udd0e Example:

json { \"function\": \"RUN\", \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"select\", \"elementOptions\" : [] ,\"transformers\": []}]}, \"default_value\":\"disabled\", \"options\": [\"disabled\", \"once\", \"schedule\", \"always_after_scan\", \"on_new_device\"], \"localized\": [\"name\", \"description\"], \"name\" :[{ \"language_code\":\"en_us\", \"string\" : \"When to run\" }], \"description\": [{ \"language_code\":\"en_us\", \"string\" : \"Enable a regular scan of your services. If you select <code>schedule</code> the scheduling settings from below are applied. If you select <code>once</code> the scan is run only once on start of the application (container) for the time specified in <a href=\\\"#WEBMON_RUN_TIMEOUT\\\"><code>WEBMON_RUN_TIMEOUT</code> setting</a>.\" }] }

"},{"location":"PLUGINS_DEV/#localized-strings","title":"\ud83c\udf0dLocalized strings","text":"

\ud83d\udd0e Example:

```json

{\n    \"language_code\":\"en_us\",\n    \"string\" : \"When to run\"\n}\n

```

"},{"location":"PLUGINS_DEV/#ui-settings-in-database_column_definitions","title":"UI settings in database_column_definitions","text":"

The UI adjusts how columns are displayed based on the resolvers defined in the database_column_definitions object. These are the supported form controls and related functionality:

Supported Types Description label Displays a column only. textarea_readonly Generates a read-only text area and cleans up the text to display it somewhat formatted with new lines preserved. See below for information on threshold, replace. options Property Used in conjunction with types like threshold, replace, regex. options_params Property Used in conjunction with an \"options\": \"[{value}]\" template and text.select/list.select. Can specify an SQL query (needs to return 2 columns, e.g. SELECT devName as name, devMac as id) or a Setting (not tested) to populate the dropdown. Check the example below or have a look at the NEWDEV plugin config.json file. threshold The options array contains objects ordered from the lowest maximum to the highest. The corresponding hexColor is used for the value background color if the value is less than the specified maximum but more than the previous one in the options array. replace The options array contains objects with an equals property, which is compared to the value. If they are the same, the string in replacement is displayed in the UI instead of the actual value. regex Applies a regex to the value. The options array contains objects with a type property (must be set to regex) and a param property (containing the regex itself). Type Definitions device_mac The value is considered to be a MAC address, and a link pointing to the device with the given MAC address is generated. device_ip The value is considered to be an IP address. A link pointing to the device with the given IP is generated. The IP is checked against the last detected IP address and translated into a MAC address, which is then used for the link itself. device_name_mac The value is considered to be a MAC address, and a link pointing to the device with the given MAC is generated. The link label is resolved as the target device name. url The value is considered to be a URL, so a link is generated. 
textbox_save Generates an editable and saveable text box that stores values in the database. Primarily intended for the UserData database column in the Plugins_Objects table. url_http_https Generates two links with the https and http prefix as lock icons. eval Evaluates as JavaScript. Use the variable value to use the given column value as input (e.g. '<b>${value}</b>' (replace ' with ` in your code))

Note

Supports chaining. You can chain multiple resolvers with a . (dot). For example, regex.url_http_https applies the regex resolver first and then the url_http_https resolver.

        \"function\": \"devType\",\n        \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"select\", \"elementOptions\" : [] ,\"transformers\": []}]},\n        \"maxLength\": 30,\n        \"default_value\": \"\",\n        \"options\": [\"{value}\"],\n        \"options_params\" : [\n            {\n                \"name\"  : \"value\",\n                \"type\"  : \"sql\",\n                \"value\" : \"SELECT '' as id, '' as name UNION SELECT devType as id, devType as name FROM (SELECT devType FROM Devices UNION SELECT 'Smartphone' UNION SELECT 'Tablet' UNION SELECT 'Laptop' UNION SELECT 'PC' UNION SELECT 'Printer' UNION SELECT 'Server' UNION SELECT 'NAS' UNION SELECT 'Domotic' UNION SELECT 'Game Console' UNION SELECT 'SmartTV' UNION SELECT 'Clock' UNION SELECT 'House Appliance' UNION SELECT 'Phone' UNION SELECT 'AP' UNION SELECT 'Gateway' UNION SELECT 'Firewall' UNION SELECT 'Switch' UNION SELECT 'WLAN' UNION SELECT 'Router' UNION SELECT 'Other') AS all_devices ORDER BY id;\"\n            },\n            {\n                \"name\"  : \"uilang\",\n                \"type\"  : \"setting\",\n                \"value\" : \"UI_LANG\"\n            }\n        ]\n
{\n            \"column\": \"Watched_Value1\",\n            \"css_classes\": \"col-sm-2\",\n            \"show\": true,\n            \"type\": \"threshold\",            \n            \"default_value\":\"\",\n            \"options\": [\n                {\n                    \"maximum\": 199,\n                    \"hexColor\": \"#792D86\"                \n                },\n                {\n                    \"maximum\": 299,\n                    \"hexColor\": \"#5B862D\"\n                },\n                {\n                    \"maximum\": 399,\n                    \"hexColor\": \"#7D862D\"\n                },\n                {\n                    \"maximum\": 499,\n                    \"hexColor\": \"#BF6440\"\n                },\n                {\n                    \"maximum\": 599,\n                    \"hexColor\": \"#D33115\"\n                }\n            ],\n            \"localized\": [\"name\"],\n            \"name\":[{\n                \"language_code\":\"en_us\",\n                \"string\" : \"Status code\"\n                }]\n        },        \n        {\n            \"column\": \"Status\",\n            \"show\": true,\n            \"type\": \"replace\",            \n            \"default_value\":\"\",\n            \"options\": [\n                {\n                    \"equals\": \"watched-not-changed\",\n                    \"replacement\": \"<i class='fa-solid fa-square-check'></i>\"\n                },\n                {\n                    \"equals\": \"watched-changed\",\n                    \"replacement\": \"<i class='fa-solid fa-triangle-exclamation'></i>\"\n                },\n                {\n                    \"equals\": \"new\",\n                    \"replacement\": \"<i class='fa-solid fa-circle-plus'></i>\"\n                }\n            ],\n            \"localized\": [\"name\"],\n            \"name\":[{\n                \"language_code\":\"en_us\",\n                \"string\" : \"Status\"\n                }]\n       
 },\n        {\n            \"column\": \"Watched_Value3\",\n            \"css_classes\": \"col-sm-1\",\n            \"show\": true,\n            \"type\": \"regex.url_http_https\",            \n            \"default_value\":\"\",\n            \"options\": [\n                {\n                    \"type\": \"regex\",\n                    \"param\": \"([\\\\d.:]+)\"\n                }          \n            ],\n            \"localized\": [\"name\"],\n            \"name\":[{\n                \"language_code\":\"en_us\",\n                \"string\" : \"HTTP/s links\"\n                },\n                {\n                \"language_code\":\"es_es\",\n                \"string\" : \"N/A\"\n                }]\n        }\n
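The threshold and replace resolvers in the examples above follow simple selection rules, sketched here in Python (illustrative helpers only, not the app's actual code):

```python
def resolve_threshold(value, options):
    """'threshold' resolver: options are ordered from the lowest 'maximum'
    to the highest; the first maximum the value does not exceed supplies
    the background hexColor."""
    for opt in options:
        if value <= opt["maximum"]:
            return opt["hexColor"]
    return None  # value above every maximum

def resolve_replace(value, options):
    """'replace' resolver: if an option's 'equals' matches the value,
    its 'replacement' is displayed instead of the raw value."""
    for opt in options:
        if opt["equals"] == value:
            return opt["replacement"]
    return value

status_colors = [
    {"maximum": 199, "hexColor": "#792D86"},
    {"maximum": 299, "hexColor": "#5B862D"},
    {"maximum": 399, "hexColor": "#7D862D"},
]
print(resolve_threshold(204, status_colors))  # #5B862D
```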
"},{"location":"PLUGINS_DEV_CONFIG/","title":"Plugins Implementation Details","text":"

Plugins provide data to the NetAlertX core, which processes it to detect changes, discover new devices, raise alerts, and apply heuristics.

"},{"location":"PLUGINS_DEV_CONFIG/#overview-plugin-data-flow","title":"Overview: Plugin Data Flow","text":"
  1. Each plugin runs on a defined schedule.
  2. Aligning all plugin schedules is recommended so they execute in the same loop.
  3. During execution, all plugins write their collected data into the CurrentScan table.
  4. After all plugins complete, the CurrentScan table is evaluated to detect new devices, changes, and triggers.

Although plugins run independently, they contribute to the shared CurrentScan table. To inspect its contents, set LOG_LEVEL=trace and check for the log section:

================ CurrentScan table content ================\n
"},{"location":"PLUGINS_DEV_CONFIG/#configjson-lifecycle","title":"config.json Lifecycle","text":"

This section outlines how each plugin\u2019s config.json manifest is read, validated, and used by the core and plugins. It also describes plugin output expectations and the main plugin categories.

Tip

For detailed schema and examples, see the Plugin Development Guide.

"},{"location":"PLUGINS_DEV_CONFIG/#1-loading","title":"1. Loading","text":""},{"location":"PLUGINS_DEV_CONFIG/#2-validation","title":"2. Validation","text":""},{"location":"PLUGINS_DEV_CONFIG/#3-preparation","title":"3. Preparation","text":""},{"location":"PLUGINS_DEV_CONFIG/#4-execution","title":"4. Execution","text":""},{"location":"PLUGINS_DEV_CONFIG/#5-parsing","title":"5. Parsing","text":""},{"location":"PLUGINS_DEV_CONFIG/#6-mapping","title":"6. Mapping","text":"

Example: Object_PrimaryID \u2192 devMAC

"},{"location":"PLUGINS_DEV_CONFIG/#6a-plugin-output-contract","title":"6a. Plugin Output Contract","text":"

All plugins must follow the Plugin Interface Contract defined in PLUGINS_DEV.md. Output values are pipe-delimited in a fixed order.
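Producing one such line can be sketched as follows (the field values shown are illustrative placeholders; the authoritative field order is defined by the contract in PLUGINS_DEV.md):

```python
def to_plugin_line(fields):
    """Join plugin output values with the pipe delimiter, in the fixed
    order required by the contract. Values must not contain '|'."""
    cells = [str(f) for f in fields]
    assert all("|" not in c for c in cells), "values must not contain '|'"
    return "|".join(cells)

# Illustrative record: primary ID, secondary ID, then further contract fields
line = to_plugin_line(["00:11:22:33:44:55", "192.168.1.10", "2024-01-01 10:00:00"])
print(line)  # 00:11:22:33:44:55|192.168.1.10|2024-01-01 10:00:00
```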

"},{"location":"PLUGINS_DEV_CONFIG/#identifiers","title":"Identifiers","text":""},{"location":"PLUGINS_DEV_CONFIG/#watched-values-watched_value14","title":"Watched Values (Watched_Value1\u20134)","text":""},{"location":"PLUGINS_DEV_CONFIG/#extra-field-extra","title":"Extra Field (Extra)","text":""},{"location":"PLUGINS_DEV_CONFIG/#helper-values-helper_value13","title":"Helper Values (Helper_Value1\u20133)","text":""},{"location":"PLUGINS_DEV_CONFIG/#mapping","title":"Mapping","text":""},{"location":"PLUGINS_DEV_CONFIG/#7-persistence","title":"7. Persistence","text":""},{"location":"PLUGINS_DEV_CONFIG/#plugin-categories","title":"Plugin Categories","text":"

Plugins fall into several functional categories depending on their purpose and expected outputs.

"},{"location":"PLUGINS_DEV_CONFIG/#1-device-discovery-plugins","title":"1. Device Discovery Plugins","text":""},{"location":"PLUGINS_DEV_CONFIG/#2-device-data-enrichment-plugins","title":"2. Device Data Enrichment Plugins","text":""},{"location":"PLUGINS_DEV_CONFIG/#3-name-resolver-plugins","title":"3. Name Resolver Plugins","text":""},{"location":"PLUGINS_DEV_CONFIG/#4-generic-plugins","title":"4. Generic Plugins","text":""},{"location":"PLUGINS_DEV_CONFIG/#5-configuration-only-plugins","title":"5. Configuration-Only Plugins","text":""},{"location":"PLUGINS_DEV_CONFIG/#post-processing","title":"Post-Processing","text":"

After persistence:

"},{"location":"PLUGINS_DEV_CONFIG/#summary","title":"Summary","text":"

The lifecycle of a plugin configuration is:

Load \u2192 Validate \u2192 Prepare \u2192 Execute \u2192 Parse \u2192 Map \u2192 Persist \u2192 Post-process

Each plugin must:

"},{"location":"RANDOM_MAC/","title":"Privacy & Random MAC's","text":"

Some operating systems randomize MAC addresses to improve privacy.

This functionality hides the real MAC of the device and assigns a random MAC when it connects to Wi-Fi networks.

This behavior is especially useful when connecting to unknown Wi-Fi networks, but it serves no purpose on your own or other known networks.

I recommend disabling this on-device functionality when connecting devices to your own Wi-Fi networks. This way, NetAlertX can identify the device consistently instead of detecting it as a new device every time iOS or Android randomizes the MAC.

Random MACs are recognized by the characters \"2\", \"6\", \"A\", or \"E\" as the 2nd character of the MAC address. You can prevent specific prefixes from being detected as random MAC addresses by specifying the UI_NOT_RANDOM_MAC setting.
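The rule above can be sketched in a few lines of Python (is_random_mac is a hypothetical helper; the prefix exclusion mimics the intent of UI_NOT_RANDOM_MAC):

```python
def is_random_mac(mac, not_random_prefixes=()):
    """A MAC is treated as randomized when its 2nd character is
    2, 6, A, or E, unless it starts with an excluded prefix."""
    mac = mac.upper()
    if any(mac.startswith(p.upper()) for p in not_random_prefixes):
        return False  # prefix excluded, e.g. via UI_NOT_RANDOM_MAC
    return len(mac) > 1 and mac[1] in "26AE"

print(is_random_mac("DA:16:9C:6E:10:22"))  # True
print(is_random_mac("00:1A:2B:3C:4D:5E"))  # False
```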

"},{"location":"RANDOM_MAC/#windows","title":"Windows","text":""},{"location":"RANDOM_MAC/#ios","title":"IOS","text":""},{"location":"RANDOM_MAC/#android","title":"Android","text":""},{"location":"REMOTE_NETWORKS/","title":"Scanning Remote or Inaccessible Networks","text":"

By design, local network scanners such as arp-scan use ARP (Address Resolution Protocol) to map IP addresses to MAC addresses on the local network. Since ARP operates at Layer 2 (Data Link Layer), it typically works only within a single broadcast domain, usually limited to a single router or network segment.

Note

Ping and ARPSCAN use different protocols, so even if you can ping devices, it doesn't mean ARPSCAN can detect them.

To scan multiple locally accessible network segments, add them as subnets according to the subnets documentation. If ARPSCAN is not suitable for your setup, read on.

"},{"location":"REMOTE_NETWORKS/#complex-use-cases","title":"Complex Use Cases","text":"

The following network setups might make some devices undetectable with ARPSCAN. Check the specific setup to understand the cause and find potential workarounds to report on these devices.

"},{"location":"REMOTE_NETWORKS/#wi-fi-extenders","title":"Wi-Fi Extenders","text":"

Wi-Fi extenders typically create a separate network or subnet, which can prevent network scanning tools like arp-scan from detecting devices behind the extender.

Possible workaround: Scan the specific subnet that the extender uses, if it is separate from the main network.

"},{"location":"REMOTE_NETWORKS/#vpns","title":"VPNs","text":"

ARP operates at Layer 2 (Data Link Layer) and works only within a local area network (LAN). VPNs, which operate at Layer 3 (Network Layer), route traffic between networks, preventing ARP requests from discovering devices outside the local network.

VPNs use virtual interfaces (e.g., tun0, tap0) to encapsulate traffic, bypassing ARP-based discovery. Additionally, many VPNs use NAT, which masks individual devices behind a shared IP address.

Possible workaround: Configure the VPN to bridge networks instead of routing to enable ARP, though this depends on the VPN setup and security requirements.

"},{"location":"REMOTE_NETWORKS/#other-workarounds","title":"Other Workarounds","text":"

The following workarounds should work for most complex network setups.

"},{"location":"REMOTE_NETWORKS/#supplementing-plugins","title":"Supplementing Plugins","text":"

You can use supplementary plugins that employ alternate methods. Protocols used by the SNMPDSC or DHCPLSS plugins are widely supported on different routers and can be effective as workarounds. Check the plugins list to find a plugin that works with your router and network setup.

"},{"location":"REMOTE_NETWORKS/#multiple-netalertx-instances","title":"Multiple NetAlertX Instances","text":"

If you have servers in different networks, you can set up separate NetAlertX instances on those subnets and synchronize the results into one instance using the SYNC plugin.

"},{"location":"REMOTE_NETWORKS/#manual-entry","title":"Manual Entry","text":"

If you don't need to discover new devices and only need to report on their status (online, offline, down), you can manually enter devices and check their status using the ICMP plugin, which uses the ping command internally.

For more information on how to add devices manually (or dummy devices), refer to the Device Management documentation.

To create truly dummy devices, you can use a loopback or unspecified IP address (e.g., 127.0.0.1 or 0.0.0.0) so they appear online.

"},{"location":"REMOTE_NETWORKS/#nmap-and-fake-mac-addresses","title":"NMAP and Fake MAC Addresses","text":"

Scanning remote networks with NMAP is possible (via the NMAPDEV plugin), but since it cannot retrieve the MAC address, you need to enable the NMAPDEV_FAKE_MAC setting. This will generate a fake MAC address based on the IP address, allowing you to track devices. However, this can lead to inconsistencies, especially if the IP address changes or a previously logged device is rediscovered. If this setting is disabled, only the IP address will be discovered, and devices with missing MAC addresses will be skipped.

Check the NMAPDEV plugin for details.
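One deterministic way to derive a stable fake MAC from an IP is hashing (illustrative only; the algorithm NMAPDEV_FAKE_MAC actually uses may differ):

```python
import hashlib

def fake_mac_from_ip(ip):
    """Derive a stable, locally administered unicast MAC from an IP by
    hashing it. The same IP always yields the same MAC."""
    digest = hashlib.md5(ip.encode()).hexdigest()
    octets = [digest[i:i + 2] for i in range(0, 12, 2)]
    # set the locally administered bit, clear the multicast bit
    octets[0] = format((int(octets[0], 16) | 0x02) & 0xFE, "02x")
    return ":".join(octets).upper()

print(fake_mac_from_ip("192.168.1.5"))  # stable across runs
```

Because the locally administered bit is set, such MACs also match the random-MAC heuristic described elsewhere in these docs (second character 2, 6, A, or E).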

"},{"location":"REVERSE_DNS/","title":"Reverse DNS","text":""},{"location":"REVERSE_DNS/#setting-up-better-name-discovery-with-reverse-dns","title":"Setting up better name discovery with Reverse DNS","text":"

If you are running a DNS server, such as AdGuard, set up Private reverse DNS servers for better name resolution on your network. Enabling this setting allows NetAlertX to execute dig and nslookup commands to automatically resolve device names based on their IP addresses.

Tip

Before proceeding, ensure that name resolution plugins are enabled. You can customize how names are cleaned using the NEWDEV_NAME_CLEANUP_REGEX setting. To auto-update Fully Qualified Domain Names (FQDN), enable the REFRESH_FQDN setting.

Example 1: Reverse DNS disabled

jokob@Synology-NAS:/$ nslookup 192.168.1.58 ** server can't find 58.1.168.192.in-addr.arpa: NXDOMAIN

Example 2: Reverse DNS enabled

jokob@Synology-NAS:/$ nslookup 192.168.1.58 58.1.168.192.in-addr.arpa name = jokob-NUC.localdomain.
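A minimal Python sketch roughly equivalent to those nslookup calls:

```python
import socket

def reverse_lookup(ip):
    """Reverse-DNS lookup, similar to the dig/nslookup commands NetAlertX
    runs. Returns the resolved name, or None on NXDOMAIN/failure."""
    try:
        name, _aliases, _addrs = socket.gethostbyaddr(ip)
        return name
    except (socket.herror, socket.gaierror):
        return None

print(reverse_lookup("127.0.0.1"))  # e.g. "localhost" when a PTR record exists
```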

"},{"location":"REVERSE_DNS/#enabling-reverse-dns-in-adguard","title":"Enabling reverse DNS in AdGuard","text":"
  1. Navigate to Settings -> DNS Settings
  2. Locate Private reverse DNS servers
  3. Enter your router IP address, such as 192.168.1.1
  4. Make sure you have Use private reverse DNS resolvers ticked.
  5. Click Apply to save your settings.
"},{"location":"REVERSE_DNS/#specifying-the-dns-in-the-container","title":"Specifying the DNS in the container","text":"

You can specify the DNS server in the docker-compose to improve name resolution on your network.

services:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/jokob-sk/netalertx:latest\"\n    restart: unless-stopped\n    volumes:\n      -  /home/netalertx/config:/data/config\n      -  /home/netalertx/db:/data/db\n      -  /home/netalertx/log:/tmp/log\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n    network_mode: host\n    dns:           # specifying the DNS servers used for the container\n      - 10.8.0.1\n      - 10.8.0.17\n
"},{"location":"REVERSE_DNS/#using-a-custom-resolvconf-file","title":"Using a custom resolv.conf file","text":"

You can configure a custom /etc/resolv.conf file in docker-compose.yml and set the nameserver to your LAN DNS server (e.g.: Pi-Hole). See the relevant resolv.conf man entry for details.

"},{"location":"REVERSE_DNS/#docker-composeyml","title":"docker-compose.yml:","text":"
version: \"3\"\nservices:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/jokob-sk/netalertx:latest\"\n    restart: unless-stopped\n    volumes:\n      - ./config/app.conf:/data/config/app.conf\n      - ./db:/data/db\n      - ./log:/tmp/log\n      - ./config/resolv.conf:/etc/resolv.conf                          # Mapping the /resolv.conf file for better name resolution\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n    ports:\n      - \"20211:20211\"\n    network_mode: host\n
"},{"location":"REVERSE_DNS/#configresolvconf","title":"./config/resolv.conf:","text":"

The most important entry below is nameserver (you can add multiple):

nameserver 192.168.178.11\noptions edns0 trust-ad\nsearch example.com\n
"},{"location":"REVERSE_PROXY/","title":"Reverse Proxy Configuration","text":"

Submitted by amazing cvc90 \ud83d\ude4f

Note

There are various NGINX config files for NetAlertX: some for the bare-metal install, currently Debian 12 and Ubuntu 24 (netalertx.conf), and one for the Docker container (netalertx.template.conf).

The bare-metal ones are in the respective installer folder /app/install/\<system\>/netalertx.conf; the Docker one is in the install folder. Map, or use, the one appropriate for your setup.

"},{"location":"REVERSE_PROXY/#nginx-http-configuration-direct-path","title":"NGINX HTTP Configuration (Direct Path)","text":"
  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server {\n     listen 80;\n     server_name netalertx;\n     location / {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n     }\n    }\n
  1. Check your config with nginx -t. If there are any issues, it will tell you.

  2. Activate the new website by running the following command:

nginx -s reload or systemctl restart nginx

  1. Once NGINX restarts, you should be able to access the proxy website at http://netalertx/

"},{"location":"REVERSE_PROXY/#nginx-http-configuration-sub-path","title":"NGINX HTTP Configuration (Sub Path)","text":"
  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server {\n     listen 80;\n     server_name netalertx;\n     location ^~ /netalertx/ {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n          proxy_redirect ~^/(.*)$ /netalertx/$1;\n          rewrite ^/netalertx/?(.*)$ /$1 break;\n     }\n    }\n
  1. Check your config with nginx -t. If there are any issues, it will tell you.

  2. Activate the new website by running the following command:

nginx -s reload or systemctl restart nginx

  1. Once NGINX restarts, you should be able to access the proxy website at http://netalertx/netalertx/

"},{"location":"REVERSE_PROXY/#nginx-http-configuration-sub-path-with-module-ngx_http_sub_module","title":"NGINX HTTP Configuration (Sub Path) with module ngx_http_sub_module","text":"
  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server {\n     listen 80;\n     server_name netalertx;\n     location ^~ /netalertx/ {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n          proxy_redirect ~^/(.*)$ /netalertx/$1;\n          rewrite ^/netalertx/?(.*)$ /$1 break;\n      sub_filter_once off;\n      sub_filter_types *;\n      sub_filter 'href=\"/' 'href=\"/netalertx/';\n      sub_filter '(?>$host)/css' '/netalertx/css';\n      sub_filter '(?>$host)/js'  '/netalertx/js';\n      sub_filter '/img' '/netalertx/img';\n      sub_filter '/lib' '/netalertx/lib';\n      sub_filter '/php' '/netalertx/php';\n     }\n    }\n
  1. Check your config with nginx -t. If there are any issues, it will tell you.

  2. Activate the new website by running the following command:

nginx -s reload or systemctl restart nginx

  1. Once NGINX restarts, you should be able to access the proxy website at http://netalertx/netalertx/

NGINX HTTPS Configuration (Direct Path)

  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server {\n     listen 443 ssl;\n     server_name netalertx;\n     ssl_certificate /etc/ssl/certs/netalertx.pem;\n     ssl_certificate_key /etc/ssl/private/netalertx.key;\n     location / {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n     }\n    }\n
  1. Check your config with nginx -t. If there are any issues, it will tell you.

  2. Activate the new website by running the following command:

nginx -s reload or systemctl restart nginx

  1. Once NGINX restarts, you should be able to access the proxy website at https://netalertx/

NGINX HTTPS Configuration (Sub Path)

  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server {\n     listen 443 ssl;\n     server_name netalertx;\n     ssl_certificate /etc/ssl/certs/netalertx.pem;\n     ssl_certificate_key /etc/ssl/private/netalertx.key;\n     location ^~ /netalertx/ {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n          proxy_redirect ~^/(.*)$ /netalertx/$1;\n          rewrite ^/netalertx/?(.*)$ /$1 break;\n     }\n    }\n
  1. Check your config with nginx -t. If there are any issues, it will tell you.

  2. Activate the new website by running the following command:

nginx -s reload or systemctl restart nginx

  1. Once NGINX restarts, you should be able to access the proxy website at https://netalertx/netalertx/

"},{"location":"REVERSE_PROXY/#nginx-https-configuration-sub-path-with-module-ngx_http_sub_module","title":"NGINX HTTPS Configuration (Sub Path) with module ngx_http_sub_module","text":"
  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server {\n     listen 443 ssl;\n     server_name netalertx;\n     ssl_certificate /etc/ssl/certs/netalertx.pem;\n     ssl_certificate_key /etc/ssl/private/netalertx.key;\n     location ^~ /netalertx/ {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n          proxy_redirect ~^/(.*)$ /netalertx/$1;\n          rewrite ^/netalertx/?(.*)$ /$1 break;\n      sub_filter_once off;\n      sub_filter_types *;\n      sub_filter 'href=\"/' 'href=\"/netalertx/';\n      sub_filter '(?>$host)/css' '/netalertx/css';\n      sub_filter '(?>$host)/js'  '/netalertx/js';\n      sub_filter '/img' '/netalertx/img';\n      sub_filter '/lib' '/netalertx/lib';\n      sub_filter '/php' '/netalertx/php';\n     }\n    }\n
  1. Check your config with nginx -t. If there are any issues, it will tell you.

  2. Activate the new website by running the following command:

nginx -s reload or systemctl restart nginx

  1. Once NGINX restarts, you should be able to access the proxy website at https://netalertx/netalertx/

"},{"location":"REVERSE_PROXY/#apache-http-configuration-direct-path","title":"Apache HTTP Configuration (Direct Path)","text":"
  1. On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

  2. In this file, paste the following code:

    <VirtualHost *:80>\n         ServerName netalertx\n         ProxyPreserveHost On\n         ProxyPass / http://localhost:20211/\n         ProxyPassReverse / http://localhost:20211/\n    </VirtualHost>\n
  1. Check your config with httpd -t (or apache2ctl -t on Debian/Ubuntu). If there are any issues, it will tell you.

  2. Activate the new website by running the following command:

a2ensite netalertx followed by service apache2 reload

  1. Once Apache restarts, you should be able to access the proxy website at http://netalertx/

"},{"location":"REVERSE_PROXY/#apache-http-configuration-sub-path","title":"Apache HTTP Configuration (Sub Path)","text":"
  1. On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

  2. In this file, paste the following code:

    <VirtualHost *:80>\n         ServerName netalertx\n         ProxyPreserveHost On\n         <Location /netalertx/>\n               ProxyPass http://localhost:20211/\n               ProxyPassReverse http://localhost:20211/\n         </Location>\n    </VirtualHost>\n
  3. Check your config with httpd -t (or apache2ctl -t on Debian/Ubuntu). If there are any issues, it will tell you.

  4. Activate the new site by running the following commands:

a2ensite netalertx && systemctl reload apache2

  5. Once Apache reloads, you should be able to access the proxied website at http://netalertx/netalertx/

"},{"location":"REVERSE_PROXY/#apache-https-configuration-direct-path","title":"Apache HTTPS Configuration (Direct Path)","text":"
  1. On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

  2. In this file, paste the following code:

    <VirtualHost *:443>\n         ServerName netalertx\n         SSLEngine On\n         SSLCertificateFile /etc/ssl/certs/netalertx.pem\n         SSLCertificateKeyFile /etc/ssl/private/netalertx.key\n         ProxyPreserveHost On\n         ProxyPass / http://localhost:20211/\n         ProxyPassReverse / http://localhost:20211/\n    </VirtualHost>\n
  3. Check your config with httpd -t (or apache2ctl -t on Debian/Ubuntu). If there are any issues, it will tell you.

  4. Activate the new site by running the following commands:

    a2ensite netalertx && systemctl reload apache2

  5. Once Apache reloads, you should be able to access the proxied website at https://netalertx/

"},{"location":"REVERSE_PROXY/#apache-https-configuration-sub-path","title":"Apache HTTPS Configuration (Sub Path)","text":"
  1. On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

  2. In this file, paste the following code:

    <VirtualHost *:443> \n        ServerName netalertx\n        SSLEngine On \n        SSLCertificateFile /etc/ssl/certs/netalertx.pem\n        SSLCertificateKeyFile /etc/ssl/private/netalertx.key\n        ProxyPreserveHost On\n        <Location /netalertx/>\n              ProxyPass http://localhost:20211/\n              ProxyPassReverse http://localhost:20211/\n        </Location>\n    </VirtualHost>\n
  3. Check your config with httpd -t (or apache2ctl -t on Debian/Ubuntu). If there are any issues, it will tell you.

  4. Activate the new site by running the following commands:

a2ensite netalertx && systemctl reload apache2

  5. Once Apache reloads, you should be able to access the proxied website at https://netalertx/netalertx/

"},{"location":"REVERSE_PROXY/#reverse-proxy-example-by-using-linuxservers-swag-container","title":"Reverse proxy example by using LinuxServer's SWAG container.","text":"

Submitted by s33d1ing. \ud83d\ude4f

"},{"location":"REVERSE_PROXY/#linuxserverswag","title":"linuxserver/swag","text":"

In the SWAG container create /config/nginx/proxy-confs/netalertx.subfolder.conf with the following contents:

## Version 2023/02/05\n# make sure that your netalertx container is named netalertx\n# netalertx does not require a base url setting\n\n# Since NetAlertX uses a Host network, you may need to use the IP address of the system running NetAlertX for $upstream_app.\n\nlocation /netalertx {\n    return 301 $scheme://$host/netalertx/;\n}\n\nlocation ^~ /netalertx/ {\n    # enable the next two lines for http auth\n    #auth_basic \"Restricted\";\n    #auth_basic_user_file /config/nginx/.htpasswd;\n\n    # enable for ldap auth (requires ldap-server.conf in the server block)\n    #include /config/nginx/ldap-location.conf;\n\n    # enable for Authelia (requires authelia-server.conf in the server block)\n    #include /config/nginx/authelia-location.conf;\n\n    # enable for Authentik (requires authentik-server.conf in the server block)\n    #include /config/nginx/authentik-location.conf;\n\n    include /config/nginx/proxy.conf;\n    include /config/nginx/resolver.conf;\n\n    set $upstream_app netalertx;\n    set $upstream_port 20211;\n    set $upstream_proto http;\n\n    proxy_pass $upstream_proto://$upstream_app:$upstream_port;\n    proxy_set_header Accept-Encoding \"\";\n\n    proxy_redirect ~^/(.*)$ /netalertx/$1;\n    rewrite ^/netalertx/?(.*)$ /$1 break;\n\n    sub_filter_once off;\n    sub_filter_types *;\n\n    sub_filter 'href=\"/' 'href=\"/netalertx/';\n\n    sub_filter '(?>$host)/css' '/netalertx/css';\n    sub_filter '(?>$host)/js'  '/netalertx/js';\n\n    sub_filter '/img' '/netalertx/img';\n    sub_filter '/lib' '/netalertx/lib';\n    sub_filter '/php' '/netalertx/php';\n}\n
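The sub_filter directives above perform literal string replacement on the proxied HTML so that absolute links and asset paths resolve under the /netalertx/ sub-path. Their net effect can be sketched in Python (illustrative only; nginx applies these substitutions itself, and the $host-based filters are omitted here):

```python
# Literal substitutions mirroring the sub_filter directives above.
# nginx's sub_filter is plain string replacement, not regex.
REPLACEMENTS = [
    ('href="/', 'href="/netalertx/'),
    ('/img', '/netalertx/img'),
    ('/lib', '/netalertx/lib'),
    ('/php', '/netalertx/php'),
]

def rewrite(html: str) -> str:
    for old, new in REPLACEMENTS:
        html = html.replace(old, new)
    return html

page = '<a href="/devices.php"><img src="/img/logo.png">'
print(rewrite(page))
# -> <a href="/netalertx/devices.php"><img src="/netalertx/img/logo.png">
```

If an asset still 404s under the sub-path, comparing raw and rewritten HTML this way helps pinpoint which path pattern is missing a sub_filter rule.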

"},{"location":"REVERSE_PROXY/#traefik","title":"Traefik","text":"

Submitted by Isegrimm \ud83d\ude4f (based on this discussion)

Assuming the user already has a working Traefik setup, this is what's needed to make NetAlertX work at a URL like www.domain.com/netalertx/.

Note: Everything in these configs assumes 'www.domain.com' as your domain name and 'section31' as an arbitrary name for your certificate resolver. Substitute these with your own values.

Also, I use the prefix 'netalertx'. If you want to use another prefix, change it in these files: dynamic.toml and default.

Content of my YAML file (this is the generic Traefik config: it defines which ports to listen on, redirects HTTP to HTTPS, and sets up the certificate process). It also contains Authelia, which I use for authentication. This part contains nothing specific to NetAlertX.

version: '3.8'\n\nservices:\n  traefik:\n    image: traefik\n    container_name: traefik\n    command:\n      - \"--api=true\"\n      - \"--api.insecure=true\"\n      - \"--api.dashboard=true\"\n      - \"--entrypoints.web.address=:80\"\n      - \"--entrypoints.web.http.redirections.entryPoint.to=websecure\"\n      - \"--entrypoints.web.http.redirections.entryPoint.scheme=https\"\n      - \"--entrypoints.websecure.address=:443\"\n      - \"--providers.file.filename=/traefik-config/dynamic.toml\"\n      - \"--providers.file.watch=true\"\n      - \"--log.level=ERROR\"\n      - \"--certificatesresolvers.section31.acme.email=postmaster@domain.com\"\n      - \"--certificatesresolvers.section31.acme.storage=/traefik-config/acme.json\"\n      - \"--certificatesresolvers.section31.acme.httpchallenge=true\"\n      - \"--certificatesresolvers.section31.acme.httpchallenge.entrypoint=web\"\n    ports:\n      - \"80:80\"\n      - \"443:443\"\n      - \"8080:8080\"\n    volumes:\n      - \"/var/run/docker.sock:/var/run/docker.sock:ro\"\n      - /appl/docker/traefik/config:/traefik-config\n    depends_on:\n      - authelia\n    restart: unless-stopped\n  authelia:\n    container_name: authelia\n    image: authelia/authelia:latest\n    ports:\n      - \"9091:9091\"\n    volumes:\n      - /appl/docker/authelia:/config\n    restart: unless-stopped\n

Snippet of the dynamic.toml file (referenced in the YAML file above) that defines the config for NetAlertX. The following are self-defined keywords; everything else is Traefik keywords: - netalertx-router - netalertx-service - auth - netalertx-stripprefix

[http.routers]\n  [http.routers.netalertx-router]\n    entryPoints = [\"websecure\"]\n    rule = \"Host(`www.domain.com`) && PathPrefix(`/netalertx`)\"\n    service = \"netalertx-service\"\n    middlewares = [\"auth\", \"netalertx-stripprefix\"]\n    [http.routers.netalertx-router.tls]\n       certResolver = \"section31\"\n       [[http.routers.netalertx-router.tls.domains]]\n         main = \"www.domain.com\"\n\n[http.services]\n  [http.services.netalertx-service]\n    [[http.services.netalertx-service.loadBalancer.servers]]\n      url = \"http://internal-ip-address:20211/\"\n\n[http.middlewares]\n  [http.middlewares.auth.forwardAuth]\n    address = \"http://authelia:9091/api/verify?rd=https://www.domain.com/authelia/\"\n    trustForwardHeader = true\n    authResponseHeaders = [\"Remote-User\", \"Remote-Groups\", \"Remote-Name\", \"Remote-Email\"]\n  [http.middlewares.netalertx-stripprefix.stripprefix]\n    prefixes = [\"/netalertx\"]\n    forceSlash = false\n\n
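The netalertx-stripprefix middleware removes the /netalertx prefix before the request is forwarded to NetAlertX. Its effect can be sketched in Python (illustrative only; Traefik implements this internally):

```python
def strip_prefix(path: str, prefix: str = "/netalertx") -> str:
    # Traefik's StripPrefix middleware removes the matched prefix and
    # forwards the remainder; with forceSlash=false the remainder is
    # used as-is (an empty remainder becomes "/").
    if path == prefix or path.startswith(prefix + "/"):
        stripped = path[len(prefix):]
        return stripped or "/"
    return path

print(strip_prefix("/netalertx/devices.php"))  # -> /devices.php
print(strip_prefix("/other/page"))             # -> /other/page
```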

To make NetAlertX work with this setup, I modified the default file at /etc/nginx/sites-available/default inside the Docker container: I copied it to my local filesystem, added the changes specified by cvc90, and mounted the new file into the container, overwriting the original. By mapping the file instead of changing it in place, the changes persist when an updated Docker image is pulled. The downside is that the mapped file also masks any future updates to the default file, so I only use this as a temporary solution until the Docker image ships with this change.

Default-file:

server {\n    listen 80 default_server;\n    root /var/www/html;\n    index index.php;\n    #rewrite /netalertx/(.*) / permanent;\n    add_header X-Forwarded-Prefix \"/netalertx\" always;\n    proxy_set_header X-Forwarded-Prefix \"/netalertx\";\n\n  location ~* \\.php$ {\n    fastcgi_pass unix:/run/php/php8.2-fpm.sock;\n    include         fastcgi_params;\n    fastcgi_param   SCRIPT_FILENAME    $document_root$fastcgi_script_name;\n    fastcgi_param   SCRIPT_NAME        $fastcgi_script_name;\n    fastcgi_connect_timeout 75;\n          fastcgi_send_timeout 600;\n          fastcgi_read_timeout 600;\n  }\n}\n

Mapping the updated file (on the local filesystem at /appl/docker/netalertx/default) into the docker container:

docker run -d --rm --network=host \\\n  --name=netalertx \\\n  -v /appl/docker/netalertx/config:/data/config \\\n  -v /appl/docker/netalertx/db:/data/db \\\n  -v /appl/docker/netalertx/default:/etc/nginx/sites-available/default \\\n  -e TZ=Europe/Amsterdam \\\n  -e PORT=20211 \\\n  ghcr.io/jokob-sk/netalertx:latest\n\n
"},{"location":"SECURITY/","title":"Security Considerations","text":""},{"location":"SECURITY/#responsibility-disclaimer","title":"\ud83e\udded Responsibility Disclaimer","text":"

NetAlertX provides powerful tools for network scanning, presence detection, and automation. However, it is up to you\u2014the deployer\u2014to ensure that your instance is properly secured.

This includes (but is not limited to): - Controlling who has access to the UI and API - Following network and container security best practices - Running NetAlertX only on networks where you have legal authorization - Keeping your deployment up to date with the latest patches

NetAlertX is not responsible for misuse, misconfiguration, or insecure deployments. Always test and secure your setup before exposing it to the outside world.

"},{"location":"SECURITY/#securing-your-netalertx-instance","title":"\ud83d\udd10 Securing Your NetAlertX Instance","text":"

NetAlertX is a powerful network scanning and automation framework. With that power comes responsibility. It is your responsibility to secure your deployment, especially if you're running it outside a trusted local environment.

"},{"location":"SECURITY/#tldr-key-security-recommendations","title":"\u26a0\ufe0f TL;DR \u2013 Key Security Recommendations","text":""},{"location":"SECURITY/#access-control-with-vpn-or-tailscale","title":"\ud83d\udd17 Access Control with VPN (or Tailscale)","text":"

NetAlertX is designed to be run on private LANs, not the open internet.

Recommended: Use a VPN to access NetAlertX from remote locations.

"},{"location":"SECURITY/#tailscale-easy-vpn-alternative","title":"\u2705 Tailscale (Easy VPN Alternative)","text":"

Tailscale sets up a private mesh network between your devices. It's fast to configure and ideal for NetAlertX. \ud83d\udc49 Get started with Tailscale

"},{"location":"SECURITY/#web-ui-password-protection","title":"\ud83d\udd11 Web UI Password Protection","text":"

By default, NetAlertX does not require login. Before exposing the UI in any way:

  1. Enable password protection in app.conf: SETPWD_enable_password=true and SETPWD_password=your_secure_password

  2. Passwords are stored as SHA256 hashes

  3. Default password (if not changed): 123456 \u2014 change it ASAP!

To disable authenticated login, set SETPWD_enable_password=false in app.conf
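Since passwords are stored as SHA-256 hashes, you can compute the hash of a candidate password yourself. A minimal sketch (assuming the stored value is the plain hex digest of the UTF-8 password; verify against your own instance before relying on this):

```python
import hashlib

def sha256_hex(password: str) -> str:
    # Hex-encoded SHA-256 digest of the UTF-8 password
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# The default password "123456" hashes to a well-known digest,
# which is one more reason to change it immediately.
print(sha256_hex("123456"))
# -> 8d969eef6ecad3c29a3a629280e686cf0c3f5d5a86aff3ca12020c923adc6c92
```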

"},{"location":"SECURITY/#additional-security-measures","title":"\ud83d\udd25 Additional Security Measures","text":""},{"location":"SECURITY/#docker-hardening-tips","title":"\ud83e\uddf1 Docker Hardening Tips","text":""},{"location":"SECURITY/#responsible-disclosure","title":"\ud83d\udce3 Responsible Disclosure","text":"

If you discover a vulnerability or security concern, please report it privately to:

\ud83d\udce7 jokob@duck.com

We take security seriously and will work to patch confirmed issues promptly. Your help in responsible disclosure is appreciated!

By following these recommendations, you can ensure your NetAlertX deployment is both powerful and secure.

"},{"location":"SECURITY_FEATURES/","title":"NetAlertX Security: A Layered Defense","text":"

Your network security monitor has the \"keys to the kingdom,\" making it a prime target for attackers. If it gets compromised, the game is over.

NetAlertX is engineered from the ground up to prevent this. It's not just an app; it's a purpose-built security appliance. Its core design is built on a zero-trust philosophy, which is a modern way of saying we assume a breach will happen and plan for it. This isn't a single \"lock on the door\"; it's a \"defense-in-depth\" strategy, more like a medieval castle with a moat, high walls, and guards at every door.

Here\u2019s a breakdown of the defensive layers you get, right out of the box using the default configuration.

"},{"location":"SECURITY_FEATURES/#feature-1-the-digital-concrete-filesystem","title":"Feature 1: The \"Digital Concrete\" Filesystem","text":"

Methodology: The core application and its system files are treated as immutable. Once built, the app's code is \"set in concrete,\" preventing attackers from modifying it or planting malware.

What this means for you: Even if an attacker gets in, they cannot modify the application code or plant malware. It's like the app is set in digital concrete.

"},{"location":"SECURITY_FEATURES/#feature-2-surgical-keycard-only-access","title":"Feature 2: Surgical, \"Keycard-Only\" Access","text":"

Methodology: The principle of least privilege is strictly enforced. Every process gets only the absolute minimum set of permissions needed for its specific job.

What this means for you: A security breach is firewalled. An attacker who gets into the web UI does not have the \"keycard\" to start scanning your network or take over the system. The breach is contained.

"},{"location":"SECURITY_FEATURES/#feature-3-attack-surface-amputation","title":"Feature 3: Attack Surface \"Amputation\"","text":"

Methodology: The potential attack surface is aggressively minimized by removing every non-essential tool an attacker would want to use.

What this means for you: An attacker who breaks in finds themselves in an empty room with no tools. They have no sudo to get more power, no package manager to download weapons, and no compilers to build new ones.

"},{"location":"SECURITY_FEATURES/#feature-4-self-cleaning-writable-areas","title":"Feature 4: \"Self-Cleaning\" Writable Areas","text":"

Methodology: All writable locations are treated as untrusted, temporary, and non-executable by default.

What this means for you: Any malicious file an attacker does manage to drop is written in invisible, non-permanent ink. The file is written to RAM, not disk, so it vaporizes the instant you restart the container. Even worse for them, the noexec flag means they can't even run the file in the first place.

"},{"location":"SECURITY_FEATURES/#feature-5-built-in-resource-guardrails","title":"Feature 5: Built-in Resource Guardrails","text":"

Methodology: The container is constrained by resource limits to function as a \"good citizen\" on the host system. This prevents a compromised or runaway process from consuming excessive resources, a common vector for Denial of Service (DoS) attacks.

What this means for you: NetAlertX is a \"good neighbor\" and can't be used to crash your host machine. Even if a process is compromised, it's in a digital straitjacket and cannot pull a \"denial of service\" attack by hogging all your CPU or memory.

"},{"location":"SECURITY_FEATURES/#feature-6-the-pre-flight-self-check","title":"Feature 6: The \"Pre-Flight\" Self-Check","text":"

Methodology: Before any services start, NetAlertX runs a comprehensive \"pre-flight\" check to ensure its own security and configuration are sound. It's like a built-in auditor who verifies its own defenses.

What this means for you: The system is self-aware and checks its own work. You get instant feedback if a setting is wrong, and you get peace of mind on every single boot knowing all these security layers are active and verified, all in about one second.

"},{"location":"SECURITY_FEATURES/#conclusion-security-by-default","title":"Conclusion: Security by Default","text":"

No single security control is a silver bullet. The robust security posture of NetAlertX is achieved through defense in depth, layering these methodologies.

An adversary must not only gain initial access but must also find a way to write a payload to a non-executable, in-memory location, without access to any standard system tools, sudo, or a package manager. And they must do this while operating as an unprivileged user in a resource-limited environment where the application code is immutable and actively checks its own integrity on every boot.

"},{"location":"SESSION_INFO/","title":"Sessions Section in Device View","text":"

The Sessions Section provides details about a device's connection history. This data is automatically detected and cannot be edited by the user.

"},{"location":"SESSION_INFO/#key-fields","title":"Key Fields","text":"
  1. Date and Time of First Connection - Description: Displays the first detected connection time for the device. - Editability: Uneditable (auto-detected). - Source: Automatically captured when the device is first added to the system.

  2. Date and Time of Last Connection - Description: Shows the most recent time the device was online. - Editability: Uneditable (auto-detected). - Source: Updated with every new connection event.

  3. Offline Devices with Missing or Conflicting Data - Description: Handles cases where a device is offline but has incomplete or conflicting session data (e.g., missing start times). - Handling: The system flags these cases for review and attempts to infer missing details.
"},{"location":"SESSION_INFO/#how-sessions-are-discovered-and-calculated","title":"How Sessions are Discovered and Calculated","text":""},{"location":"SESSION_INFO/#1-detecting-new-devices","title":"1. Detecting New Devices","text":"

When a device is first detected in the network, the system logs it in the events table:

INSERT INTO Events (eve_MAC, eve_IP, eve_DateTime, eve_EventType, eve_AdditionalInfo, eve_PendingAlertEmail) SELECT cur_MAC, cur_IP, '{startTime}', 'New Device', cur_Vendor, 1 FROM CurrentScan WHERE NOT EXISTS (SELECT 1 FROM Devices WHERE devMac = cur_MAC)

"},{"location":"SESSION_INFO/#2-logging-connection-sessions","title":"2. Logging Connection Sessions","text":"

When a new connection is detected, the system creates a session record:

INSERT INTO Sessions (ses_MAC, ses_IP, ses_EventTypeConnection, ses_DateTimeConnection, ses_EventTypeDisconnection, ses_DateTimeDisconnection, ses_StillConnected, ses_AdditionalInfo) SELECT cur_MAC, cur_IP, 'Connected', '{startTime}', NULL, NULL, 1, cur_Vendor FROM CurrentScan WHERE NOT EXISTS (SELECT 1 FROM Sessions WHERE ses_MAC = cur_MAC)
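The WHERE NOT EXISTS guard above is what prevents a duplicate session row being created for a device that already has one. A simplified, self-contained sketch of that pattern using an in-memory SQLite database with reduced columns (the real schema has more fields):

```python
import sqlite3

# In-memory stand-in for the NetAlertX tables (simplified columns).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE CurrentScan (cur_MAC TEXT, cur_IP TEXT, cur_Vendor TEXT)")
db.execute("""CREATE TABLE Sessions (ses_MAC TEXT, ses_IP TEXT,
              ses_EventTypeConnection TEXT, ses_StillConnected INTEGER)""")
db.execute("INSERT INTO CurrentScan VALUES ('aa:bb:cc:dd:ee:ff', '192.168.1.10', 'AcmeCorp')")

insert_sql = """
INSERT INTO Sessions (ses_MAC, ses_IP, ses_EventTypeConnection, ses_StillConnected)
SELECT cur_MAC, cur_IP, 'Connected', 1 FROM CurrentScan
WHERE NOT EXISTS (SELECT 1 FROM Sessions WHERE ses_MAC = cur_MAC)
"""
db.execute(insert_sql)
db.execute(insert_sql)  # second scan: the NOT EXISTS guard inserts nothing
print(db.execute("SELECT COUNT(*) FROM Sessions").fetchone()[0])  # -> 1
```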

"},{"location":"SESSION_INFO/#3-handling-missing-or-conflicting-data","title":"3. Handling Missing or Conflicting Data","text":""},{"location":"SESSION_INFO/#4-updating-sessions","title":"4. Updating Sessions","text":"

The session information is then used to display the device presence under Monitoring -> Presence.

"},{"location":"SETTINGS_SYSTEM/","title":"Settings","text":""},{"location":"SETTINGS_SYSTEM/#setting-system","title":"\u2699 Setting system","text":"

This is an explanation of how settings are handled, intended for anyone thinking about writing their own plugin or contributing to the project.

If you are a user of the app, settings have a detailed description in the Settings section of the app. Open an issue if you'd like to clarify any of the settings.

"},{"location":"SETTINGS_SYSTEM/#data-storage","title":"\ud83d\udee2 Data storage","text":"

The source of truth for user-defined values is the app.conf file. Editing the file makes the App overwrite values in the Settings database table and in the table_settings.json file.

"},{"location":"SETTINGS_SYSTEM/#settings-database-table","title":"Settings database table","text":"

The Settings database table contains settings for App run purposes. The table is recreated every time the App restarts. The settings are loaded from the source-of-truth, that is the app.conf file. A high-level overview on the database structure can be found in the database documentation.

"},{"location":"SETTINGS_SYSTEM/#table_settingsjson","title":"table_settings.json","text":"

This is the API endpoint that reflects the state of the Settings database table. Settings can be accessed with the:

The json file is also cached on the client-side local storage of the browser.

"},{"location":"SETTINGS_SYSTEM/#appconf","title":"app.conf","text":"

Note

This is the source of truth for settings. User-defined values in this file always override default values specified in the Plugin definition.

The App generates two app.conf entries for every setting (since version 23.8). One entry is the setting value; the second is the __metadata associated with the setting. This __metadata entry contains the full setting definition in JSON format. It is currently unused, but is intended to extend the Settings system in the future.

"},{"location":"SETTINGS_SYSTEM/#plugin-settings","title":"Plugin settings","text":"

Note

This is the preferred way of adding settings going forward. I'll likely be migrating all app settings into plugin-based settings.

Plugin settings are loaded dynamically from the config.json of individual plugins. If a setting isn't defined in the app.conf file, it is initialized via the default_value property of a setting from the config.json file. Check the Plugins documentation, section \u2699 Setting object structure for details on the structure of the setting.

"},{"location":"SETTINGS_SYSTEM/#settings-process-flow","title":"Settings Process flow","text":"

The process flow is mostly managed by the initialise.py file.

The script is responsible for reading user-defined values from a configuration file (app.conf), initializing settings, and importing them into a database. It also handles plugins and their configurations.

Here's a high-level description of the code:

  1. Function Definitions:
  2. ccd: This function is used to handle user-defined settings and configurations. It takes several parameters related to the setting's name, default value, input type, options, group, and more. It saves the settings and their metadata in different lists (conf.mySettingsSQLsafe and conf.mySettings).

  3. importConfigs: This function is the main entry point of the script. It imports user settings from a configuration file, processes them, and saves them to the database.

  4. read_config_file: This function reads the configuration file (app.conf) and returns a dictionary containing the key-value pairs from the file.

  5. Importing Configuration and Initializing Settings:

  6. The importConfigs function starts by checking the modification time of the configuration file to determine if it needs to be re-imported. If the file has not been modified since the last import, the function skips the import process.

  7. The function reads the configuration file using the read_config_file function, which returns a dictionary of settings.

  8. The script then initializes various user-defined settings using the ccd function, based on the values read from the configuration file. These settings are categorized into groups such as \"General,\" \"Email,\" \"Webhooks,\" \"Apprise,\" and more.

  9. Plugin Handling:

  10. The script loads and handles plugins dynamically. It retrieves plugin configurations and iterates through each plugin.
  11. For each plugin, it extracts the prefix and settings related to that plugin and processes them similarly to other user-defined settings.
  12. It also handles scheduling for plugins with specific RUN_SCHD settings.

  13. Saving Settings to the Database:

  14. The script clears the existing settings in the database and inserts the updated settings into the database using SQL queries.

  15. Updating the API and Performing Cleanup:

  16. After importing the configurations, the script updates the API to reflect the changes in the settings.
  17. It saves the current timestamp to determine the next import time.
  18. Finally, it logs the successful import of the new configuration.
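The read_config_file step in the flow above boils down to parsing KEY=value pairs from app.conf. A simplified sketch (the real initialise.py also evaluates typed values, handles metadata entries, and merges plugin defaults):

```python
def read_config_file(text: str) -> dict:
    """Parse simple KEY=value lines, ignoring blanks and # comments."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

conf = read_config_file("SMTP_PORT=465\n# a comment\nSETPWD_enable_password=True\n")
print(conf)  # -> {'SMTP_PORT': '465', 'SETPWD_enable_password': 'True'}
```

Note that everything is read as a string here; the real importer converts values to their declared types before writing them to the Settings table.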
"},{"location":"SMTP/","title":"\ud83d\udce7 SMTP server guides","text":"

The SMTP plugin supports any SMTP server. Here are some commonly used services to help speed up your configuration.

Note

If you are using a self-hosted SMTP server, SSH into the container and verify (e.g. via ping) that your server is reachable from within the NetAlertX container. See also how to SSH into the container if you are running it as a Home Assistant addon.

"},{"location":"SMTP/#gmail","title":"Gmail","text":"
  1. Create an app password by following the instructions from Google; you need to enable 2FA for this to work: https://support.google.com/accounts/answer/185833

  2. Specify the following settings:

    SMTP_RUN='on_notification'\n    SMTP_SKIP_TLS=True\n    SMTP_FORCE_SSL=True \n    SMTP_PORT=465\n    SMTP_SERVER='smtp.gmail.com'\n    SMTP_PASS='16-digit passcode from google'\n    SMTP_REPORT_TO='some_target_email@gmail.com'\n
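Independently of NetAlertX, you can sanity-check these Gmail values with Python's smtplib. The sketch below only builds a message and shows how SMTP_FORCE_SSL=True with port 465 maps to an implicit-TLS connection; the addresses are placeholders, and calling send_via_gmail would actually send mail:

```python
import smtplib
import ssl
from email.message import EmailMessage

def build_message(sender: str, recipient: str) -> EmailMessage:
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "NetAlertX SMTP test"
    msg.set_content("If you can read this, the SMTP settings work.")
    return msg

def send_via_gmail(msg: EmailMessage, app_password: str) -> None:
    # Port 465 with SMTP_FORCE_SSL=True corresponds to an implicit-TLS
    # connection, i.e. smtplib.SMTP_SSL rather than SMTP + starttls().
    with smtplib.SMTP_SSL("smtp.gmail.com", 465,
                          context=ssl.create_default_context()) as server:
        server.login(msg["From"], app_password)
        server.send_message(msg)

msg = build_message("your_account@gmail.com", "some_target_email@gmail.com")
print(msg["To"])  # -> some_target_email@gmail.com
```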
"},{"location":"SMTP/#brevo","title":"Brevo","text":"

Brevo allows for 300 free emails per day as of the time of writing.

  1. Create an account on Brevo: https://www.brevo.com/free-smtp-server/
  2. Click your name -> SMTP & API
  3. Click Generate a new SMTP key
  4. Save the details and fill in the NetAlertX settings as below.
SMTP_SERVER='smtp-relay.brevo.com'\nSMTP_PORT=587\nSMTP_SKIP_LOGIN=False\nSMTP_USER='user@email.com'\nSMTP_PASS='xsmtpsib-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxx'\nSMTP_SKIP_TLS=False\nSMTP_FORCE_SSL=False\nSMTP_REPORT_TO='some_target_email@gmail.com'\nSMTP_REPORT_FROM='NetAlertX <user@email.com>'\n
"},{"location":"SMTP/#gmx","title":"GMX","text":"
  1. Go to your GMX account https://account.gmx.com
  2. Under Security Options enable 2FA (Two-factor authentication)
  3. Under Security Options generate an Application-specific password
  4. Home -> Email Settings -> POP3 & IMAP -> Enable access to this account via POP3 and IMAP
  5. In NetAlertX specify these settings:
    SMTP_RUN='on_notification'\n    SMTP_SERVER='mail.gmx.com'\n    SMTP_PORT=465\n    SMTP_USER='gmx_email@gmx.com'\n    SMTP_PASS='<your Application-specific password>'\n    SMTP_SKIP_TLS=True\n    SMTP_FORCE_SSL=True\n    SMTP_SKIP_LOGIN=False\n    SMTP_REPORT_FROM='gmx_email@gmx.com' # this has to be the same email as in SMTP_USER\n    SMTP_REPORT_TO='some_target_email@gmail.com'\n
"},{"location":"SUBNETS/","title":"Subnets Configuration","text":"

You need to specify the network interface and the network mask. You can also configure multiple subnets and specify VLANs (see VLAN exceptions below).

ARPSCAN can scan multiple networks if the network allows it. To scan networks directly, the subnets must be accessible from the network where NetAlertX is running. This means NetAlertX needs to have access to the interface attached to that subnet.

Warning

If you don't see all expected devices, run the following command in the NetAlertX container (replace the interface and IP mask): sudo arp-scan --interface=eth0 192.168.1.0/24

If this command returns no results, the network is not accessible due to your network or firewall restrictions (Wi-Fi Extenders, VPNs and inaccessible networks). If direct scans are not possible, check the remote networks documentation for workarounds.

"},{"location":"SUBNETS/#example-values","title":"Example Values","text":"

Note

Please use the UI to configure settings as it ensures the config file is in the correct format. Edit app.conf directly only when really necessary.

Tip

When adding more subnets, you may need to increase both the scan interval (ARPSCAN_RUN_SCHD) and the timeout (ARPSCAN_RUN_TIMEOUT)\u2014as well as similar settings for related plugins.

If the timeout is too short, you may see timeout errors in the log. To prevent the application from hanging due to unresponsive plugins, scans are canceled when they exceed the timeout limit.

To fix this: - Reduce the subnet size (e.g., change /16 to /24). - Increase the timeout (e.g., set ARPSCAN_RUN_TIMEOUT to 300 for a 5-minute timeout). - Extend the scan interval (e.g., set ARPSCAN_RUN_SCHD to */10 * * * * to scan every 10 minutes).

For more troubleshooting tips, see Debugging Plugins.
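The cancel-on-timeout behavior described above can be sketched with Python's subprocess timeout (illustrative only; the actual plugin runner is more involved):

```python
import subprocess
import sys

def run_with_timeout(cmd, timeout_sec):
    """Run a command, but cancel it if it exceeds the timeout."""
    try:
        subprocess.run(cmd, timeout=timeout_sec, check=False)
        return "completed"
    except subprocess.TimeoutExpired:
        return "canceled (timeout)"

# A 3-second "scan" with a 1-second budget gets canceled, which is how
# a too-low ARPSCAN_RUN_TIMEOUT surfaces as a timeout error in the log.
slow_cmd = [sys.executable, "-c", "import time; time.sleep(3)"]
print(run_with_timeout(slow_cmd, timeout_sec=1))  # -> canceled (timeout)
```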

"},{"location":"SUBNETS/#explanation","title":"Explanation","text":""},{"location":"SUBNETS/#network-mask","title":"Network Mask","text":"

Example value: 192.168.1.0/24

The arp-scan time itself depends on the number of IP addresses to check.

The number of IPs to check depends on the network mask you set in the SCAN_SUBNETS setting. For example, a /24 mask results in 256 IPs to check, whereas a /16 mask checks around 65,536 IPs. Each IP takes a couple of seconds, so an incorrect configuration could make arp-scan take hours instead of seconds.

Specify the network filter, which significantly speeds up the scan process. For example, the filter 192.168.1.0/24 covers IP ranges from 192.168.1.0 to 192.168.1.255.
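The difference in scan effort between masks is easy to quantify with Python's ipaddress module:

```python
import ipaddress

# Number of addresses arp-scan has to probe for a given SCAN_SUBNETS mask
for cidr in ("192.168.1.0/24", "192.168.0.0/16"):
    net = ipaddress.ip_network(cidr)
    print(f"{cidr} -> {net.num_addresses} addresses")
# 192.168.1.0/24 -> 256 addresses
# 192.168.0.0/16 -> 65536 addresses
```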

"},{"location":"SUBNETS/#network-interface-adapter","title":"Network Interface (Adapter)","text":"

Example value: --interface=eth0

The adapter will probably be eth0 or eth1. (Check System Info > Network Hardware, or run iwconfig in the container to find your interface name(s)).

Tip

As an alternative to iwconfig, run ip -o link show | awk -F': ' '!/lo|vir|docker/ {print $2}' in your container to find your interface name(s) (e.g. eth0, eth1): Synology-NAS:/# ip -o link show | awk -F': ' '!/lo|vir|docker/ {print $2}' sit0@NONE eth1 eth0

"},{"location":"SUBNETS/#vlans","title":"VLANs","text":"

Example value: --vlan=107

"},{"location":"SUBNETS/#vlans-on-a-hyper-v-setup","title":"VLANs on a Hyper-V Setup","text":"

Community-sourced content by mscreations from this discussion.

Tested Setup: Bare Metal \u2192 Hyper-V on Win Server 2019 \u2192 Ubuntu 22.04 VM \u2192 Docker \u2192 NetAlertX.

Approach 1 (may cause issues): Configure multiple network adapters in Hyper-V, each connected to a distinct VLAN via Hyper-V's network setup. However, this can prevent the Docker host from handling network traffic correctly and might interfere with other applications such as Authentik.

Approach 2 (working example):

Network connections to switches are configured as trunk and allow all VLANs access to the server.

By default, Hyper-V only allows untagged packets through to the VM interface, blocking VLAN-tagged packets. To fix this, follow these steps:

  1. Run the following command in PowerShell on the Hyper-V machine:

Set-VMNetworkAdapterVlan -VMName <Docker VM Name> -Trunk -NativeVlanId 0 -AllowedVlanIdList \"<comma separated list of vlans>\"

  1. Within the VM, set up sub-interfaces for each VLAN to enable scanning. On Ubuntu 22.04, Netplan can be used. In /etc/netplan/00-installer-config.yaml, add VLAN definitions:

network:\n  ethernets:\n    eth0:\n      dhcp4: yes\n  vlans:\n    eth0.2:\n      id: 2\n      link: eth0\n      addresses: [ \"192.168.2.2/24\" ]\n      routes:\n        - to: 192.168.2.0/24\n          via: 192.168.1.1\n

  1. Run sudo netplan apply to activate the interfaces for scanning in NetAlertX.

In this case, use 192.168.2.0/24 --interface=eth0.2 in NetAlertX.

"},{"location":"SUBNETS/#vlan-support-exceptions","title":"VLAN Support & Exceptions","text":"

Please note that macvlan interfaces are generally not reachable from the host they are configured on. This is general networking behavior, but feel free to clarify via a PR/issue.

"},{"location":"SYNOLOGY_GUIDE/","title":"Installation on a Synology NAS","text":"

There are different ways to install NetAlertX on a Synology, including SSH-ing into the machine and using the command line. For this guide, we will use the Project option in Container manager.

"},{"location":"SYNOLOGY_GUIDE/#create-the-folder-structure","title":"Create the folder structure","text":"

The folders you are creating below will contain the configuration and the database. Back them up regularly.

  1. Create a parent folder named netalertx
  2. Create a db sub-folder
  3. Create a config sub-folder
  4. Note down the folder locations

  1. Open Container manager -> Project and click Create.
  2. Fill in the details:
     - Project name: netalertx
     - Path: /app_storage/netalertx (will differ from yours)
  3. Paste in the following template:
version: \"3\"\nservices:\n  netalertx:\n    container_name: netalertx\n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\" \n    image: \"ghcr.io/jokob-sk/netalertx:latest\"      \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - local/path/config:/data/config\n      - local/path/db:/data/db      \n      # (optional) useful for debugging if you have issues setting up the container\n      - local/path/logs:/tmp/log\n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n

  1. Replace the paths to your volume and comment out unnecessary line(s). This is only an example; your paths will differ.

 volumes:\n      - /volume1/app_storage/netalertx/config:/data/config\n      - /volume1/app_storage/netalertx/db:/data/db      \n      # (optional) useful for debugging if you have issues setting up the container\n      # - local/path/logs:/tmp/log <- commented out with # \u26a0\n

  1. (optional) Change the port number from 20211 to an unused port if this port is already used.
  2. Build the project:

  1. Navigate to <Synology URL>:20211 (or your custom port).
  2. Read the Subnets and Plugins docs to complete your setup.
"},{"location":"UPDATES/","title":"Docker Update Strategies to upgrade NetAlertX","text":"

Warning

If you are on a version prior to v25.6.7, upgrade to version v25.5.24 first (docker pull ghcr.io/jokob-sk/netalertx:25.5.24), as later versions don't support a full upgrade from those releases. Alternatively, devices and settings can be migrated manually, e.g. via CSV import.

This guide outlines approaches for updating Docker containers, usually when upgrading to a newer version of NetAlertX. Each method offers different benefits depending on the situation.

You can choose any approach that fits your workflow.

In the examples I assume that the container name is netalertx and the image name is netalertx as well.

Note

See also Backup strategies to be on the safe side.

"},{"location":"UPDATES/#1-manual-updates","title":"1. Manual Updates","text":"

Use this method when you need precise control over a single container or when dealing with a broken container that needs immediate attention. Example Commands

To manually update the netalertx container, stop it, delete it, remove the old image, and start a fresh one with docker-compose.

# Stop the container\nsudo docker container stop netalertx\n\n# Remove the container\nsudo docker container rm netalertx\n\n# Remove the old image\nsudo docker image rm netalertx\n\n# Pull and start a new container\nsudo docker-compose up -d\n
"},{"location":"UPDATES/#alternative-force-pull-with-docker-compose","title":"Alternative: Force Pull with Docker Compose","text":"

You can also use --pull always to ensure Docker pulls the latest image before starting the container:

sudo docker-compose up --pull always -d\n
"},{"location":"UPDATES/#2-dockcheck-for-bulk-container-updates","title":"2. Dockcheck for Bulk Container Updates","text":"

Always check the Dockcheck docs if encountering issues with the guide below.

Dockcheck is a useful tool if you have multiple containers to update and some flexibility for handling potential issues that might arise during mass updates. Dockcheck allows you to inspect each container and decide when to update.

"},{"location":"UPDATES/#example-workflow-with-dockcheck","title":"Example Workflow with Dockcheck","text":"

You might use Dockcheck to:

Dockcheck can help streamline bulk updates, especially if you\u2019re managing multiple containers.

Below is a script I use to update the Dockcheck script itself and then check for container updates:

cd /path/to/Docker &&\nrm dockcheck.sh &&\nwget https://raw.githubusercontent.com/mag37/dockcheck/main/dockcheck.sh &&\nsudo chmod +x dockcheck.sh &&\nsudo ./dockcheck.sh\n
"},{"location":"UPDATES/#3-automated-updates-with-watchtower","title":"3. Automated Updates with Watchtower","text":"

Always check the watchtower docs if encountering issues with the guide below.

Watchtower monitors your Docker containers and automatically updates them when new images are available. This is ideal for ongoing updates without manual intervention.

"},{"location":"UPDATES/#setting-up-watchtower","title":"Setting Up Watchtower","text":""},{"location":"UPDATES/#1-pull-the-watchtower-image","title":"1. Pull the Watchtower Image:","text":"
docker pull containrrr/watchtower\n
"},{"location":"UPDATES/#2-run-watchtower-to-update-all-images","title":"2. Run Watchtower to update all images:","text":"
docker run -d \\\n  --name watchtower \\\n  -v /var/run/docker.sock:/var/run/docker.sock \\\n  containrrr/watchtower \\\n  --interval 300 # Check for updates every 5 minutes\n
"},{"location":"UPDATES/#3-run-watchtower-to-update-only-netalertx","title":"3. Run Watchtower to update only NetAlertX:","text":"

You can specify which containers to monitor by listing them. For example, to monitor netalertx only:

docker run -d \\\n  --name watchtower \\\n  -v /var/run/docker.sock:/var/run/docker.sock \\\n  containrrr/watchtower netalertx\n\n
"},{"location":"UPDATES/#4-portainer-controlled-image","title":"4. Portainer controlled image","text":"

This assumes you're using Portainer to manage Docker (or Docker Swarm) and want to pull the latest version of an image and redeploy the container.

Note

"},{"location":"UPDATES/#41-steps-to-update-an-image-in-portainer-standalone-docker","title":"4.1 Steps to Update an Image in Portainer (Standalone Docker)","text":"
  1. Login to Portainer.
  2. Go to \"Containers\" in the left sidebar.
  3. Find the container you want to update, click its name.
  4. Click \"Recreate\" (top right).
  5. Tick: Pull latest image (this ensures Portainer fetches the newest version from Docker Hub or your registry).
  6. Click \"Recreate\" again.
  7. Wait for the container to be stopped, removed, and recreated with the updated image.
"},{"location":"UPDATES/#42-for-docker-swarm-services","title":"4.2 For Docker Swarm Services","text":"

If you're using Docker Swarm (under \"Stacks\" or \"Services\"):

  1. Go to \"Stacks\".
  2. Select the stack managing the container.
  3. Click \"Editor\" (or \"Update the Stack\").
  4. Add a version tag or use :latest if your image tag is latest (not recommended for production).
  5. Click \"Update the Stack\". \u26a0 Portainer will not pull the new image unless the tag changes OR the stack is forced to recreate.
  6. If image tag hasn't changed, go to \"Services\", find the service, and click \"Force Update\".
"},{"location":"UPDATES/#summary","title":"Summary","text":"\n| Method | Type | Pros | Cons |\n| --- | --- | --- | --- |\n| Manual | CLI | Full control, no dependencies | Tedious for many containers |\n| Dockcheck | CLI Script | Great for batch updates | Needs setup, semi-automated |\n| Watchtower | Daemonized | Fully automated updates | Less control, version drift |\n| Portainer | UI | Easy via web interface | No auto-updates |\n

These approaches allow you to maintain flexibility in how you update Docker containers, depending on the urgency and scale of the update.

"},{"location":"VERSIONS/","title":"Versions","text":""},{"location":"VERSIONS/#am-i-running-the-latest-released-version","title":"Am I running the latest released version?","text":"

Since version 23.01.14 NetAlertX uses a simple timestamp-based version check to verify if a new version is available. You can check the current and past releases here, or have a look at what I'm currently working on.

If you are not on the latest version, the app will notify you that a new released version is available in the following ways:

"},{"location":"VERSIONS/#via-email-on-a-notification-event","title":"\ud83d\udce7 Via email on a notification event","text":"

If any notification occurs and an email is sent, the email will contain a note that a new version is available. See the sample email below:

"},{"location":"VERSIONS/#in-the-ui","title":"\ud83c\udd95 In the UI","text":"

In the UI via a notification Icon and via a custom message in the Maintenance section.

For comparison, this is how the UI looks if you are on the latest stable image:

"},{"location":"VERSIONS/#implementation-details","title":"Implementation details","text":"

During the build, a /app/front/buildtimestamp.txt file is created. The app then periodically checks GitHub's REST-based JSON endpoint for a release with a newer timestamp (check the def isNewVersion: method for details).
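The check itself reduces to a timestamp comparison. Below is a minimal, illustrative sketch; the names are hypothetical and not NetAlertX's actual code:

```python
# Minimal sketch of a timestamp-based version check. Names are illustrative;
# NetAlertX's real logic lives in its isNewVersion method.
from datetime import datetime, timezone

def is_new_version(build_timestamp: int, latest_release_timestamp: int) -> bool:
    """A newer published release timestamp means an update is available."""
    return latest_release_timestamp > build_timestamp

# Example: an image built in January vs. a release published in June
build = int(datetime(2024, 1, 1, tzinfo=timezone.utc).timestamp())
release = int(datetime(2024, 6, 1, tzinfo=timezone.utc).timestamp())
print(is_new_version(build, release))  # True
```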

"},{"location":"WEBHOOK_N8N/","title":"Webhooks (n8n)","text":""},{"location":"WEBHOOK_N8N/#create-a-simple-n8n-workflow","title":"Create a simple n8n workflow","text":"

Note

You need to enable the WEBHOOK plugin first in order to follow this guide. See the Plugins guide for details.

N8N can be used for more advanced conditional notification use cases. For example, you may want to be notified only if two devices out of a specified list are down. Or you can use other plugins to process the notifications further. Below is a simple example of sending an email on a webhook.

"},{"location":"WEBHOOK_N8N/#specify-your-email-template","title":"Specify your email template","text":"

See sample JSON if you want to see the JSON paths used in the email template below

Events count: {{ $json[\"body\"][\"attachments\"][0][\"text\"][\"events\"].length }}\nNew devices count: {{ $json[\"body\"][\"attachments\"][0][\"text\"][\"new_devices\"].length }}\n
"},{"location":"WEBHOOK_N8N/#get-your-webhook-in-n8n","title":"Get your webhook in n8n","text":""},{"location":"WEBHOOK_N8N/#configure-netalertx-to-point-to-the-above-url","title":"Configure NetAlertX to point to the above URL","text":""},{"location":"WEBHOOK_SECRET/","title":"Webhook Secrets","text":"

Note

You need to enable the WEBHOOK plugin first in order to follow this guide. See the Plugins guide for details.

"},{"location":"WEBHOOK_SECRET/#how-does-the-signing-work","title":"How does the signing work?","text":"

NetAlertX will use the configured secret to create a hash signature of the request body. This SHA256-HMAC signature will appear in the X-Webhook-Signature header of each request to the webhook target URL. You can use the value of this header to validate the request was sent by NetAlertX.
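On the receiving side, validation means recomputing the HMAC over the raw body and comparing it to the header. A minimal Python sketch (the sign/is_valid helper names are illustrative, not part of NetAlertX):

```python
# Sketch of validating the X-Webhook-Signature header. Assumes the header
# carries "sha256=" followed by the hex-encoded HMAC-SHA256 of the raw body.
import hashlib
import hmac

def sign(secret: str, body: bytes) -> str:
    """Compute the expected header value for a request body."""
    digest = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return f"sha256={digest}"

def is_valid(secret: str, body: bytes, header_value: str) -> bool:
    # compare_digest guards against timing attacks
    return hmac.compare_digest(sign(secret, body), header_value)

body = b'{"test":"this is a test body"}'
header = sign("this is my secret", body)
print(is_valid("this is my secret", body, header))  # True
```

You can check your implementation against the secret and payload listed under "Testing the webhook payload validation" below.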

"},{"location":"WEBHOOK_SECRET/#activating-webhook-signatures","title":"Activating webhook signatures","text":"

All you need to do in order to add a signature to the request headers is to set the WEBHOOK_SECRET config value to a non-empty string.
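For example (a sketch, assuming the standard NAME='value' form used in the app's config file, and reusing the test secret from the section below purely for illustration):

```
WEBHOOK_SECRET='this is my secret'
```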

"},{"location":"WEBHOOK_SECRET/#validating-webhook-deliveries","title":"Validating webhook deliveries","text":"

There are a few things to keep in mind when validating the webhook delivery:

"},{"location":"WEBHOOK_SECRET/#testing-the-webhook-payload-validation","title":"Testing the webhook payload validation","text":"

You can use the following secret and payload to verify that your implementation is working correctly.

secret: 'this is my secret'

payload: '{\"test\":\"this is a test body\"}'

If your implementation is correct, the signature you generated should match the following:

signature: bed21fcc34f98e94fd71c7edb75e51a544b4a3b38b069ebaaeb19bf4be8147e9

X-Webhook-Signature: sha256=bed21fcc34f98e94fd71c7edb75e51a544b4a3b38b069ebaaeb19bf4be8147e9

"},{"location":"WEBHOOK_SECRET/#more-information","title":"More information","text":"

If you want to learn more about webhook security, take a look at GitHub's webhook documentation.

You can find examples for validating a webhook delivery here.

"},{"location":"WEB_UI_PORT_DEBUG/","title":"Debugging inaccessible UI","text":"

The application uses the following default ports:

The Web UI is served by an nginx server, while the API backend runs on a Flask (Python) server.

"},{"location":"WEB_UI_PORT_DEBUG/#changing-ports","title":"Changing Ports","text":"

For more information, check the Docker installation guide.

"},{"location":"WEB_UI_PORT_DEBUG/#possible-issues-and-troubleshooting","title":"Possible issues and troubleshooting","text":"

Work through all of the steps below to rule out potential causes and troubleshoot the problem faster.

"},{"location":"WEB_UI_PORT_DEBUG/#1-port-conflicts","title":"1. Port conflicts","text":"

When opening an issue or debugging:

  1. Include a screenshot of what you see when accessing http://<your rpi IP>:20211 (or your custom port).
  2. Follow steps 1, 2, 3, 4 on this page.
  3. Execute the following in the container to see the processes and their ports, and submit a screenshot of the result: sudo apk add lsof followed by sudo lsof -i.
  4. Try running the nginx command in the container; if you get nginx: [emerg] bind() to 0.0.0.0:20211 failed (98: Address in use), try using a different port number.

"},{"location":"WEB_UI_PORT_DEBUG/#2-javascript-issues","title":"2. JavaScript issues","text":"

Check for browser console (F12 browser dev console) errors + check different browsers.

"},{"location":"WEB_UI_PORT_DEBUG/#3-clear-the-app-cache-and-cached-javascript-files","title":"3. Clear the app cache and cached JavaScript files","text":"

Refresh the browser cache (usually Shift + refresh), try a private window, or a different browser. Please also refresh the app cache by clicking the \ud83d\udd03 (reload) button in the header of the application.

"},{"location":"WEB_UI_PORT_DEBUG/#4-disable-proxies","title":"4. Disable proxies","text":"

If you have any reverse proxy or similar, try disabling it.

"},{"location":"WEB_UI_PORT_DEBUG/#5-disable-your-firewall","title":"5. Disable your firewall","text":"

If you are using a firewall, try temporarily disabling it.

"},{"location":"WEB_UI_PORT_DEBUG/#6-post-your-docker-start-details","title":"6. Post your docker start details","text":"

If you haven't, post your docker compose/run command.

"},{"location":"WEB_UI_PORT_DEBUG/#7-check-for-errors-in-your-phpnginx-error-logs","title":"7. Check for errors in your PHP/NGINX error logs","text":"

In the container execute and investigate:

cat /var/log/nginx/error.log

cat /tmp/log/app.php_errors.log

"},{"location":"WEB_UI_PORT_DEBUG/#8-make-sure-permissions-are-correct","title":"8. Make sure permissions are correct","text":"

Tip

You can try to start the container without mapping the /data/config and /data/db dirs and if the UI shows up then the issue is most likely related to your file system permissions or file ownership.

Please read the Permissions troubleshooting guide and provide a screenshot of the permissions and ownership of the /data/db and /data/config directories.

"},{"location":"WORKFLOWS/","title":"Workflows Overview","text":"

The workflows module in NetAlertX allows you to automate repetitive tasks, making network management more efficient. Whether you need to assign newly discovered devices to a specific Network Node, auto-group devices from a given vendor, unarchive a device if it's detected online, or automatically delete devices, this module provides the flexibility to tailor the automations to your needs.

Below are a few examples that demonstrate how this module can be used to simplify network management tasks.

"},{"location":"WORKFLOWS/#updating-workflows","title":"Updating Workflows","text":"

Note

In order to apply a workflow change, you must first Save the changes and then reload the application by clicking Restart server.

"},{"location":"WORKFLOWS/#workflow-components","title":"Workflow components","text":""},{"location":"WORKFLOWS/#triggers","title":"Triggers","text":"

Triggers define the event that activates a workflow. They monitor changes to objects within the system, such as updates to devices or the insertion of new entries. When the specified event occurs, the workflow is executed.

Tip

Workflows not running? Check the Workflows debugging guide for how to troubleshoot triggers and conditions.

"},{"location":"WORKFLOWS/#example-trigger","title":"Example Trigger:","text":"

This trigger will activate when a Device object is updated.
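Expressed in the workflow definition format used in the Workflow Examples, such a trigger looks like this:

```json
{
  "object_type": "Devices",
  "event_type": "update"
}
```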

"},{"location":"WORKFLOWS/#conditions","title":"Conditions","text":"

Conditions determine whether a workflow should proceed based on certain criteria. These criteria can be set for specific fields, such as whether a device is from a certain vendor, or whether it is new or archived. You can combine conditions using logical operators (AND, OR).

Tip

To better understand how to use specific Device fields, please read through the Database overview guide.

"},{"location":"WORKFLOWS/#example-condition","title":"Example Condition:","text":"

This condition checks if the device's vendor is Google. The workflow will only proceed if the condition is true.
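Using the condition format from the Workflow Examples, such a check can be written as:

```json
{
  "logic": "AND",
  "conditions": [
    {
      "field": "devVendor",
      "operator": "contains",
      "value": "Google"
    }
  ]
}
```

The contains operator matches any vendor string that includes Google; use equals for an exact match.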

"},{"location":"WORKFLOWS/#actions","title":"Actions","text":"

Actions define the tasks that the workflow will perform once the conditions are met. Actions can include updating fields or deleting devices.

You can include multiple actions that should execute once the conditions are met.

"},{"location":"WORKFLOWS/#example-action","title":"Example Action:","text":"

This action updates the devIsNew field to 0, marking the device as no longer new.
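In the workflow definition format, this action looks like this:

```json
{
  "type": "update_field",
  "field": "devIsNew",
  "value": "0"
}
```

Multiple such entries can be listed in the actions array of a workflow.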

"},{"location":"WORKFLOWS/#examples","title":"Examples","text":"

You can find a couple of configuration examples in Workflow Examples.

Tip

Share your workflows in Discord or GitHub Discussions.

"},{"location":"WORKFLOWS_DEBUGGING/","title":"Workflows debugging and troubleshooting","text":"

Tip

Before troubleshooting, please ensure you have Debugging enabled.

Workflows are triggered by various events. These events are captured and listed in the Integrations -> App Events section of the application.

"},{"location":"WORKFLOWS_DEBUGGING/#troubleshooting-triggers","title":"Troubleshooting triggers","text":"

Note

Workflow events are processed once every 5 seconds. However, if a scan or other background tasks are running, this can cause a delay of up to a few minutes.

If an event doesn't trigger a workflow as expected, check the App Events section for the event. You can filter these by the ID of the device (devMAC or devGUID).

Once you find the Event Guid and Object GUID, use them to find relevant debug entries.

Navigate to Maintenance -> Logs, where you can filter the logs based on the Event or Object GUID.

Below you can find some example app.log entries that will help you understand why a Workflow was or was not triggered.

16:27:03 [WF] Checking if '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggers the workflow 'Sample Device Update Workflow'\n16:27:03 [WF] self.triggered 'False' for event '[[155], ['13f0ce26-1835-4c48-ae03-cdaf38f328fe'], [0], ['2025-04-02 05:26:56'], ['Devices'], ['050b6980-7af6-4409-950d-08e9786b7b33'], ['DEVICES'], ['00:11:32:ef:a5:6c'], ['192.168.1.82'], ['050b6980-7af6-4409-950d-08e9786b7b33'], [None], [0], [0], ['devPresentLastScan'], ['online'], ['update'], [None], [None], [None], [None]] and trigger {\"object_type\": \"Devices\", \"event_type\": \"insert\"}' \n16:27:03 [WF] Checking if '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggers the workflow 'Location Change'\n16:27:03 [WF] self.triggered 'True' for event '[[155], ['13f0ce26-1835-4c48-ae03-cdaf38f328fe'], [0], ['2025-04-02 05:26:56'], ['Devices'], ['050b6980-7af6-4409-950d-08e9786b7b33'], ['DEVICES'], ['00:11:32:ef:a5:6c'], ['192.168.1.82'], ['050b6980-7af6-4409-950d-08e9786b7b33'], [None], [0], [0], ['devPresentLastScan'], ['online'], ['update'], [None], [None], [None], [None]] and trigger {\"object_type\": \"Devices\", \"event_type\": \"update\"}' \n16:27:03 [WF] Event with GUID '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggered the workflow 'Location Change'\n

Note how one trigger executed, but the other didn't based on different \"event_type\" values. One is \"event_type\": \"insert\", the other \"event_type\": \"update\".

Given the Event is an update event (note ...['online'], ['update'], [None]... in the event structure), the \"event_type\": \"insert\" trigger didn't execute.

"},{"location":"WORKFLOW_EXAMPLES/","title":"Workflow examples","text":"

Workflows in NetAlertX automate actions based on real-time events and conditions. Below are practical examples that demonstrate how to build automation using triggers, conditions, and actions.

"},{"location":"WORKFLOW_EXAMPLES/#example-1-un-archive-devices-if-detected-online","title":"Example 1: Un-archive devices if detected online","text":"

This workflow automatically unarchives a device if it was previously archived but has now been detected as online.

"},{"location":"WORKFLOW_EXAMPLES/#use-case","title":"\ud83d\udccb Use Case","text":"

Sometimes devices are manually archived (e.g., no longer expected on the network), but they reappear unexpectedly. This workflow reverses the archive status when such devices are detected during a scan.

"},{"location":"WORKFLOW_EXAMPLES/#workflow-configuration","title":"\u2699\ufe0f Workflow Configuration","text":"
{\n  \"name\": \"Un-archive devices if detected online\",\n  \"trigger\": {\n    \"object_type\": \"Devices\",\n    \"event_type\": \"update\"\n  },\n  \"conditions\": [\n    {\n      \"logic\": \"AND\",\n      \"conditions\": [\n        {\n          \"field\": \"devIsArchived\",\n          \"operator\": \"equals\",\n          \"value\": \"1\"\n        },\n        {\n          \"field\": \"devPresentLastScan\",\n          \"operator\": \"equals\",\n          \"value\": \"1\"\n        }\n      ]\n    }\n  ],\n  \"actions\": [\n    {\n      \"type\": \"update_field\",\n      \"field\": \"devIsArchived\",\n      \"value\": \"0\"\n    }\n  ],\n  \"enabled\": \"Yes\"\n}\n
"},{"location":"WORKFLOW_EXAMPLES/#explanation","title":"\ud83d\udd0d Explanation","text":"
- Trigger: Listens for updates to device records.\n- Conditions:\n    - `devIsArchived` is `1` (archived).\n    - `devPresentLastScan` is `1` (device was detected in the latest scan).\n- Action: Updates the device to set `devIsArchived` to `0` (unarchived).\n
"},{"location":"WORKFLOW_EXAMPLES/#result","title":"\u2705 Result","text":"

Whenever a previously archived device shows up during a network scan, it will be automatically unarchived \u2014 allowing it to reappear in your device lists and dashboards.


"},{"location":"WORKFLOW_EXAMPLES/#example-2-assign-device-to-network-node-based-on-ip","title":"Example 2: Assign Device to Network Node Based on IP","text":"

This workflow assigns newly added devices with IP addresses in the 192.168.1.* range to a specific network node with MAC address 6c:6d:6d:6c:6c:6c.

"},{"location":"WORKFLOW_EXAMPLES/#use-case_1","title":"\ud83d\udccb Use Case","text":"

When new devices join your network, assigning them to the correct network node is important for accurate topology and grouping. This workflow ensures devices in a specific subnet are automatically linked to the intended node.

"},{"location":"WORKFLOW_EXAMPLES/#workflow-configuration_1","title":"\u2699\ufe0f Workflow Configuration","text":"
{\n  \"name\": \"Assign Device to Network Node Based on IP\",\n  \"trigger\": {\n    \"object_type\": \"Devices\",\n    \"event_type\": \"insert\"\n  },\n  \"conditions\": [\n    {\n      \"logic\": \"AND\",\n      \"conditions\": [\n        {\n          \"field\": \"devLastIP\",\n          \"operator\": \"contains\",\n          \"value\": \"192.168.1.\"\n        }\n      ]\n    }\n  ],\n  \"actions\": [\n    {\n      \"type\": \"update_field\",\n      \"field\": \"devNetworkNode\",\n      \"value\": \"6c:6d:6d:6c:6c:6c\"\n    }\n  ],\n  \"enabled\": \"Yes\"\n}\n
"},{"location":"WORKFLOW_EXAMPLES/#explanation_1","title":"\ud83d\udd0d Explanation","text":""},{"location":"WORKFLOW_EXAMPLES/#result_1","title":"\u2705 Result","text":"

New devices with IPs in the 192.168.1.* subnet are automatically assigned to the correct network node, streamlining device organization and reducing manual work.

"},{"location":"WORKFLOW_EXAMPLES/#example-3-mark-device-as-not-new-and-delete-if-from-google-vendor","title":"Example 3: Mark Device as Not New and Delete If from Google Vendor","text":"

This workflow automatically marks newly detected Google devices as not new and deletes them immediately.

"},{"location":"WORKFLOW_EXAMPLES/#use-case_2","title":"\ud83d\udccb Use Case","text":"

You may want to automatically clear out newly detected Google devices (such as Chromecast or Google Home) if they\u2019re not needed in your device database. This workflow handles that clean-up automatically.

"},{"location":"WORKFLOW_EXAMPLES/#workflow-configuration_2","title":"\u2699\ufe0f Workflow Configuration","text":"
{\n  \"name\": \"Mark Device as Not New and Delete If from Google Vendor\",\n  \"trigger\": {\n    \"object_type\": \"Devices\",\n    \"event_type\": \"update\"\n  },\n  \"conditions\": [\n    {\n      \"logic\": \"AND\",\n      \"conditions\": [\n        {\n          \"field\": \"devVendor\",\n          \"operator\": \"contains\",\n          \"value\": \"Google\"\n        },\n        {\n          \"field\": \"devIsNew\",\n          \"operator\": \"equals\",\n          \"value\": \"1\"\n        }\n      ]\n    }\n  ],\n  \"actions\": [\n    {\n      \"type\": \"update_field\",\n      \"field\": \"devIsNew\",\n      \"value\": \"0\"\n    },\n    {\n      \"type\": \"delete_device\"\n    }\n  ],\n  \"enabled\": \"Yes\"\n}\n
"},{"location":"WORKFLOW_EXAMPLES/#explanation_2","title":"\ud83d\udd0d Explanation","text":""},{"location":"WORKFLOW_EXAMPLES/#result_2","title":"\u2705 Result","text":"

Any newly detected Google devices are cleaned up instantly \u2014 first marked as not new, then deleted \u2014 helping you avoid clutter in your device records.

"},{"location":"docker-troubleshooting/excessive-capabilities/","title":"Excessive Capabilities","text":""},{"location":"docker-troubleshooting/excessive-capabilities/#issue-description","title":"Issue Description","text":"

Excessive Linux capabilities are detected beyond the necessary NET_ADMIN, NET_BIND_SERVICE, and NET_RAW. This may indicate overly permissive container configuration.

"},{"location":"docker-troubleshooting/excessive-capabilities/#security-ramifications","title":"Security Ramifications","text":"

While the detected capabilities might not directly harm operation, running with more privileges than necessary increases the attack surface. If the container is compromised, additional capabilities could allow broader system access or privilege escalation.

"},{"location":"docker-troubleshooting/excessive-capabilities/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when your Docker configuration grants more capabilities than required for network monitoring. The application only needs specific network-related capabilities for proper function.

"},{"location":"docker-troubleshooting/excessive-capabilities/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Limit capabilities to only those required:
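In Docker Compose this can be done by dropping all capabilities and adding back only the three required ones. A sketch, to be merged into your own compose file:

```yaml
services:
  netalertx:
    image: "ghcr.io/jokob-sk/netalertx:latest"
    network_mode: "host"
    restart: unless-stopped
    cap_drop:
      - ALL              # drop everything first
    cap_add:
      - NET_ADMIN        # interface and ARP-level operations
      - NET_BIND_SERVICE # bind to privileged ports
      - NET_RAW          # raw sockets for scanning tools
```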

"},{"location":"docker-troubleshooting/excessive-capabilities/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/file-permissions/","title":"File Permission Issues","text":""},{"location":"docker-troubleshooting/file-permissions/#issue-description","title":"Issue Description","text":"

NetAlertX cannot read from or write to critical configuration and database files. This prevents the application from saving data, logs, or configuration changes.

"},{"location":"docker-troubleshooting/file-permissions/#security-ramifications","title":"Security Ramifications","text":"

Incorrect file permissions can expose sensitive configuration data or database contents to unauthorized access. Network monitoring tools handle sensitive information about devices on your network, and improper permissions could lead to information disclosure.

"},{"location":"docker-troubleshooting/file-permissions/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when the mounted volumes for configuration and database files don't have proper ownership or permissions set for the netalertx user (UID 20211). The container expects these files to be accessible by the service account, not root or other users.

"},{"location":"docker-troubleshooting/file-permissions/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Fix permissions on the host system for the mounted directories:
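One way to do this is to hand ownership of the mounted directories to the netalertx service account (UID/GID 20211). The paths below are placeholders; substitute the host directories you map to /data/config and /data/db:

```shell
# Placeholder paths: replace with the host directories you mount into the container.
sudo chown -R 20211:20211 /path/on/host/netalertx/config /path/on/host/netalertx/db
# Ensure the owner can read, write, and traverse the directories.
sudo chmod -R u+rwX /path/on/host/netalertx/config /path/on/host/netalertx/db

# Verify: both commands should print 20211.
stat -c '%u' /path/on/host/netalertx/config
stat -c '%g' /path/on/host/netalertx/db
```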

"},{"location":"docker-troubleshooting/file-permissions/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/incorrect-user/","title":"Incorrect Container User","text":""},{"location":"docker-troubleshooting/incorrect-user/#issue-description","title":"Issue Description","text":"

NetAlertX is running as a UID:GID other than the expected 20211:20211. This bypasses hardened permissions, file ownership, and runtime isolation safeguards.

"},{"location":"docker-troubleshooting/incorrect-user/#security-ramifications","title":"Security Ramifications","text":"

The application is designed with security hardening that depends on running under a dedicated, non-privileged service account. Using a different user account can silently break future upgrades and removes crucial isolation between the container and the host system.

"},{"location":"docker-troubleshooting/incorrect-user/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when you override the container's default user with custom user: directives in docker-compose.yml or --user flags in docker run commands. The container expects to run as the netalertx user for proper security isolation.

"},{"location":"docker-troubleshooting/incorrect-user/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Restore the container to the default user:
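Concretely, remove any user: override from docker-compose.yml (or the --user flag from docker run). A minimal compose sketch without an override:

```yaml
services:
  netalertx:
    image: "ghcr.io/jokob-sk/netalertx:latest"
    # no "user:" entry here: the image's default netalertx account (20211:20211) is used
```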

"},{"location":"docker-troubleshooting/incorrect-user/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/missing-capabilities/","title":"Missing Network Capabilities","text":""},{"location":"docker-troubleshooting/missing-capabilities/#issue-description","title":"Issue Description","text":"

Raw network capabilities (NET_RAW, NET_ADMIN, NET_BIND_SERVICE) are missing. Tools that rely on these capabilities (e.g., nmap -sS, arp-scan, nbtscan) will not function.

"},{"location":"docker-troubleshooting/missing-capabilities/#security-ramifications","title":"Security Ramifications","text":"

Network scanning and monitoring requires low-level network access that these capabilities provide. Without them, the application cannot perform essential functions like ARP scanning, port scanning, or passive network discovery, severely limiting its effectiveness.

"},{"location":"docker-troubleshooting/missing-capabilities/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when the container doesn't have the necessary Linux capabilities granted. Docker containers run with limited capabilities by default, and network monitoring tools need elevated network privileges.

"},{"location":"docker-troubleshooting/missing-capabilities/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Add the required capabilities to your container:

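For example, in docker-compose.yml (a sketch; the fragment goes under your NetAlertX service definition):

cap_add:\n  - NET_RAW\n  - NET_ADMIN\n  - NET_BIND_SERVICE\n

The docker run equivalent is --cap-add=NET_RAW --cap-add=NET_ADMIN --cap-add=NET_BIND_SERVICE.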
"},{"location":"docker-troubleshooting/missing-capabilities/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/mount-configuration-issues/","title":"Mount Configuration Issues","text":""},{"location":"docker-troubleshooting/mount-configuration-issues/#issue-description","title":"Issue Description","text":"

NetAlertX has detected configuration issues with your Docker volume mounts. These may include write permission problems, data loss risks, or performance concerns marked with \u274c in the table.

"},{"location":"docker-troubleshooting/mount-configuration-issues/#security-ramifications","title":"Security Ramifications","text":"

Improper mount configurations can lead to data loss, performance degradation, or security vulnerabilities. For persistent data (database and configuration), using non-persistent storage like tmpfs can result in complete data loss on container restart. For temporary data, using persistent storage may unnecessarily expose sensitive logs or cache data.

"},{"location":"docker-troubleshooting/mount-configuration-issues/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when your Docker Compose or run configuration doesn't properly map host directories to container paths, or when the mounted volumes have incorrect permissions. The application requires specific paths to be writable for operation, and some paths should use persistent storage while others should be temporary.

"},{"location":"docker-troubleshooting/mount-configuration-issues/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Review and correct your volume mounts in docker-compose.yml:

Example volume configuration:

volumes:\n  - ./data/db:/data/db\n  - ./data/config:/data/config\n  - ./data/log:/tmp/log\n
"},{"location":"docker-troubleshooting/mount-configuration-issues/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/network-mode/","title":"Network Mode Configuration","text":""},{"location":"docker-troubleshooting/network-mode/#issue-description","title":"Issue Description","text":"

NetAlertX is not running with --network=host. Bridge networking blocks passive discovery (ARP, NBNS, mDNS) and degrades active scanning accuracy.

"},{"location":"docker-troubleshooting/network-mode/#security-ramifications","title":"Security Ramifications","text":"

Host networking is required for comprehensive network monitoring. Bridge mode isolates the container from raw network access needed for ARP scanning, passive discovery protocols, and accurate device detection. Without host networking, the application cannot fully monitor your network.

"},{"location":"docker-troubleshooting/network-mode/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when your Docker configuration uses bridge networking instead of host networking. Network monitoring requires direct access to the host's network interfaces to perform passive discovery and active scanning.

"},{"location":"docker-troubleshooting/network-mode/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Enable host networking mode:

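In docker-compose.yml this is a single directive on the service (note that ports: mappings are ignored in host mode):

network_mode: host\n

With docker run, pass --network=host instead.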
"},{"location":"docker-troubleshooting/network-mode/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/nginx-configuration-mount/","title":"Nginx Configuration Mount Issues","text":""},{"location":"docker-troubleshooting/nginx-configuration-mount/#issue-description","title":"Issue Description","text":"

You've configured a custom port for NetAlertX, but the required nginx configuration mount is missing or not writable. Without this mount, the container cannot apply your port changes and will fall back to the default port 20211.

"},{"location":"docker-troubleshooting/nginx-configuration-mount/#security-ramifications","title":"Security Ramifications","text":"

Running in read-only mode (as recommended) prevents the container from modifying its own nginx configuration. Without a writable mount, custom port configurations cannot be applied, potentially exposing the service on unintended ports or requiring fallback to defaults.

"},{"location":"docker-troubleshooting/nginx-configuration-mount/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when you set a custom PORT environment variable (other than 20211) but haven't provided a writable mount for nginx configuration. The container needs to write custom nginx config files when running in read-only mode.

"},{"location":"docker-troubleshooting/nginx-configuration-mount/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

If you want to use a custom port, create a bind mount for the nginx configuration:

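A sketch of such a mount; the host path and the container path shown here are illustrative, so check DOCKER_COMPOSE.md for the exact container path your image version expects:

volumes:\n  - ./config/nginx:/etc/nginx/http.d   # container path illustrative\n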
If you don't need a custom port, simply omit the PORT environment variable and the container will use 20211 by default.

"},{"location":"docker-troubleshooting/nginx-configuration-mount/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/port-conflicts/","title":"Port Conflicts","text":""},{"location":"docker-troubleshooting/port-conflicts/#issue-description","title":"Issue Description","text":"

The configured application port (default 20211) or GraphQL API port (default 20212) is already in use by another service. This commonly occurs when you already have another NetAlertX instance running.

"},{"location":"docker-troubleshooting/port-conflicts/#security-ramifications","title":"Security Ramifications","text":"

Port conflicts prevent the application from starting properly, leaving network monitoring services unavailable. Running multiple instances on the same ports can also create configuration confusion and potential security issues if services are inadvertently exposed.

"},{"location":"docker-troubleshooting/port-conflicts/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This error typically occurs when another NetAlertX instance, or a different service, is already bound to the configured ports.

"},{"location":"docker-troubleshooting/port-conflicts/#how-to-correct-the-issue","title":"How to Correct the Issue","text":""},{"location":"docker-troubleshooting/port-conflicts/#check-for-existing-netalertx-instances","title":"Check for Existing NetAlertX Instances","text":"

First, check if you already have NetAlertX running:

# Check for running NetAlertX containers\ndocker ps | grep netalertx\n\n# Check for devcontainer processes\nps aux | grep netalertx\n\n# Check what services are using the ports\nnetstat -tlnp | grep :20211\nnetstat -tlnp | grep :20212\n
"},{"location":"docker-troubleshooting/port-conflicts/#stop-conflicting-instances","title":"Stop Conflicting Instances","text":"

If you find another NetAlertX instance:

# Stop specific container\ndocker stop <container_name>\n\n# Stop all NetAlertX containers\ndocker stop $(docker ps -q --filter ancestor=jokob-sk/netalertx)\n\n# Stop devcontainer services\n# Use VS Code command palette: \"Dev Containers: Rebuild Container\"\n
"},{"location":"docker-troubleshooting/port-conflicts/#configure-different-ports","title":"Configure Different Ports","text":"

If you need multiple instances, configure unique ports:

environment:\n  - PORT=20211          # Main application port\n  - GRAPHQL_PORT=20212  # GraphQL API port\n

For a second instance, use different ports:

environment:\n  - PORT=20213          # Different main port\n  - GRAPHQL_PORT=20214  # Different API port\n
"},{"location":"docker-troubleshooting/port-conflicts/#alternative-use-different-container-names","title":"Alternative: Use Different Container Names","text":"

When running multiple instances, use unique container names:

services:\n  netalertx-primary:\n    # ... existing config\n  netalertx-secondary:\n    # ... config with different ports\n
"},{"location":"docker-troubleshooting/port-conflicts/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/read-only-filesystem/","title":"Read-Only Filesystem Mode","text":""},{"location":"docker-troubleshooting/read-only-filesystem/#issue-description","title":"Issue Description","text":"

The container is running with a read-write root filesystem instead of read-only mode. This weakens the security hardening of the appliance.

"},{"location":"docker-troubleshooting/read-only-filesystem/#security-ramifications","title":"Security Ramifications","text":"

A read-only root filesystem is a security best practice that prevents malicious modifications to the container's filesystem. Running read-write allows an attacker who compromises the container to modify system files or persist malware within it.

"},{"location":"docker-troubleshooting/read-only-filesystem/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when the Docker configuration doesn't mount the root filesystem as read-only. The application is designed as a security appliance that should prevent filesystem modifications.

"},{"location":"docker-troubleshooting/read-only-filesystem/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Enable read-only mode:

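In docker-compose.yml (a sketch; service name illustrative). Writable paths such as the database, configuration, and log directories must still be provided via volume mounts:

services:\n  netalertx:\n    read_only: true\n

With docker run, pass the --read-only flag.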
"},{"location":"docker-troubleshooting/read-only-filesystem/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/running-as-root/","title":"Running as Root User","text":""},{"location":"docker-troubleshooting/running-as-root/#issue-description","title":"Issue Description","text":"

NetAlertX has detected that the container is running with root privileges (UID 0). This configuration bypasses all built-in security hardening measures designed to protect your system.

"},{"location":"docker-troubleshooting/running-as-root/#security-ramifications","title":"Security Ramifications","text":"

Running security-critical applications like network monitoring tools as root grants unrestricted access to your host system. A successful compromise here could jeopardize your entire infrastructure, including other containers, host services, and potentially your network.

"},{"location":"docker-troubleshooting/running-as-root/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This typically occurs when you've explicitly overridden the container's default user in your Docker configuration, such as using user: root or --user 0:0 in docker-compose.yml or docker run commands. The application is designed to run under a dedicated, non-privileged service account for security.

"},{"location":"docker-troubleshooting/running-as-root/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Switch to the dedicated 'netalertx' user by removing any custom user directives:

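For example, in docker-compose.yml, delete any override such as the following (service name illustrative):

services:\n  netalertx:\n    # Delete this override if present:\n    # user: root\n

For docker run, drop the --user 0:0 flag from the command.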
After making these changes, restart the container. The application will automatically adjust ownership of required directories.

"},{"location":"docker-troubleshooting/running-as-root/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"}]}