{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"NetAlertX Documentation","text":"

Welcome to the official NetAlertX documentation! NetAlertX is a powerful tool designed to simplify the management and monitoring of your network. Below, you will find guides and resources to help you set up, configure, and troubleshoot your NetAlertX instance.

"},{"location":"#in-app-help","title":"In-App Help","text":"

NetAlertX provides contextual help within the application:

"},{"location":"#installation-guides","title":"Installation Guides","text":"

The app can be installed in different ways, with Docker-based deployments being the best supported. This includes the Home Assistant and Unraid installation approaches. See details below.

"},{"location":"#docker-fully-supported","title":"Docker (Fully Supported)","text":"

NetAlertX is fully supported in Docker environments, allowing for easy setup and configuration. Follow the official guide to get started:

This guide will take you through the process of setting up NetAlertX using Docker Compose or standalone Docker commands.
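
A minimal docker run sketch is shown below. The image name, host paths, and host networking are assumptions based on the defaults referenced elsewhere in these docs, so treat the linked guide as the authoritative source:

# sketch only - image name, host paths and network mode are assumptions; see the Docker guide\ndocker run -d --name netalertx \\\n  --network host \\\n  -e TZ=Europe/Berlin \\\n  -v /local/path/config:/app/config \\\n  -v /local/path/db:/app/db \\\n  ghcr.io/jokob-sk/netalertx\n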

"},{"location":"#home-assistant-fully-supported","title":"Home Assistant (Fully Supported)","text":"

You can also install NetAlertX as a Home Assistant add-on via the alexbelgium/hassio-addons repository. This is only possible if you run a supervised instance of Home Assistant. If not, you can still run NetAlertX in a separate Docker container and follow this guide to configure MQTT.

"},{"location":"#unraid-partial-support","title":"Unraid (Partial Support)","text":"

The Unraid template was created by the community, so it's only partially supported. Alternatively, here is another version of the Unraid template.

"},{"location":"#bare-metal-installation-experimental","title":"Bare-Metal Installation (Experimental)","text":"

If you prefer to run NetAlertX on your own hardware, you can try the experimental bare-metal installation. Please note that this method is still under development, and we are looking for maintainers to help improve it.

"},{"location":"#help-and-support","title":"Help and Support","text":"

If you need help or run into issues, here are some resources to guide you:

Before opening an issue, please:

Need more help? Join the community discussions or submit a support request:

"},{"location":"#contributing","title":"Contributing","text":"

NetAlertX is open-source and welcomes contributions from the community! If you'd like to help improve the software, please follow the guidelines below:

For more information on contributing, check out our Dev Guide.

"},{"location":"#stay-updated","title":"Stay Updated","text":"

To keep up with the latest changes and updates to NetAlertX, please refer to the following resources:

Make sure to follow the project on GitHub to get notifications for new releases and important updates.

"},{"location":"#additional-info","title":"Additional info","text":"

If you have any suggestions or improvements, please don\u2019t hesitate to contribute!

NetAlertX is actively maintained. You can find the source code, report bugs, or request new features on our GitHub page.

"},{"location":"API/","title":"NetAlertX API Documentation","text":"

This API provides programmatic access to devices, events, sessions, metrics, network tools, and sync in NetAlertX. It is implemented as a REST and GraphQL server. All requests require authentication via API Token (API_TOKEN setting) unless explicitly noted.

curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n

It runs on 0.0.0.0:<graphql_port> with CORS enabled for all main endpoints.

"},{"location":"API/#authentication","title":"Authentication","text":"

All endpoints require an API token provided in the HTTP headers:

Authorization: Bearer <API_TOKEN>\n

If the token is missing or invalid, the server will return:

{ \"error\": \"Forbidden\" }\n
"},{"location":"API/#base-url","title":"Base URL","text":"
http://<server>:<GRAPHQL_PORT>/\n
"},{"location":"API/#endpoints","title":"Endpoints","text":"

See Testing for example requests and usage.

"},{"location":"API/#notes","title":"Notes","text":""},{"location":"API_DEVICE/","title":"Device API Endpoints","text":"

Manage a single device by its MAC address. Operations include retrieval, updates, deletion, resetting properties, and copying data between devices. All endpoints require authorization via Bearer token.

"},{"location":"API_DEVICE/#1-retrieve-device-details","title":"1. Retrieve Device Details","text":"

Special case: mac=new returns a template for a new device with default values.
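
For example, the template can be fetched with the same GET /device/<mac> route shown in the curl examples further down (a sketch):

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/device/new\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n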

Response (success):

{\n  \"devMac\": \"AA:BB:CC:DD:EE:FF\",\n  \"devName\": \"Net - Huawei\",\n  \"devOwner\": \"Admin\",\n  \"devType\": \"Router\",\n  \"devVendor\": \"Huawei\",\n  \"devStatus\": \"On-line\",\n  \"devSessions\": 12,\n  \"devEvents\": 5,\n  \"devDownAlerts\": 1,\n  \"devPresenceHours\": 32,\n  \"devChildrenDynamic\": [...],\n  \"devChildrenNicsDynamic\": [...],\n  ...\n}\n

Error Responses:

"},{"location":"API_DEVICE/#2-update-device-fields","title":"2. Update Device Fields","text":"

Request Body:

{\n  \"devName\": \"New Device\",\n  \"devOwner\": \"Admin\",\n  \"createNew\": true\n}\n

Behavior:

Response:

{\n  \"success\": true\n}\n

Error Responses:

"},{"location":"API_DEVICE/#3-delete-a-device","title":"3. Delete a Device","text":"

Response:

{\n  \"success\": true\n}\n

Error Responses:

"},{"location":"API_DEVICE/#4-delete-all-events-for-a-device","title":"4. Delete All Events for a Device","text":"

Response:

{\n  \"success\": true\n}\n
"},{"location":"API_DEVICE/#5-reset-device-properties","title":"5. Reset Device Properties","text":"

Request Body: Optional JSON for additional parameters.

Response:

{\n  \"success\": true\n}\n
"},{"location":"API_DEVICE/#6-copy-device-data","title":"6. Copy Device Data","text":"

Request Body:

{\n  \"macFrom\": \"AA:BB:CC:DD:EE:FF\",\n  \"macTo\": \"11:22:33:44:55:66\"\n}\n

Response:

{\n  \"success\": true,\n  \"message\": \"Device copied from AA:BB:CC:DD:EE:FF to 11:22:33:44:55:66\"\n}\n

Error Responses:

"},{"location":"API_DEVICE/#7-update-a-single-column","title":"7. Update a Single Column","text":"

Request Body:

{\n  \"columnName\": \"devName\",\n  \"columnValue\": \"Updated Device Name\"\n}\n

Response (success):

{\n  \"success\": true\n}\n

Error Responses:

"},{"location":"API_DEVICE/#example-curl-requests","title":"Example curl Requests","text":"

Get Device Details:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Update Device Fields:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devName\": \"New Device Name\"}'\n

Delete Device:

curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF/delete\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Copy Device Data:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/device/copy\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"macFrom\":\"AA:BB:CC:DD:EE:FF\",\"macTo\":\"11:22:33:44:55:66\"}'\n

Update Single Column:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF/update-column\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"columnName\":\"devName\",\"columnValue\":\"Updated Device\"}'\n
"},{"location":"API_DEVICES/","title":"Devices Collection API Endpoints","text":"

The Devices Collection API provides operations to retrieve, manage, import/export, and filter devices in bulk. All endpoints require authorization via Bearer token.

"},{"location":"API_DEVICES/#endpoints","title":"Endpoints","text":""},{"location":"API_DEVICES/#1-get-all-devices","title":"1. Get All Devices","text":"

Response (success):

{\n  \"success\": true,\n  \"devices\": [\n    {\n      \"devName\": \"Net - Huawei\",\n      \"devMAC\": \"AA:BB:CC:DD:EE:FF\",\n      \"devIP\": \"192.168.1.1\",\n      \"devType\": \"Router\",\n      \"devFavorite\": 0,\n      \"devStatus\": \"online\"\n    },\n    ...\n  ]\n}\n

Error Responses:

"},{"location":"API_DEVICES/#2-delete-devices-by-mac","title":"2. Delete Devices by MAC","text":"

Request Body:

{\n  \"macs\": [\"AA:BB:CC:DD:EE:FF\", \"11:22:33:*\"]\n}\n

Behavior:

Response:

{\n  \"success\": true,\n  \"deleted_count\": 5\n}\n

Error Responses:

"},{"location":"API_DEVICES/#3-delete-devices-with-empty-macs","title":"3. Delete Devices with Empty MACs","text":"

Response:

{\n  \"success\": true,\n  \"deleted\": 3\n}\n
"},{"location":"API_DEVICES/#4-delete-unknown-devices","title":"4. Delete Unknown Devices","text":"

Response:

{\n  \"success\": true,\n  \"deleted\": 2\n}\n
"},{"location":"API_DEVICES/#5-export-devices","title":"5. Export Devices","text":"

Query Parameter / URL Parameter:

CSV Response:

JSON Response:

{\n  \"data\": [\n    { \"devName\": \"Net - Huawei\", \"devMAC\": \"AA:BB:CC:DD:EE:FF\", ... },\n    ...\n  ],\n  \"columns\": [\"devName\", \"devMAC\", \"devIP\", \"devType\", \"devFavorite\", \"devStatus\"]\n}\n

Error Responses:

"},{"location":"API_DEVICES/#6-import-devices-from-csv","title":"6. Import Devices from CSV","text":"

Request Body (multipart file or JSON with content field):

{\n  \"content\": \"<base64-encoded CSV content>\"\n}\n

Response:

{\n  \"success\": true,\n  \"inserted\": 25,\n  \"skipped_lines\": [3, 7]\n}\n

Error Responses:

"},{"location":"API_DEVICES/#7-get-device-totals","title":"7. Get Device Totals","text":"

Response:

[ \n  120,    // Total devices\n  85,     // Connected\n  5,      // Favorites\n  10,     // New\n  8,      // Down\n  12      // Archived\n]\n

Order: [all, connected, favorites, new, down, archived]

"},{"location":"API_DEVICES/#8-get-devices-by-status","title":"8. Get Devices by Status","text":"

Query Parameter:

Response (success):

[\n  { \"id\": \"AA:BB:CC:DD:EE:FF\", \"title\": \"Net - Huawei\", \"favorite\": 0 },\n  { \"id\": \"11:22:33:44:55:66\", \"title\": \"\u2605 USG Firewall\", \"favorite\": 1 }\n]\n

If devFavorite=1, the title is prepended with a star \u2605.

"},{"location":"API_DEVICES/#example-curl-requests","title":"Example curl Requests","text":"

Get All Devices:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/devices\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Delete Devices by MAC:

curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/devices\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"macs\":[\"AA:BB:CC:DD:EE:FF\",\"11:22:33:*\"]}'\n

Export Devices CSV:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/devices/export?format=csv\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Import Devices from CSV:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/devices/import\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -F \"file=@devices.csv\"\n

Get Devices by Status:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/devices/by-status?status=online\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_EVENTS/","title":"Events API Endpoints","text":"

The Events API provides access to device event logs, allowing creation, retrieval, deletion, and summary of events over time.

"},{"location":"API_EVENTS/#endpoints","title":"Endpoints","text":""},{"location":"API_EVENTS/#1-create-event","title":"1. Create Event","text":"

Request Body (JSON):

{\n  \"ip\": \"192.168.1.10\",\n  \"event_type\": \"Device Down\",\n  \"additional_info\": \"Optional info about the event\",\n  \"pending_alert\": 1,\n  \"event_time\": \"2025-08-24T12:00:00Z\"\n}\n

Response (JSON):

{\n  \"success\": true,\n  \"message\": \"Event created for 00:11:22:33:44:55\"\n}\n
"},{"location":"API_EVENTS/#2-get-events","title":"2. Get Events","text":"
/events?mac=<mac>\n

Response:

{\n  \"success\": true,\n  \"events\": [\n    {\n      \"eve_MAC\": \"00:11:22:33:44:55\",\n      \"eve_IP\": \"192.168.1.10\",\n      \"eve_DateTime\": \"2025-08-24T12:00:00Z\",\n      \"eve_EventType\": \"Device Down\",\n      \"eve_AdditionalInfo\": \"\",\n      \"eve_PendingAlertEmail\": 1\n    }\n  ]\n}\n
"},{"location":"API_EVENTS/#3-delete-events","title":"3. Delete Events","text":"

Response:

{\n  \"success\": true,\n  \"message\": \"Deleted events older than 30 days\"\n}\n
"},{"location":"API_EVENTS/#4-event-totals-over-a-period","title":"4. Event Totals Over a Period","text":"

Query Parameters:

Parameter Description period Time period for totals, e.g., \"7 days\", \"1 month\", \"1 year\", \"100 years\"

Sample Response (JSON Array):

[120, 85, 5, 10, 3, 7]\n

Meaning of Values:

  1. Total events in the period
  2. Total sessions
  3. Missing sessions
  4. Voided events (eve_EventType LIKE 'VOIDED%')
  5. New device events (eve_EventType LIKE 'New Device')
  6. Device down events (eve_EventType LIKE 'Device Down')
"},{"location":"API_EVENTS/#notes","title":"Notes","text":"
{ \"error\": \"Forbidden\" }\n
"},{"location":"API_EVENTS/#example-curl-requests","title":"Example curl Requests","text":"

Create Event:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/events/create/00:11:22:33:44:55\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\n    \"ip\": \"192.168.1.10\",\n    \"event_type\": \"Device Down\",\n    \"additional_info\": \"Power outage\",\n    \"pending_alert\": 1\n  }'\n

Get Events for a Device:

curl \"http://<server_ip>:<GRAPHQL_PORT>/events?mac=00:11:22:33:44:55\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Delete Events Older Than 30 Days:

curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/events/30\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Get Event Totals for 7 Days:

curl \"http://<server_ip>:<GRAPHQL_PORT>/sessions/totals?period=7 days\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_GRAPHQL/","title":"GraphQL API Endpoint","text":"

GraphQL queries are read-optimized for speed. Data may be slightly out of date until the file system cache refreshes.

"},{"location":"API_GRAPHQL/#endpoints","title":"Endpoints","text":""},{"location":"API_GRAPHQL/#sample-query","title":"Sample Query","text":"
query GetDevices($options: PageQueryOptionsInput) {\n  devices(options: $options) {\n    devices {\n      rowid\n      devMac\n      devName\n      devOwner\n      devType\n      devVendor\n      devLastConnection\n      devStatus\n    }\n    count\n  }\n}\n

See also: Debugging GraphQL issues

"},{"location":"API_GRAPHQL/#curl-example","title":"curl Example","text":"
curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n
"},{"location":"API_GRAPHQL/#query-parameters","title":"Query Parameters","text":"Parameter Description page Page number of results to fetch. limit Number of results per page. sort Sorting options (field = field name, order = asc or desc). search Term to filter devices. status Filter devices by status: my_devices, connected, favorites, new, down, archived, offline."},{"location":"API_GRAPHQL/#notes-on-curl","title":"Notes on curl","text":""},{"location":"API_GRAPHQL/#sample-response","title":"Sample Response","text":"
{\n  \"data\": {\n    \"devices\": {\n      \"devices\": [\n        {\n          \"rowid\": 1,\n          \"devMac\": \"00:11:22:33:44:55\",\n          \"devName\": \"Device 1\",\n          \"devOwner\": \"Owner 1\",\n          \"devType\": \"Type 1\",\n          \"devVendor\": \"Vendor 1\",\n          \"devLastConnection\": \"2025-01-01T00:00:00Z\",\n          \"devStatus\": \"connected\"\n        },\n        {\n          \"rowid\": 2,\n          \"devMac\": \"66:77:88:99:AA:BB\",\n          \"devName\": \"Device 2\",\n          \"devOwner\": \"Owner 2\",\n          \"devType\": \"Type 2\",\n          \"devVendor\": \"Vendor 2\",\n          \"devLastConnection\": \"2025-01-02T00:00:00Z\",\n          \"devStatus\": \"connected\"\n        }\n      ],\n      \"count\": 2\n    }\n  }\n}\n
"},{"location":"API_METRICS/","title":"Metrics API Endpoint","text":"

The /metrics endpoint exposes Prometheus-compatible metrics for NetAlertX, including aggregate device counts and per-device status.

"},{"location":"API_METRICS/#endpoint-details","title":"Endpoint Details","text":""},{"location":"API_METRICS/#example-output","title":"Example Output","text":"
netalertx_connected_devices 31\nnetalertx_offline_devices 54\nnetalertx_down_devices 0\nnetalertx_new_devices 0\nnetalertx_archived_devices 31\nnetalertx_favorite_devices 2\nnetalertx_my_devices 54\n\nnetalertx_device_status{device=\"Net - Huawei\", mac=\"Internet\", ip=\"1111.111.111.111\", vendor=\"None\", first_connection=\"2021-01-01 00:00:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Router\", device_status=\"Online\"} 1\nnetalertx_device_status{device=\"Net - USG\", mac=\"74:ac:74:ac:74:ac\", ip=\"192.168.1.1\", vendor=\"Ubiquiti Networks Inc.\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-06-07 08:16:49\", dev_type=\"Firewall\", device_status=\"Archived\"} 1\nnetalertx_device_status{device=\"Raspberry Pi 4 LAN\", mac=\"74:ac:74:ac:74:74\", ip=\"192.168.1.9\", vendor=\"Raspberry Pi Trading Ltd\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Singleboard Computer (SBC)\", device_status=\"Online\"} 1\n...\n
"},{"location":"API_METRICS/#metrics-overview","title":"Metrics Overview","text":""},{"location":"API_METRICS/#1-aggregate-device-counts","title":"1. Aggregate Device Counts","text":"Metric Description netalertx_connected_devices Devices currently connected netalertx_offline_devices Devices currently offline netalertx_down_devices Down/unreachable devices netalertx_new_devices Recently detected devices netalertx_archived_devices Archived devices netalertx_favorite_devices User-marked favorites netalertx_my_devices Devices associated with the current user"},{"location":"API_METRICS/#2-per-device-status","title":"2. Per-Device Status","text":"

Metric: netalertx_device_status Each device has labels:

Metric value is always 1 (presence indicator).

"},{"location":"API_METRICS/#querying-with-curl","title":"Querying with curl","text":"
curl 'http://<server_ip>:<GRAPHQL_PORT>/metrics' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: text/plain'\n

Replace placeholders:

"},{"location":"API_METRICS/#prometheus-scraping-configuration","title":"Prometheus Scraping Configuration","text":"
scrape_configs:\n  - job_name: 'netalertx'\n    metrics_path: /metrics\n    scheme: http\n    scrape_interval: 60s\n    static_configs:\n      - targets: ['<server_ip>:<GRAPHQL_PORT>']\n    authorization:\n      type: Bearer\n      credentials: <API_TOKEN>\n
"},{"location":"API_METRICS/#grafana-dashboard-template","title":"Grafana Dashboard Template","text":"

Sample template JSON: Download

"},{"location":"API_NETTOOLS/","title":"Net Tools API Endpoints","text":"

The Net Tools API provides network diagnostic utilities, including Wake-on-LAN, traceroute, speed testing, DNS resolution, nmap scanning, and internet connection information.

All endpoints require authorization via Bearer token.

"},{"location":"API_NETTOOLS/#endpoints","title":"Endpoints","text":""},{"location":"API_NETTOOLS/#1-wake-on-lan","title":"1. Wake-on-LAN","text":"

Request Body (JSON):

{\n  \"devMac\": \"AA:BB:CC:DD:EE:FF\"\n}\n

Response (success):

{\n  \"success\": true,\n  \"message\": \"WOL packet sent\",\n  \"output\": \"Sent magic packet to AA:BB:CC:DD:EE:FF\"\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#2-traceroute","title":"2. Traceroute","text":"

Request Body:

{\n  \"devLastIP\": \"192.168.1.1\"\n}\n

Response (success):

{\n  \"success\": true,\n  \"output\": \"traceroute output as string\"\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#3-speedtest","title":"3. Speedtest","text":"

Response (success):

{\n  \"success\": true,\n  \"output\": [\n    \"Ping: 15 ms\",\n    \"Download: 120.5 Mbit/s\",\n    \"Upload: 22.4 Mbit/s\"\n  ]\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#4-dns-lookup-nslookup","title":"4. DNS Lookup (nslookup)","text":"

Request Body:

{\n  \"devLastIP\": \"8.8.8.8\"\n}\n

Response (success):

{\n  \"success\": true,\n  \"output\": [\n    \"Server: 8.8.8.8\",\n    \"Address: 8.8.8.8#53\",\n    \"Name: google-public-dns-a.google.com\"\n  ]\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#5-nmap-scan","title":"5. Nmap Scan","text":"

Request Body:

{\n  \"scan\": \"192.168.1.0/24\",\n  \"mode\": \"fast\"\n}\n

Supported Modes:

Mode nmap Arguments fast -F normal default detail -A skipdiscovery -Pn

Response (success):

{\n  \"success\": true,\n  \"mode\": \"fast\",\n  \"ip\": \"192.168.1.0/24\",\n  \"output\": [\n    \"Starting Nmap 7.91\",\n    \"Host 192.168.1.1 is up\",\n    \"... scan results ...\"\n  ]\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#6-internet-connection-info","title":"6. Internet Connection Info","text":"

Response (success):

{\n  \"success\": true,\n  \"output\": \"IP: 203.0.113.5 City: Sydney Country: AU Org: Example ISP\"\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#example-curl-requests","title":"Example curl Requests","text":"

Wake-on-LAN:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/wakeonlan\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devMac\":\"AA:BB:CC:DD:EE:FF\"}'\n

Traceroute:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/traceroute\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devLastIP\":\"192.168.1.1\"}'\n

Speedtest:

curl \"http://<server_ip>:<GRAPHQL_PORT>/nettools/speedtest\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Nslookup:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/nslookup\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devLastIP\":\"8.8.8.8\"}'\n

Nmap Scan:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/nmap\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"scan\":\"192.168.1.0/24\",\"mode\":\"fast\"}'\n

Internet Info:

curl \"http://<server_ip>:<GRAPHQL_PORT>/nettools/internetinfo\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_OLD/","title":"API endpoints","text":"

Note

Some of these endpoints will be deprecated soon. Please refer to the new API endpoints docs for details on the new API layer.

NetAlertX comes with a couple of API endpoints. All requests need to be authorized (executed in a logged-in browser session), or you have to pass the value of the API_TOKEN setting as an authorization bearer token, for example:

curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n
"},{"location":"API_OLD/#api-endpoint-graphql","title":"API Endpoint: GraphQL","text":""},{"location":"API_OLD/#example-query-to-fetch-devices","title":"Example Query to Fetch Devices","text":"

First, let's define the GraphQL query to fetch devices with pagination and sorting options.

query GetDevices($options: PageQueryOptionsInput) {\n  devices(options: $options) {\n    devices {\n      rowid\n      devMac\n      devName\n      devOwner\n      devType\n      devVendor\n      devLastConnection\n      devStatus\n    }\n    count\n  }\n}\n

See also: Debugging GraphQL issues

"},{"location":"API_OLD/#curl-command","title":"curl Command","text":"

You can use the following curl command to execute the query.

curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n
"},{"location":"API_OLD/#explanation","title":"Explanation:","text":"
  1. GraphQL Query:

     * The query parameter contains the GraphQL query as a string.
     * The variables parameter contains the input variables for the query.

  2. Query Variables:

     * page: Specifies the page number of results to fetch.
     * limit: Specifies the number of results per page.
     * sort: Specifies the sorting options, with field being the field to sort by and order being the sort order (asc for ascending or desc for descending).
     * search: A search term to filter the devices.
     * status: The status filter to apply (valid values are my_devices (determined by the UI_MY_DEVICES setting), connected, favorites, new, down, archived, offline).

  3. curl Command:

     * The -X POST option specifies that we are making a POST request.
     * The -H \"Content-Type: application/json\" option sets the content type of the request to JSON.
     * The -d option provides the request payload, which includes the GraphQL query and variables.
"},{"location":"API_OLD/#sample-response","title":"Sample Response","text":"

The response will be in JSON format, similar to the following:

{\n  \"data\": {\n    \"devices\": {\n      \"devices\": [\n        {\n          \"rowid\": 1,\n          \"devMac\": \"00:11:22:33:44:55\",\n          \"devName\": \"Device 1\",\n          \"devOwner\": \"Owner 1\",\n          \"devType\": \"Type 1\",\n          \"devVendor\": \"Vendor 1\",\n          \"devLastConnection\": \"2025-01-01T00:00:00Z\",\n          \"devStatus\": \"connected\"\n        },\n        {\n          \"rowid\": 2,\n          \"devMac\": \"66:77:88:99:AA:BB\",\n          \"devName\": \"Device 2\",\n          \"devOwner\": \"Owner 2\",\n          \"devType\": \"Type 2\",\n          \"devVendor\": \"Vendor 2\",\n          \"devLastConnection\": \"2025-01-02T00:00:00Z\",\n          \"devStatus\": \"connected\"\n        }\n      ],\n      \"count\": 2\n    }\n  }\n}\n
"},{"location":"API_OLD/#api-endpoint-json-files","title":"API Endpoint: JSON files","text":"

This API endpoint retrieves static files that are periodically updated.

"},{"location":"API_OLD/#when-are-the-endpoints-updated","title":"When are the endpoints updated","text":"

The endpoint files are regenerated whenever the objects they contain change.

"},{"location":"API_OLD/#location-of-the-endpoints","title":"Location of the endpoints","text":"

In the container, these files are located under the /app/api/ folder. You can access them via the /php/server/query_json.php?file=user_notifications.json endpoint.
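
A sketch of fetching one of these files with curl; the web UI port placeholder and the bearer-token header are assumptions based on the authorization note at the top of this page:

curl \"http://<server_ip>:<WEB_UI_PORT>/php/server/query_json.php?file=table_devices.json\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n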

"},{"location":"API_OLD/#available-endpoints","title":"Available endpoints","text":"

You can access the following files:

File name Description notification_json_final.json The json version of the last notification (e.g. used for webhooks - sample JSON). table_devices.json All of the available Devices detected by the app. table_plugins_events.json The list of the unprocessed (pending) notification events (plugins_events DB table). table_plugins_history.json The list of notification events history. table_plugins_objects.json The content of the plugins_objects table. Find more info on the Plugin system here language_strings.json The content of the language_strings table, which in turn is loaded from the plugins config.json definitions. table_custom_endpoint.json A custom endpoint generated by the SQL query specified by the API_CUSTOM_SQL setting. table_settings.json The content of the settings table. app_state.json Contains the current application state."},{"location":"API_OLD/#json-data-format","title":"JSON Data format","text":"

The endpoints starting with the table_ prefix contain most, if not all, data contained in the corresponding database table. The common format for those is:

{\n  \"data\": [\n        {\n          \"db_column_name\": \"data\",\n          \"db_column_name2\": \"data2\"      \n        }, \n        {\n          \"db_column_name\": \"data3\",\n          \"db_column_name2\": \"data4\" \n        }\n    ]\n}\n\n

Example JSON of the table_devices.json endpoint with two Devices (database rows):

{\n  \"data\": [\n        {\n          \"devMac\": \"Internet\",\n          \"devName\": \"Net - Huawei\",\n          \"devType\": \"Router\",\n          \"devVendor\": null,\n          \"devGroup\": \"Always on\",\n          \"devFirstConnection\": \"2021-01-01 00:00:00\",\n          \"devLastConnection\": \"2021-01-28 22:22:11\",\n          \"devLastIP\": \"192.168.1.24\",\n          \"devStaticIP\": 0,\n          \"devPresentLastScan\": 1,\n          \"devLastNotification\": \"2023-01-28 22:22:28.998715\",\n          \"devIsNew\": 0,\n          \"devParentMAC\": \"\",\n          \"devParentPort\": \"\",\n          \"devIcon\": \"globe\"\n        }, \n        {\n          \"devMac\": \"a4:8f:ff:aa:ba:1f\",\n          \"devName\": \"Net - USG\",\n          \"devType\": \"Firewall\",\n          \"devVendor\": \"Ubiquiti Inc\",\n          \"devGroup\": \"\",\n          \"devFirstConnection\": \"2021-02-12 22:05:00\",\n          \"devLastConnection\": \"2021-07-17 15:40:00\",\n          \"devLastIP\": \"192.168.1.1\",\n          \"devStaticIP\": 1,\n          \"devPresentLastScan\": 1,\n          \"devLastNotification\": \"2021-07-17 15:40:10.667717\",\n          \"devIsNew\": 0,\n          \"devParentMAC\": \"Internet\",\n          \"devParentPort\": 1,\n          \"devIcon\": \"shield-halved\"\n      }\n    ]\n}\n\n
"},{"location":"API_OLD/#api-endpoint-prometheus-exporter","title":"API Endpoint: Prometheus Exporter","text":""},{"location":"API_OLD/#example-output-of-the-metrics-endpoint","title":"Example Output of the /metrics Endpoint","text":"

Below is a representative snippet of the metrics you may find when querying the /metrics endpoint for netalertx. It includes both aggregate counters and device_status labels per device.

netalertx_connected_devices 31\nnetalertx_offline_devices 54\nnetalertx_down_devices 0\nnetalertx_new_devices 0\nnetalertx_archived_devices 31\nnetalertx_favorite_devices 2\nnetalertx_my_devices 54\n\nnetalertx_device_status{device=\"Net - Huawei\", mac=\"Internet\", ip=\"1111.111.111.111\", vendor=\"None\", first_connection=\"2021-01-01 00:00:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Router\", device_status=\"Online\"} 1\nnetalertx_device_status{device=\"Net - USG\", mac=\"74:ac:74:ac:74:ac\", ip=\"192.168.1.1\", vendor=\"Ubiquiti Networks Inc.\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-06-07 08:16:49\", dev_type=\"Firewall\", device_status=\"Archived\"} 1\nnetalertx_device_status{device=\"Raspberry Pi 4 LAN\", mac=\"74:ac:74:ac:74:74\", ip=\"192.168.1.9\", vendor=\"Raspberry Pi Trading Ltd\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Singleboard Computer (SBC)\", device_status=\"Online\"} 1\n...\n
"},{"location":"API_OLD/#metrics-explanation","title":"Metrics Explanation","text":""},{"location":"API_OLD/#1-aggregate-device-counts","title":"1. Aggregate Device Counts","text":"

Metric names prefixed with netalertx_ provide aggregated counts by device status:

These numeric values give a high-level overview of device distribution.

"},{"location":"API_OLD/#2-perdevice-status-with-labels","title":"2. Per\u2011Device Status with Labels","text":"

Each individual device is represented by a netalertx_device_status metric, with descriptive labels:

The metric value is always 1 (indicating presence or active state) and the combination of labels identifies the device.

"},{"location":"API_OLD/#how-to-query-with-curl","title":"How to Query with curl","text":"

To fetch the metrics from the NetAlertX exporter:

curl 'http://<server_ip>:<GRAPHQL_PORT>/metrics' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: text/plain'\n

Replace:

"},{"location":"API_OLD/#summary","title":"Summary","text":""},{"location":"API_OLD/#prometheus-scraping-configuration","title":"Prometheus Scraping Configuration","text":"
scrape_configs:\n  - job_name: 'netalertx'\n    metrics_path: /metrics\n    scheme: http\n    scrape_interval: 60s\n    static_configs:\n      - targets: ['<server_ip>:<GRAPHQL_PORT>']\n    authorization:\n      type: Bearer\n      credentials: <API_TOKEN>\n
"},{"location":"API_OLD/#grafana-template","title":"Grafana template","text":"

Grafana template sample: Download json

"},{"location":"API_OLD/#api-endpoint-log-files","title":"API Endpoint: /log files","text":"

This API endpoint retrieves files from the /app/log folder.

File Description IP_changes.log Logs of IP address changes app.log Main application log app.php_errors.log PHP error log app_front.log Frontend application log app_nmap.log Logs of Nmap scan results db_is_locked.log Logs when the database is locked execution_queue.log Logs of execution queue activities plugins/ Directory for temporary plugin-related files (not accessible) report_output.html HTML report output report_output.json JSON format report output report_output.txt Text format report output stderr.log Logs of standard error output stdout.log Logs of standard output"},{"location":"API_OLD/#api-endpoint-config-files","title":"API Endpoint: /config files","text":"

To retrieve files from the /app/config folder.

File Description devices.csv Devices csv file app.conf Application config file"},{"location":"API_ONLINEHISTORY/","title":"Online History API Endpoints","text":"

Manage the online history records of devices. Currently, the API supports deletion of all history entries. All endpoints require authorization.

"},{"location":"API_ONLINEHISTORY/#1-delete-online-history","title":"1. Delete Online History","text":"

Response (success):

{\n  \"success\": true,\n  \"message\": \"Deleted online history\"\n}\n

Error Responses:

"},{"location":"API_ONLINEHISTORY/#example-curl-request","title":"Example curl Request","text":"
curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/history\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_ONLINEHISTORY/#implementation-details","title":"Implementation Details","text":"

The endpoint calls the helper function delete_online_history():

def delete_online_history():\n    \"\"\"Delete all online history activity\"\"\"\n\n    conn = get_temp_db_connection()\n    cur = conn.cursor()\n\n    # Remove all entries from Online_History table\n    cur.execute(\"DELETE FROM Online_History\")\n\n    conn.commit()\n    conn.close()\n\n    return jsonify({\"success\": True, \"message\": \"Deleted online history\"})\n
"},{"location":"API_SESSIONS/","title":"SEssions API Endpoints","text":"

Track device connection sessions.

json { \"mac\": \"AA:BB:CC:DD:EE:FF\", \"ip\": \"192.168.1.10\", \"start_time\": \"2025-08-01T10:00:00\" } * DELETE /sessions/delete \u2192 Delete session by MAC

json { \"mac\": \"AA:BB:CC:DD:EE:FF\" } * GET /sessions/list?mac=<mac>&start_date=2025-08-01&end_date=2025-08-21 \u2192 List sessions * GET /sessions/calendar?start=2025-08-01&end=2025-08-21 \u2192 Calendar view of sessions * GET /sessions/<mac>?period=1 day \u2192 Sessions for a device * GET /sessions/session-events?type=all&period=7 days \u2192 Session events summary

"},{"location":"API_SYNC/","title":"Sync API Endpoint","text":"

The /sync endpoint is used by the SYNC plugin to synchronize data between multiple NetAlertX instances (e.g., from a node to a hub). It supports both GET and POST requests.

"},{"location":"API_SYNC/#91-get-sync","title":"9.1 GET /sync","text":"

Fetches data from a node to the hub. The data is returned as a base64-encoded JSON file.

Example Request:

curl 'http://<server>:<GRAPHQL_PORT>/sync' \\\n  -H 'Authorization: Bearer <API_TOKEN>'\n

Response Example:

{\n  \"node_name\": \"NODE-01\",\n  \"status\": 200,\n  \"message\": \"OK\",\n  \"data_base64\": \"eyJkZXZpY2VzIjogW3siZGV2TWFjIjogIjAwOjExOjIyOjMzOjQ0OjU1IiwiZGV2TmFtZSI6ICJEZXZpY2UgMSJ9XSwgImNvdW50Ijog1fQ==\",\n  \"timestamp\": \"2025-08-24T10:15:00+10:00\"\n}\n

Notes:

"},{"location":"API_SYNC/#92-post-sync","title":"9.2 POST /sync","text":"

Used by a node to send data to the hub. The hub receives form-encoded data and stores it for processing.

Required Form Fields:

Field Description data The payload (plain text or JSON) node_name Name of the node sending the data plugin The plugin name generating the data

Example Request (cURL):

curl -X POST 'http://<server>:<GRAPHQL_PORT>/sync' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -F 'data=<payload here>' \\\n  -F 'node_name=NODE-01' \\\n  -F 'plugin=SYNC'\n

Response Example:

{\n  \"message\": \"Data received and stored successfully\"\n}\n

Storage Details:

last_result.<plugin>.encoded.<node_name>.<sequence>.log\n
"},{"location":"API_SYNC/#93-notes-and-best-practices","title":"9.3 Notes and Best Practices","text":""},{"location":"API_TESTS/","title":"Tests","text":""},{"location":"API_TESTS/#unit-tests","title":"Unit Tests","text":"

Warning

Please note that these tests modify data in the database.

  1. See the /test directory for available test cases. These are not exhaustive but cover the main API endpoints.
  2. To run a test case, SSH into the container: sudo docker exec -it netalertx /bin/bash
  3. Inside the container, install pytest (if not already installed): pip install pytest
  4. Run a specific test case: pytest /app/test/TESTFILE.py
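
The same steps as a single copy-paste block (container name as used above):

sudo docker exec -it netalertx /bin/bash\n# inside the container:\npip install pytest\npytest /app/test/TESTFILE.py\n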
"},{"location":"AUTHELIA/","title":"Authelia","text":""},{"location":"AUTHELIA/#authelia-support","title":"Authelia support","text":"

Warning

This is community contributed content and work in progress. Contributions are welcome.

theme: dark\n\ndefault_2fa_method: \"totp\"\n\nserver:\n  address: 0.0.0.0:9091\n  endpoints:\n    enable_expvars: false\n    enable_pprof: false\n    authz:\n      forward-auth:\n        implementation: 'ForwardAuth'\n        authn_strategies:\n          - name: 'HeaderAuthorization'\n            schemes:\n              - 'Basic'\n          - name: 'CookieSession'\n      ext-authz:\n        implementation: 'ExtAuthz'\n        authn_strategies:\n          - name: 'HeaderAuthorization'\n            schemes:\n              - 'Basic'\n          - name: 'CookieSession'\n      auth-request:\n        implementation: 'AuthRequest'\n        authn_strategies:\n          - name: 'HeaderAuthRequestProxyAuthorization'\n            schemes:\n              - 'Basic'\n          - name: 'CookieSession'\n      legacy:\n        implementation: 'Legacy'\n        authn_strategies:\n          - name: 'HeaderLegacy'\n          - name: 'CookieSession'\n  disable_healthcheck: false\n  tls:\n    key: \"\"\n    certificate: \"\"\n    client_certificates: []\n  headers:\n    csp_template: \"\"\n\nlog:\n  ## Level of verbosity for logs: info, debug, trace.\n  level: info\n\n###############################################################\n# The most important section\n###############################################################\naccess_control:\n  ## Default policy can either be 'bypass', 'one_factor', 'two_factor' or 'deny'.\n  default_policy: deny\n  networks:\n    - name: internal\n      networks:\n        - '192.168.0.0/18'\n        - '10.10.10.0/8' # Zerotier\n    - name: private\n      networks:\n        - '172.16.0.0/12'\n  rules:\n    - networks:\n        - private\n      domain:\n        - '*'\n      policy: bypass\n    - networks:\n        - internal\n      domain:\n        - '*'\n      policy: bypass\n    - domain:\n        # exclude itself from auth, should not happen as we use Traefik middleware on a case-by-case screnario\n        - 'auth.MYDOMAIN1.TLD'\n        - 'authelia.MYDOMAIN1.TLD'\n        - 'auth.MYDOMAIN2.TLD'\n        - 'authelia.MYDOMAIN2.TLD'\n      policy: bypass\n    - domain:\n        #All subdomains match\n        - 'MYDOMAIN1.TLD'\n        - '*.MYDOMAIN1.TLD'\n      policy: two_factor\n    - domain:\n        # This will not work yet as Authelio does not support multi-domain authentication\n        - 'MYDOMAIN2.TLD'\n        - '*.MYDOMAIN2.TLD'\n      policy: two_factor\n\n\n############################################################\nidentity_validation:\n  reset_password:\n    jwt_secret: \"[REDACTED]\"\n\nidentity_providers:\n  oidc:\n    enable_client_debug_messages: true\n    enforce_pkce: public_clients_only\n    hmac_secret: [REDACTED]\n    lifespans:\n      authorize_code: 1m\n      id_token: 1h\n      refresh_token: 90m\n      access_token: 1h\n    cors:\n      endpoints:\n        - authorization\n        - token\n        - revocation\n        - introspection\n        - userinfo\n      allowed_origins:\n        - \"*\"\n      allowed_origins_from_client_redirect_uris: false\n    jwks:\n      - key: [REDACTED]\n        certificate_chain:\n    clients:\n      - client_id: portainer\n        client_name: Portainer\n        # generate secret with \"authelia crypto hash generate pbkdf2 --random --random.length 32 --random.charset alphanumeric\"\n        # Random Password: [REDACTED]\n        # Digest: [REDACTED]\n        client_secret: [REDACTED]\n        token_endpoint_auth_method: 'client_secret_post'\n        public: false\n        authorization_policy: two_factor\n        
consent_mode: pre-configured #explicit\n        pre_configured_consent_duration: '6M' #Must be re-authorised every 6 Months\n        scopes:\n          - openid\n          #- groups #Currently not supported in Authelia V\n          - email\n          - profile\n        redirect_uris:\n          - https://portainer.MYDOMAIN1.LTD\n        userinfo_signed_response_alg: none\n\n      - client_id: openproject\n        client_name: OpenProject\n        # generate secret with \"authelia crypto hash generate pbkdf2 --random --random.length 32 --random.charset alphanumeric\"\n        # Random Password: [REDACTED]\n        # Digest: [REDACTED]\n        client_secret: [REDACTED]\n        token_endpoint_auth_method: 'client_secret_basic'\n        public: false\n        authorization_policy: two_factor\n        consent_mode: pre-configured #explicit\n        pre_configured_consent_duration: '6M' #Must be re-authorised every 6 Months\n        scopes:\n          - openid\n          #- groups #Currently not supported in Authelia V\n          - email\n          - profile\n        redirect_uris:\n          - https://op.MYDOMAIN.TLD\n        #grant_types:\n        #  - refresh_token\n        #  - authorization_code\n        #response_types:\n        #  - code\n        #response_modes:\n        #  - form_post\n        #  - query\n        #  - fragment\n        userinfo_signed_response_alg: none\n##################################################################\n\n\ntelemetry:\n  metrics:\n    enabled: false\n    address: tcp://0.0.0.0:9959\n\ntotp:\n  disable: false\n  issuer: authelia.com\n  algorithm: sha1\n  digits: 6\n  period: 30 ## The period in seconds a one-time password is valid for.\n  skew: 1\n  secret_size: 32\n\nwebauthn:\n  disable: false\n  timeout: 60s ## Adjust the interaction timeout for Webauthn dialogues.\n  display_name: Authelia\n  attestation_conveyance_preference: indirect\n  user_verification: preferred\n\nntp:\n  address: \"pool.ntp.org\"\n  version: 4\n  max_desync: 5s\n  disable_startup_check: false\n  disable_failure: false\n\nauthentication_backend:\n  password_reset:\n    disable: false\n    custom_url: \"\"\n  refresh_interval: 5m\n  file:\n    path: /config/users_database.yml\n    watch: true\n    password:\n      algorithm: argon2\n      argon2:\n        variant: argon2id\n        iterations: 3\n        memory: 65536\n        parallelism: 4\n        key_length: 32\n        salt_length: 16\n\npassword_policy:\n  standard:\n    enabled: false\n    min_length: 8\n    max_length: 0\n    require_uppercase: true\n    require_lowercase: true\n    require_number: true\n    require_special: true\n  ## zxcvbn is a well known and used password strength algorithm. 
It does not have tunable settings.\n  zxcvbn:\n    enabled: false\n    min_score: 3\n\nregulation:\n  max_retries: 3\n  find_time: 2m\n  ban_time: 5m\n\nsession:\n  name: authelia_session\n  secret: [REDACTED]\n  expiration: 60m\n  inactivity: 15m\n  cookies:\n    - domain: 'MYDOMAIN1.LTD'\n      authelia_url: 'https://auth.MYDOMAIN1.LTD'\n      name: 'authelia_session'\n      default_redirection_url: 'https://MYDOMAIN1.LTD'\n    - domain: 'MYDOMAIN2.LTD'\n      authelia_url: 'https://auth.MYDOMAIN2.LTD'\n      name: 'authelia_session_other'\n      default_redirection_url: 'https://MYDOMAIN2.LTD'\n\nstorage:\n  encryption_key: [REDACTED]\n  local:\n    path: /config/db.sqlite3\n\nnotifier:\n  disable_startup_check: true\n  smtp:\n    address: MYOTHERDOMAIN.LTD:465\n    timeout: 5s\n    username: \"USER@DOMAIN\"\n    password: \"[REDACTED]\"\n    sender: \"Authelia <postmaster@MYOTHERDOMAIN.LTD>\"\n    identifier: NAME@MYOTHERDOMAIN.LTD\n    subject: \"[Authelia] {title}\"\n    startup_check_address: postmaster@MYOTHERDOMAIN.LTD\n\n
"},{"location":"BACKUPS/","title":"Backing things up","text":"

Note

To back up 99% of your configuration, back up at least the /app/config folder. Please read the whole page (or at least \"Scenario 2: Corrupted database\") for details. Note that database definitions might change over time. The safest approach is to restore older backups into the same version of the app they were taken from and then gradually upgrade between releases to the latest version.

There are 4 artifacts that can be used to back up the application:

File Description Limitations /db/app.db Database file(s) The database file might be in an uncommitted state or corrupted /config/app.conf Configuration file Can be overridden with the APP_CONF_OVERRIDE env variable. /config/devices.csv CSV file containing device information Doesn't contain historical data /config/workflows.json A JSON file containing your workflows N/A"},{"location":"BACKUPS/#backup-strategies","title":"Backup strategies","text":"

The safest approach to backups is to back up everything by taking regular file-system backups of the /db and /config folders (I use Kopia).

Arguably, the most time is spent setting up the device list, so if you keep only one file, I'd recommend a recent backup of the devices_<timestamp>.csv or devices.csv file, followed by the app.conf and workflows.json files. You can also download the app.conf and devices.csv files in the Maintenance section:
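
A sketch of a simple file-system backup, assuming /app/config and /app/db are mapped to local folders as described in the setup docs (host paths and container name are placeholders):

# stopping the container first avoids backing up the database in an uncommitted state\ndocker stop netalertx\ntar czf netalertx-backup-$(date +%F).tar.gz /local/path/config /local/path/db\ndocker start netalertx\n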

"},{"location":"BACKUPS/#scenario-1-full-backup","title":"Scenario 1: Full backup","text":"

End-result: Full restore

"},{"location":"BACKUPS/#source-artifacts","title":"\ud83d\udcbe Source artifacts:","text":""},{"location":"BACKUPS/#recovery","title":"\ud83d\udce5 Recovery:","text":"

To restore the application map the above files as described in the Setup documentation.

"},{"location":"BACKUPS/#scenario-2-corrupted-database","title":"Scenario 2: Corrupted database","text":"

End-result: Partial restore (historical data and some plugin data will be missing)

"},{"location":"BACKUPS/#source-artifacts_1","title":"\ud83d\udcbe Source artifacts:","text":""},{"location":"BACKUPS/#recovery_1","title":"\ud83d\udce5 Recovery:","text":"

Even with a corrupted database you can recover what I would argue is 99% of the configuration.

"},{"location":"BACKUPS/#data-and-backup-storage","title":"Data and backup storage","text":"

To decide on a backup strategy, check where the data is stored:

"},{"location":"BACKUPS/#core-configuration","title":"Core Configuration","text":"

The core application configuration is in the app.conf file (See Settings System for details), such as:

"},{"location":"BACKUPS/#core-device-data","title":"Core Device Data","text":"

The core device data is backed up to the devices_<timestamp>.csv or devices.csv file via the CSV Backup CSVBCKP Plugin. This file contains data, such as:

"},{"location":"BACKUPS/#historical-data","title":"Historical data","text":"

Historical data is stored in the app.db database (See Database overview for details). This data includes:

"},{"location":"COMMON_ISSUES/","title":"Common issues","text":""},{"location":"COMMON_ISSUES/#loading","title":"Loading...","text":"

Often, if the application is misconfigured, the Loading... dialog is displayed continuously. This is most likely caused by the backend failing to start. The Maintenance -> Logs section should give you more details on what's happening. If there is no exception, check the Portainer log, or start the container in the foreground (without the -d parameter) to observe any exceptions. It's advisable to enable trace or debug logging. Check the Debug tips for detailed instructions.
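
For example, with Docker you can follow the container output directly (the container name is a placeholder):

docker logs -f netalertx\n# or run the container in the foreground (omit -d) to see exceptions as they happen\n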

"},{"location":"COMMON_ISSUES/#incorrect-scan_subnets","title":"Incorrect SCAN_SUBNETS","text":"

One of the most common issues is not configuring SCAN_SUBNETS correctly. If this setting is misconfigured you will only see one or two devices in your devices list after a scan. Please read the subnets docs carefully to resolve this.
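
For reference, a typical value pairs a CIDR range with the network interface to scan. Treat this as a sketch - the interface name is an assumption and the subnets docs are authoritative:

SCAN_SUBNETS=['192.168.1.0/24 --interface=eth0']\n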

"},{"location":"COMMON_ISSUES/#duplicate-devices-and-notifications","title":"Duplicate devices and notifications","text":"

The app uses the MAC address as a unique identifier for devices. If a new MAC is detected, a new device is added to the application and corresponding notifications are triggered. This means that if the MAC of an existing device changes, the device will be logged as a new device. You can usually prevent this by changing the device's network configuration (in Android, iOS, or Windows). See the Random MACs guide for details.

"},{"location":"COMMON_ISSUES/#permissions","title":"Permissions","text":"

Make sure your file permissions are set correctly.

"},{"location":"COMMON_ISSUES/#container-restarts-crashes","title":"Container restarts / crashes","text":""},{"location":"COMMON_ISSUES/#unable-to-resolve-host","title":"unable to resolve host","text":""},{"location":"COMMON_ISSUES/#invalid-json","title":"Invalid JSON","text":"

Check the Invalid JSON errors debug help docs on how to proceed.

"},{"location":"COMMON_ISSUES/#sudo-execution-failing-eg-on-arpscan-on-a-raspberry-pi-4","title":"sudo execution failing (e.g.: on arpscan) on a Raspberry Pi 4","text":"

sudo: unexpected child termination condition: 0

Resolution based on this issue

wget ftp.us.debian.org/debian/pool/main/libs/libseccomp/libseccomp2_2.5.3-2_armhf.deb\nsudo dpkg -i libseccomp2_2.5.3-2_armhf.deb\n

The link above will probably break over time as well. Go to https://packages.debian.org/sid/armhf/libseccomp2/download to find the current version number and update the URL accordingly.

"},{"location":"COMMON_ISSUES/#only-router-and-own-device-show-up","title":"Only Router and own device show up","text":"

Make sure that the subnet and interface in SCAN_SUBNETS are correct. If your device/NAS has multiple ethernet ports, you probably need to change eth0 to something else.

"},{"location":"COMMON_ISSUES/#losing-my-settings-and-devices-after-an-update","title":"Losing my settings and devices after an update","text":"

If you lose your devices and/or settings after an update, it means the /app/db and /app/config folders are not mapped to permanent storage, so they are re-created every time you update. Make sure you have the volumes specified correctly in your docker-compose.yml or run command.

"},{"location":"COMMON_ISSUES/#the-application-is-slow","title":"The application is slow","text":"

Slowness is usually caused by incorrect settings (the app might be restarting, so check app.log), too many background processes (disable unnecessary scanners), overly long scans (limit the number of scanned devices), too many disk operations, or failed maintenance plugins. See the Performance tips docs for details.

"},{"location":"COMMUNITY_GUIDES/","title":"Community Guides","text":"

Use the official installation guides first and treat community content as supplementary material. Open an issue or PR if you'd like to add your link to the list \ud83d\ude4f (ordered by last update time)

"},{"location":"CUSTOM_PROPERTIES/","title":"Custom Properties for Devices","text":""},{"location":"CUSTOM_PROPERTIES/#overview","title":"Overview","text":"

This functionality allows you to define custom properties for devices, which can store and display additional information on the device listing page. By marking properties as \"Show\", you can enhance the user interface with quick actions, notes, or external links.

"},{"location":"CUSTOM_PROPERTIES/#key-features","title":"Key Features:","text":""},{"location":"CUSTOM_PROPERTIES/#defining-custom-properties","title":"Defining Custom Properties","text":"

Custom properties are structured as a list of objects, where each property includes the following fields:

Field Description CUSTPROP_icon The icon (Base64-encoded HTML) displayed for the property. CUSTPROP_type The action type (e.g., show_notes, link, delete_dev). CUSTPROP_name A short name or title for the property. CUSTPROP_args Arguments for the action (e.g., URL or modal text). CUSTPROP_notes Additional notes or details displayed when applicable. CUSTPROP_show A boolean to control visibility (true to show on the listing page)."},{"location":"CUSTOM_PROPERTIES/#available-action-types","title":"Available Action Types","text":""},{"location":"CUSTOM_PROPERTIES/#usage-on-the-device-listing-page","title":"Usage on the Device Listing Page","text":"

Visible properties (CUSTPROP_show: true) are displayed as interactive icons in the device listing. Each icon can perform one of the following actions based on the CUSTPROP_type:

  1. Modals (e.g., Show Notes):

     * Displays detailed information in a popup modal.
     * Example: Firmware version details.

  2. Links:

     * Redirect to an external or internal URL.
     * Example: Open a device's documentation or external site.

  3. Device Actions:

     * Manage devices with actions like delete.
     * Example: Quickly remove a device from the network.

  4. Plugins:

     * Future placeholder for running custom plugin scripts.
     * Note: Not implemented yet.
"},{"location":"CUSTOM_PROPERTIES/#example-use-cases","title":"Example Use Cases","text":"
  1. Device Documentation Link:

     * Add a custom property with CUSTPROP_type set to link or link_new_tab to allow quick navigation to the external documentation of the device.

  2. Firmware Details:

     * Use CUSTPROP_type: show_notes to display firmware versions or upgrade instructions in a modal.

  3. Device Removal:

     * Enable device removal functionality using CUSTPROP_type: delete_dev.
"},{"location":"CUSTOM_PROPERTIES/#notes","title":"Notes","text":"

This feature provides a flexible way to enhance device management and display with interactive elements tailored to your needs.

"},{"location":"DATABASE/","title":"A high-level description of the database structure","text":"

An overview of the most important database tables, as well as a detailed overview of the Devices table. The MAC address is used as a foreign key in most cases.

"},{"location":"DATABASE/#devices-database-table","title":"Devices database table","text":"Field Name Description Sample Value devMac MAC address of the device. 00:1A:2B:3C:4D:5E devName Name of the device. iPhone 12 devOwner Owner of the device. John Doe devType Type of the device (e.g., phone, laptop, etc.). If set to a network type (e.g., switch), it will become selectable as a Network Parent Node. Laptop devVendor Vendor/manufacturer of the device. Apple devFavorite Whether the device is marked as a favorite. 1 devGroup Group the device belongs to. Home Devices devComments User comments or notes about the device. Used for work purposes devFirstConnection Timestamp of the device's first connection. 2025-03-22 12:07:26+11:00 devLastConnection Timestamp of the device's last connection. 2025-03-22 12:07:26+11:00 devLastIP Last known IP address of the device. 192.168.1.5 devStaticIP Whether the device has a static IP address. 0 devScan Whether the device should be scanned. 1 devLogEvents Whether events related to the device should be logged. 0 devAlertEvents Whether alerts should be generated for events. 1 devAlertDown Whether an alert should be sent when the device goes down. 0 devSkipRepeated Whether to skip repeated alerts for this device. 1 devLastNotification Timestamp of the last notification sent for this device. 2025-03-22 12:07:26+11:00 devPresentLastScan Whether the device was present during the last scan. 1 devIsNew Whether the device is marked as new. 0 devLocation Physical or logical location of the device. Living Room devIsArchived Whether the device is archived. 0 devParentMAC MAC address of the parent device (if applicable) to build the Network Tree. 00:1A:2B:3C:4D:5F devParentPort Port of the parent device to which this device is connected. Port 3 devIcon Icon representing the device. The value is a base64-encoded SVG or Font Awesome HTML tag. PHN2ZyB... devGUID Unique identifier for the device. a2f4b5d6-7a8c-9d10-11e1-f12345678901 devSite Site or location where the device is registered. Office devSSID SSID of the Wi-Fi network the device is connected to. HomeNetwork devSyncHubNode The NetAlertX node ID used for synchronization between NetAlertX instances. node_1 devSourcePlugin Source plugin that discovered the device. ARPSCAN devCustomProps Custom properties related to the device. The value is a base64-encoded JSON object. PHN2ZyB... devFQDN Fully qualified domain name. raspberrypi.local devParentRelType The type of relationship between the current device and it's parent node. By default, selecting nic will hide it from lists. nic devReqNicsOnline If all NICs are required to be online to mark teh current device online. 0

To understand how values of these fields influence application behavior, such as Notifications or Network topology, see also:

"},{"location":"DATABASE/#other-tables-overview","title":"Other Tables overview","text":"Table name Description Sample data CurrentScan Result of the current scan Devices The main devices database that also contains the Network tree mappings. If ScanCycle is set to 0 device is not scanned. Events Used to collect connection/disconnection events. Online_History Used to display the Device presence chart Parameters Used to pass values between the frontend and backend. Plugins_Events For capturing events exposed by a plugin via the last_result.log file. If unique then saved into the Plugins_Objects table. Entries are deleted once processed and stored in the Plugins_History and/or Plugins_Objects tables. Plugins_History History of all entries from the Plugins_Events table Plugins_Language_Strings Language strings collected from the plugin config.json files used for string resolution in the frontend. Plugins_Objects Unique objects detected by individual plugins. Sessions Used to display sessions in the charts Settings Database representation of the sum of all settings from app.conf and plugins coming from config.json files."},{"location":"DEBUG_GRAPHQL/","title":"Debugging GraphQL server issues","text":"

The GraphQL server is an API middle layer, running on its own port specified by GRAPHQL_PORT, to retrieve and show the data in the UI. It can also be used to retrieve data for custom third-party integrations. Check the API documentation for details.

The most common issue is that the GraphQL server doesn't start properly, usually due to a port conflict. If you are running multiple NetAlertX instances, make sure to use unique ports by changing the GRAPHQL_PORT setting. The default is 20212.
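
To quickly check whether another process is already listening on the GraphQL port, you can run a command like the following on the Docker host (a minimal sketch, assuming the default port 20212 and that the ss utility is available):

ss -tlnp | grep 20212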

"},{"location":"DEBUG_GRAPHQL/#how-to-update-the-graphql_port-in-case-of-issues","title":"How to update the GRAPHQL_PORT in case of issues","text":"

As a first troubleshooting step try changing the default GRAPHQL_PORT setting. Please remember NetAlertX is running on the host, so any application using the same port will cause issues.

"},{"location":"DEBUG_GRAPHQL/#updating-the-setting-via-the-settings-ui","title":"Updating the setting via the Settings UI","text":"

Ideally use the Settings UI to update the setting under General -> Core -> GraphQL port:

You might need to temporarily stop other applications or NetAlertX instances causing conflicts to update the setting. The API_TOKEN is used to authenticate any API calls, including GraphQL requests.

"},{"location":"DEBUG_GRAPHQL/#updating-the-appconf-file","title":"Updating the app.conf file","text":"

If the UI is not accessible, you can directly edit the app.conf file in your /config folder:
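
For example, you could add or adjust a line like the one below (the port number is just an example; pick any port that is free on your host and match the quoting style of the other entries in your app.conf):

GRAPHQL_PORT=20214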

"},{"location":"DEBUG_GRAPHQL/#using-a-docker-variable","title":"Using a docker variable","text":"

All application settings can also be initialized via the APP_CONF_OVERRIDE docker env variable.

...\n environment:\n      - TZ=Europe/Berlin      \n      - PORT=20213\n      - APP_CONF_OVERRIDE={\"GRAPHQL_PORT\":\"20214\"}\n...\n
"},{"location":"DEBUG_GRAPHQL/#how-to-check-the-graphql-server-is-running","title":"How to check the GraphQL server is running?","text":"

There are several ways to check if the GraphQL server is running.

"},{"location":"DEBUG_GRAPHQL/#init-check","title":"Init Check","text":"

You can navigate to Maintenance -> Init Check to see if isGraphQLServerRunning is ticked:

"},{"location":"DEBUG_GRAPHQL/#checking-the-logs","title":"Checking the Logs","text":"

You can navigate to Maintenance -> Logs and search for graphql to see if it started correctly and is serving requests:
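
If you prefer the terminal, you can also search the main log directly (a minimal sketch, assuming the default container name and log location):

docker exec -it netalertx grep -i graphql /app/log/app.log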

"},{"location":"DEBUG_GRAPHQL/#inspecting-the-browser-console","title":"Inspecting the Browser console","text":"

In your browser open the dev console (usually F12) and navigate to the Network tab where you can filter GraphQL requests (e.g., reload the Devices page).

You can then inspect any of the POST requests by opening them in a new tab.

"},{"location":"DEBUG_INVALID_JSON/","title":"How to debug the Invalid JSON response error","text":"

Check the HTTP response of the failing backend call by following these steps:

For reference, the above queries should return results in the following format:

"},{"location":"DEBUG_INVALID_JSON/#first-url","title":"First URL:","text":""},{"location":"DEBUG_INVALID_JSON/#second-url","title":"Second URL:","text":""},{"location":"DEBUG_INVALID_JSON/#third-url","title":"Third URL:","text":"

You can copy and paste any JSON result (result of the First and Third query) into an online JSON checker, such as this one to check if it's valid.

"},{"location":"DEBUG_PHP/","title":"Debugging backend PHP issues","text":""},{"location":"DEBUG_PHP/#logs-in-ui","title":"Logs in UI","text":"

You can view recent backend PHP errors directly in the Maintenance > Logs section of the UI. This provides quick access to logs without needing terminal access.

"},{"location":"DEBUG_PHP/#accessing-logs-directly","title":"Accessing logs directly","text":"

Sometimes, the UI might not be accessible. In that case, you can access the logs directly inside the container.

"},{"location":"DEBUG_PHP/#step-by-step","title":"Step-by-step:","text":"
  1. Open a shell into the container:

docker exec -it netalertx /bin/sh

  2. Check the NGINX error log:

cat /var/log/nginx/error.log

  3. Check the PHP application error log:

cat /app/log/app.php_errors.log

These logs will help identify syntax issues, fatal errors, or startup problems when the UI fails to load properly.

"},{"location":"DEBUG_PLUGINS/","title":"Troubleshooting plugins","text":""},{"location":"DEBUG_PLUGINS/#high-level-overview","title":"High-level overview","text":"

If a plugin supplies data to the main app, it does so either via a SQL query or via a script that updates the last_result.log file in the plugin log folder (app/log/plugins/).

For a more in-depth overview on how plugins work check the Plugins development docs.
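
To see what a plugin actually produced, you can inspect the plugin log folder inside the container (a minimal sketch, assuming the default container name; exact file names vary by plugin):

docker exec -it netalertx ls -l /app/log/plugins/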

"},{"location":"DEBUG_PLUGINS/#prerequisites","title":"Prerequisites","text":""},{"location":"DEBUG_PLUGINS/#potential-issues","title":"Potential issues","text":""},{"location":"DEBUG_PLUGINS/#incorrect-input-data","title":"Incorrect input data","text":"

Input data from the plugin might cause mapping issues in specific edge cases. Look for a corresponding section in the app.log file; for example, notice the first line of the execution run of the PIHOLE plugin below:

17:31:05 [Scheduler] - Scheduler run for PIHOLE: YES\n17:31:05 [Plugin utils] ---------------------------------------------\n17:31:05 [Plugin utils] display_name: PiHole (Device sync)\n17:31:05 [Plugins] CMD: SELECT n.hwaddr AS Object_PrimaryID, {s-quote}null{s-quote} AS Object_SecondaryID, datetime() AS DateTime, na.ip  AS Watched_Value1, n.lastQuery AS Watched_Value2, na.name AS Watched_Value3, n.macVendor AS Watched_Value4, {s-quote}null{s-quote} AS Extra, n.hwaddr AS ForeignKey FROM EXTERNAL_PIHOLE.Network AS n LEFT JOIN EXTERNAL_PIHOLE.Network_Addresses AS na ON na.network_id = n.id WHERE n.hwaddr NOT LIKE {s-quote}ip-%{s-quote} AND n.hwaddr is not {s-quote}00:00:00:00:00:00{s-quote}  AND na.ip is not null\n17:31:05 [Plugins] setTyp: subnets\n17:31:05 [Plugin utils] Flattening the below array\n17:31:05 ['192.168.1.0/24 --interface=eth1']\n17:31:05 [Plugin utils] isinstance(arr, list) : False | isinstance(arr, str) : True\n17:31:05 [Plugins] Resolved value: 192.168.1.0/24 --interface=eth1\n17:31:05 [Plugins] Convert to Base64: True\n17:31:05 [Plugins] base64 value: b'MTkyLjE2OC4xLjAvMjQgLS1pbnRlcmZhY2U9ZXRoMQ=='\n17:31:05 [Plugins] Timeout: 10\n17:31:05 [Plugins] Executing: SELECT n.hwaddr AS Object_PrimaryID, 'null' AS Object_SecondaryID, datetime() AS DateTime, na.ip  AS Watched_Value1, n.lastQuery AS Watched_Value2, na.name AS Watched_Value3, n.macVendor AS Watched_Value4, 'null' AS Extra, n.hwaddr AS ForeignKey FROM EXTERNAL_PIHOLE.Network AS n LEFT JOIN EXTERNAL_PIHOLE.Network_Addresses AS na ON na.network_id = n.id WHERE n.hwaddr NOT LIKE 'ip-%' AND n.hwaddr is not '00:00:00:00:00:00'  AND na.ip is not null\n\ud83d\udd3b\n17:31:05 [Plugins] SUCCESS, received 2 entries\n17:31:05 [Plugins] sqlParam entries: [(0, 'PIHOLE', '01:01:01:01:01:01', 'null', 'null', '2023-12-25 06:31:05', '172.30.0.1', 0, 'aaaa', 'vvvvvvvvv', 'not-processed', 'null', 'null', '01:01:01:01:01:01'), (0, 'PIHOLE', '02:42:ac:1e:00:02', 'null', 'null', '2023-12-25 06:31:05', '172.30.0.2', 0, 'dddd', 'vvvvv2222', 'not-processed', 'null', 'null', '02:42:ac:1e:00:02')]\n17:31:05 [Plugins] Processing        : PIHOLE\n17:31:05 [Plugins] Existing objects from Plugins_Objects: 4\n17:31:05 [Plugins] Logged events from the plugin run    : 2\n17:31:05 [Plugins] pluginEvents      count: 2\n17:31:05 [Plugins] pluginObjects     count: 4\n17:31:05 [Plugins] events_to_insert  count: 0\n17:31:05 [Plugins] history_to_insert count: 4\n17:31:05 [Plugins] objects_to_insert count: 0\n17:31:05 [Plugins] objects_to_update count: 4\n17:31:05 [Plugin utils] In pluginEvents there are 2 events with the status \"watched-not-changed\" \n17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status \"missing-in-last-scan\" \n17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status \"watched-not-changed\" \n17:31:05 [Plugins] Mapping objects to database table: CurrentScan\n17:31:05 [Plugins] SQL query for mapping: INSERT into CurrentScan ( \"cur_MAC\", \"cur_IP\", \"cur_LastQuery\", \"cur_Name\", \"cur_Vendor\", \"cur_ScanMethod\") VALUES ( ?, ?, ?, ?, ?, ?)\n17:31:05 [Plugins] SQL sqlParams for mapping: [('01:01:01:01:01:01', '172.30.0.1', 0, 'aaaa', 'vvvvvvvvv', 'PIHOLE'), ('02:42:ac:1e:00:02', '172.30.0.2', 0, 'dddd', 'vvvvv2222', 'PIHOLE')]\n\ud83d\udd3a\n17:31:05 [API] Update API starting\n17:31:06 [API] Updating table_plugins_history.json file in /api\n

The debug output between the \ud83d\udd3bred arrows\ud83d\udd3a is important for debugging (the arrows were added only to highlight the section on this page; they are not present in the actual debug log).

In the above output notice the section logging how many events are produced by the plugin:

17:31:05 [Plugins] Existing objects from Plugins_Objects: 4\n17:31:05 [Plugins] Logged events from the plugin run    : 2\n17:31:05 [Plugins] pluginEvents      count: 2\n17:31:05 [Plugins] pluginObjects     count: 4\n17:31:05 [Plugins] events_to_insert  count: 0\n17:31:05 [Plugins] history_to_insert count: 4\n17:31:05 [Plugins] objects_to_insert count: 0\n17:31:05 [Plugins] objects_to_update count: 4\n

These values, if formatted correctly, will also show up in the UI:

"},{"location":"DEBUG_PLUGINS/#sharing-application-state","title":"Sharing application state","text":"

Sometimes specific log sections are needed to debug issues. The Devices and CurrentScan table data is sometimes needed to figure out what's wrong.

  1. Please set LOG_LEVEL to trace (disable it once you have the info, as this produces large log files).
  2. Wait for the issue to occur.
  3. Search for ================ DEVICES table content ================ in your logs.
  4. Search for ================ CurrentScan table content ================ in your logs.
  5. Open a new issue and post (redacted) output into the issue description (or send to the netalertx@gmail.com email if sensitive data present).
  6. Please set LOG_LEVEL to debug or lower.
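
If you prefer the terminal, the same markers can be located with grep (a sketch, assuming the default container name and log location):

docker exec -it netalertx grep -n "DEVICES table content" /app/log/app.log\ndocker exec -it netalertx grep -n "CurrentScan table content" /app/log/app.log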
"},{"location":"DEBUG_TIPS/","title":"Debugging and troubleshooting","text":"

Please follow tips 1 - 4 to get a more detailed error.

"},{"location":"DEBUG_TIPS/#1-more-logging","title":"1. More Logging","text":"

When debugging an issue always set the highest log level:

LOG_LEVEL='trace'

"},{"location":"DEBUG_TIPS/#2-surfacing-errors-when-container-restarts","title":"2. Surfacing errors when container restarts","text":"

Start the container via the terminal with a command similar to this one:

docker run --rm --network=host \\\n  -v local/path/netalertx/config:/app/config \\\n  -v local/path/netalertx/db:/app/db \\\n  -e TZ=Europe/Berlin \\\n  -e PORT=20211 \\\n  ghcr.io/jokob-sk/netalertx:latest\n\n

\u26a0 Please note: don't use the -d parameter, so that you can see the error when the container crashes. Include this error in your issue description.

"},{"location":"DEBUG_TIPS/#3-check-the-_dev-image-and-open-issues","title":"3. Check the _dev image and open issues","text":"

If possible, check if your issue got fixed in the _dev image before opening a new issue. The container is:

ghcr.io/jokob-sk/netalertx-dev:latest
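
For example, to pull the development image:

docker pull ghcr.io/jokob-sk/netalertx-dev:latest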

\u26a0 Please backup your DB and config beforehand!

Please also search open issues.

"},{"location":"DEBUG_TIPS/#4-disable-restart-behavior","title":"4. Disable restart behavior","text":"

To prevent a Docker container from automatically restarting in a Docker Compose file, specify the restart policy as no:

version: '3'\n\nservices:\n  your-service:\n    image: your-image:tag\n    restart: \"no\"\n    # Other service configurations...\n
"},{"location":"DEBUG_TIPS/#5-sharing-application-state","title":"5. Sharing application state","text":"

Sometimes specific log sections are needed to debug issues. The Devices and CurrentScan table data is sometimes needed to figure out what's wrong.

  1. Please set LOG_LEVEL to trace (disable it once you have the info, as this produces large log files).
  2. Wait for the issue to occur.
  3. Search for ================ DEVICES table content ================ in your logs.
  4. Search for ================ CurrentScan table content ================ in your logs.
  5. Open a new issue and post (redacted) output into the issue description (or send to the netalertx@gmail.com email if sensitive data present).
  6. Please set LOG_LEVEL to debug or lower.
"},{"location":"DEBUG_TIPS/#common-issues","title":"Common issues","text":"

See Common issues for details.

"},{"location":"DEVICES_BULK_EDITING/","title":"Editing multiple devices at once","text":"

NetAlertX allows you to mass-edit devices via a CSV export and import feature, or directly in the UI.

"},{"location":"DEVICES_BULK_EDITING/#ui-multi-edit","title":"UI multi edit","text":"

Note

Make sure you have your backups saved and restorable before doing any mass edits. Check Backup strategies.

You can select the devices to edit in the Devices view and then click the Multi-edit button, or use the Maintenance > Multi-Edit section.

"},{"location":"DEVICES_BULK_EDITING/#csv-bulk-edit","title":"CSV bulk edit","text":"

The database and device structure may change with new releases. When using the CSV import functionality, ensure the format matches what the application expects. To avoid issues, you can first export the devices and review the column formats before importing any custom data.

Note

As always, backup everything, just in case.

  1. In Maintenance > Backup / Restore click the CSV Export button.
  2. A devices.csv file is generated in the /config folder.
  3. Edit the devices.csv file however you like.

Note

The file contains a list of Devices, including the Network relationships between Network Nodes and connected devices. You can also trigger this export by accessing this URL: <your netalertx url>/php/server/devices.php?action=ExportCSV or via the CSV Backup plugin. (\ud83d\udca1 You can schedule this)

"},{"location":"DEVICES_BULK_EDITING/#file-encoding-format","title":"File encoding format","text":"

Note

Keep Linux line endings (suggested editors: Nano, Notepad++)

"},{"location":"DEVICE_DISPLAY_SETTINGS/","title":"Device Display Settings","text":"

This set of settings allows you to group Devices under different views. The Archived toggle allows you to exclude a Device from most listings and notifications.

"},{"location":"DEVICE_DISPLAY_SETTINGS/#status-colors","title":"Status Colors","text":"
  1. \ud83d\udd0c Online (Green) = A device that is online and is no longer marked as a \"New Device\".
  2. \ud83d\udd0c New (Green) = A newly discovered device that is online and is still marked as a \"New Device\".
  3. \u2716 New (Grey) = Same as No.2 but the device is now offline.
  4. \u2716 Offline (Grey) = A device that was not detected online in the last scan.
  5. \u26a0 Down (Red) = A device that has \"Alert Down\" marked and has been offline for the time set in the Setting NTFPRCS_alert_down_time.

See also Notification guide.

"},{"location":"DEVICE_HEURISTICS/","title":"Device Heuristics: Icon and Type Guessing","text":"

This module is responsible for inferring the most likely device type and icon based on minimal identifying data like MAC address, vendor, IP, or device name.

It does this using a set of heuristics defined in an external JSON rules file, which it evaluates in priority order.

Note

You can find the full source code of the heuristics module in the device_heuristics.py file.

"},{"location":"DEVICE_HEURISTICS/#json-rule-format","title":"JSON Rule Format","text":"

Rules are defined in a file called device_heuristics_rules.json (located under /back), structured like:

[\n  {\n    \"dev_type\": \"Phone\",\n    \"icon_html\": \"<i class=\\\"fa-brands fa-apple\\\"></i>\",\n    \"matching_pattern\": [\n      { \"mac_prefix\": \"001A79\", \"vendor\": \"Apple\" }\n    ],\n    \"name_pattern\": [\"iphone\", \"pixel\"]\n  }\n]\n

Note

Feel free to raise a PR in case you'd like to add any rules into the device_heuristics_rules.json file. Please place new rules into the correct position and consider the priority of already available rules.

"},{"location":"DEVICE_HEURISTICS/#supported-fields","title":"Supported fields:","text":"Field Type Description dev_type string Type to assign if rule matches (e.g. \"Gateway\", \"Phone\") icon_html string Icon (HTML string) to assign if rule matches. Encoded to base64 at load time. matching_pattern array List of { mac_prefix, vendor } objects for first strict and then loose matching name_pattern array (optional) List of lowercase substrings (used with regex) ip_pattern array (optional) Regex patterns to match IPs

Order in this array defines priority \u2014 rules are checked top-down and short-circuit on first match.

"},{"location":"DEVICE_HEURISTICS/#matching-flow-in-priority-order","title":"Matching Flow (in Priority Order)","text":"

The function guess_device_attributes(...) runs a series of matching functions in strict order:

  1. MAC + Vendor \u2192 match_mac_and_vendor()
  2. Vendor only \u2192 match_vendor()
  3. Name pattern \u2192 match_name()
  4. IP pattern \u2192 match_ip()
  5. Final fallback \u2192 defaults defined in the NEWDEV_devIcon and NEWDEV_devType settings.

Note

The app will try guessing the device type or icon if devType or devIcon are \"\" or \"null\".

"},{"location":"DEVICE_HEURISTICS/#use-of-default-values","title":"Use of default values","text":"

The guessing process runs for every device as long as the current type or icon still matches the default values. Even if earlier heuristics return a match, the system continues evaluating additional clues \u2014 like name or IP \u2014 to try and replace placeholders.

# Still considered a match attempt if current values are defaults\nif (not type_ or type_ == default_type) or (not icon or icon == default_icon):\n    type_, icon = match_ip(ip, default_type, default_icon)\n

In other words: if the type or icon is still \"unknown\" (or matches the default), the system assumes the match isn\u2019t final \u2014 and keeps looking. It stops only when both values are non-default (defaults are defined in the NEWDEV_devIcon and NEWDEV_devType settings).

"},{"location":"DEVICE_HEURISTICS/#match-behavior-per-function","title":"Match Behavior (per function)","text":"

These functions are executed in the following order:

"},{"location":"DEVICE_HEURISTICS/#match_mac_and_vendormac_clean-vendor","title":"match_mac_and_vendor(mac_clean, vendor, ...)","text":""},{"location":"DEVICE_HEURISTICS/#match_vendorvendor","title":"match_vendor(vendor, ...)","text":""},{"location":"DEVICE_HEURISTICS/#match_namename","title":"match_name(name, ...)","text":""},{"location":"DEVICE_HEURISTICS/#match_ipip","title":"match_ip(ip, ...)","text":""},{"location":"DEVICE_HEURISTICS/#icons","title":"Icons","text":"

TL;DR: Type and icon must both be matched. If only one is matched, the other falls back to the default.

"},{"location":"DEVICE_HEURISTICS/#priority-mechanics","title":"Priority Mechanics","text":""},{"location":"DEVICE_MANAGEMENT/","title":"NetAlertX - Device Management","text":"

The Main Info section is where most of the device identifiable information is stored and edited. Some of the information is autodetected via various plugins. Initial values for most of the fields can be specified in the NEWDEV plugin.

Note

You can multi-edit devices by selecting them in the main Devices view, from the Maintenance section, or via the CSV Export functionality under Maintenance. More info can be found in the Devices Bulk-editing docs.

"},{"location":"DEVICE_MANAGEMENT/#main-info","title":"Main Info","text":"

Note

Please note the above uses of the fields are only suggestions. You can use most of these fields for other purposes, such as storing the network interface, the company owning a device, or similar.

"},{"location":"DEVICE_MANAGEMENT/#dummy-devices","title":"Dummy devices","text":"

You can create dummy devices from the Devices listing screen.

The MAC field and the Last IP field will then become editable.

Note

You can couple this with the ICMP plugin which can be used to monitor the status of these devices, if they are actual devices reachable with the ping command. If not, you can use a loopback IP address so they appear online, such as 0.0.0.0 or 127.0.0.1.

"},{"location":"DEVICE_MANAGEMENT/#copying-data-from-an-existing-device","title":"Copying data from an existing device.","text":"

To speed up device population you can also copy data from an existing device. This can be done from the Tools tab on the Device details.

"},{"location":"DEV_ENV_SETUP/","title":"Development Environment Setup","text":"

I truly appreciate all contributions! To help keep this project maintainable, this guide provides an overview of project priorities, key design considerations, and overall philosophy. It also includes instructions for setting up your environment so you can start contributing right away.

"},{"location":"DEV_ENV_SETUP/#development-guidelines","title":"Development Guidelines","text":"

Before starting development, please review the following guidelines.

"},{"location":"DEV_ENV_SETUP/#priority-order-highest-to-lowest","title":"Priority Order (Highest to Lowest)","text":"
  1. \ud83d\udd3c Fixing core bugs that lack workarounds
  2. \ud83d\udd35 Adding core functionality that unlocks other features (e.g., plugins)
  3. \ud83d\udd35 Refactoring to enable faster development
  4. \ud83d\udd3d UI improvements (PRs welcome, but low priority)
"},{"location":"DEV_ENV_SETUP/#design-philosophy","title":"Design Philosophy","text":"

The application architecture is designed for extensibility and maintainability. It relies heavily on configuration manifests via plugins and settings to dynamically build the UI and populate the application with data from various sources.

For details, see: - Plugins Development (includes video) - Settings System

Focus on core functionality and integrate with existing tools rather than reinventing the wheel.

Examples: - Using Apprise for notifications instead of implementing multiple separate gateways - Implementing regex-based validation instead of one-off validation for each setting

Note

UI changes have lower priority. PRs are welcome, but please keep them small and focused.

"},{"location":"DEV_ENV_SETUP/#development-environment-set-up","title":"Development Environment Set Up","text":"

The following steps will guide you through setting up your environment for local development and running a custom Docker build on your system. For most changes the container doesn't need to be rebuilt, which speeds up development significantly.

Note

Replace /development with the path where your code files will be stored. The default container name is netalertx so there might be a conflict with your running containers.

"},{"location":"DEV_ENV_SETUP/#1-download-the-code","title":"1. Download the code:","text":""},{"location":"DEV_ENV_SETUP/#2-create-a-dev-env_dev-file","title":"2. Create a DEV .env_dev file","text":"

touch /development/.env_dev && sudo nano /development/.env_dev

The file content should be as follows, with your custom values.

#--------------------------------\n#NETALERTX\n#--------------------------------\nTZ=Europe/Berlin\nPORT=22222    # make sure this port is unique on your whole network\nDEV_LOCATION=/development/NetAlertX\nAPP_DATA_LOCATION=/volume/docker_appdata\n# Make sure your GRAPHQL_PORT setting has a port that is unique on your whole host network\nAPP_CONF_OVERRIDE={\"GRAPHQL_PORT\":\"22223\"} \n# ALWAYS_FRESH_INSTALL=true # uncommenting this will always delete the content of /config and /db dirs on boot to simulate a fresh install\n
"},{"location":"DEV_ENV_SETUP/#3-create-db-and-config-dirs","title":"3. Create /db and /config dirs","text":"

Create a folder netalertx in the APP_DATA_LOCATION (in this example in /volume/docker_appdata) with 2 subfolders db and config.
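
A minimal sketch of creating these folders, assuming the APP_DATA_LOCATION from the example above:

mkdir -p /volume/docker_appdata/netalertx/config /volume/docker_appdata/netalertx/db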

"},{"location":"DEV_ENV_SETUP/#4-run-the-container","title":"4. Run the container","text":"

You can then modify the python script without restarting/rebuilding the container every time. Additionally, you can trigger a plugin run via the UI:

"},{"location":"DEV_ENV_SETUP/#tips","title":"Tips","text":"

A quick cheat sheet of useful commands.

"},{"location":"DEV_ENV_SETUP/#removing-the-container-and-image","title":"Removing the container and image","text":"

A command to stop and remove the container and the image (replace netalertx and netalertx-netalertx with the appropriate values):
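
For example (a minimal sketch; the container and image names below are the defaults from this guide and may differ on your system):

docker stop netalertx && docker rm netalertx && docker rmi netalertx-netalertx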

"},{"location":"DEV_ENV_SETUP/#restart-the-server-backend","title":"Restart the server backend","text":"

Most code changes can be tested without rebuilding the container. When working on the python server backend, you only need to restart the server.

  1. You can usually restart the backend via Maintenance > Logs > Restart server.

  2. If the above doesn't work, SSH into the container and kill & restart the main script loop:

sudo docker exec -it netalertx /bin/bash

pkill -f \"python /app/server\" && python /app/server &

  3. If none of the above work, restart the Docker container. This is usually the last resort, as sometimes the Docker engine becomes unresponsive and the whole engine needs to be restarted.

"},{"location":"DEV_ENV_SETUP/#contributing-pull-requests","title":"Contributing & Pull Requests","text":""},{"location":"DEV_ENV_SETUP/#before-submitting-a-pr-please-ensure","title":"Before submitting a PR, please ensure:","text":"

\u2714 Changes are backward-compatible with existing installs. \u2714 No unnecessary changes are made. \u2714 New features are reusable, not narrowly scoped. \u2714 Features are implemented via plugins if possible.

"},{"location":"DEV_ENV_SETUP/#mandatory-test-cases","title":"Mandatory Test Cases","text":""},{"location":"DEV_ENV_SETUP/#unit-tests","title":"Unit Tests","text":"

Warning

Please note these tests modify data in the database.

  1. See the /test directory for available test cases. These are not exhaustive but cover the main API endpoints.
  2. To run a test case, SSH into the container: sudo docker exec -it netalertx /bin/bash
  3. Inside the container, install pytest (if not already installed): pip install pytest
  4. Run a specific test case: pytest /app/test/TESTFILE.py
"},{"location":"DOCKER_COMPOSE/","title":"docker-compose.yaml Examples","text":"

Note

The container needs to run in network_mode:\"host\". This also means that not all functionality is supported on a Windows host as Docker for Windows doesn't support this networking option.

"},{"location":"DOCKER_COMPOSE/#example-1","title":"Example 1","text":"
services:\n  netalertx:\n    container_name: netalertx\n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\" \n    image: \"ghcr.io/jokob-sk/netalertx:latest\"      \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - local_path/config:/app/config\n      - local_path/db:/app/db      \n      # (optional) useful for debugging if you have issues setting up the container\n      - local_path/logs:/app/log\n      # (API: OPTION 1) use for performance\n      - type: tmpfs\n        target: /app/api\n      # (API: OPTION 2) use when debugging issues \n      # -  local_path/api:/app/api\n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n

To run the container execute: sudo docker-compose up -d

"},{"location":"DOCKER_COMPOSE/#example-2","title":"Example 2","text":"

Example by SeimuS.

services:\n  netalertx:\n    container_name: NetAlertX\n    hostname: NetAlertX\n    privileged: true\n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\" \n    image: ghcr.io/jokob-sk/netalertx:latest\n    environment:\n      - TZ=Europe/Bratislava\n    restart: always\n    volumes:\n      - ./netalertx/db:/app/db\n      - ./netalertx/config:/app/config\n    network_mode: host\n

To run the container execute: sudo docker-compose up -d

"},{"location":"DOCKER_COMPOSE/#example-3","title":"Example 3","text":"

docker-compose.yml

services:\n  netalertx:\n    container_name: netalertx\n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\" \n    image: \"ghcr.io/jokob-sk/netalertx:latest\"      \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - ${APP_CONFIG_LOCATION}/netalertx/config:/app/config\n      - ${APP_DATA_LOCATION}/netalertx/db/:/app/db/      \n      # (optional) useful for debugging if you have issues setting up the container\n      - ${LOGS_LOCATION}:/app/log\n      # (API: OPTION 1) use for performance\n      - type: tmpfs\n        target: /app/api\n      # (API: OPTION 2) use when debugging issues \n      # -  local/path/api:/app/api\n    environment:\n      - TZ=${TZ}      \n      - PORT=${PORT}\n

.env file

#GLOBAL PATH VARIABLES\n\nAPP_DATA_LOCATION=/path/to/docker_appdata\nAPP_CONFIG_LOCATION=/path/to/docker_config\nLOGS_LOCATION=/path/to/docker_logs\n\n#ENVIRONMENT VARIABLES\n\nTZ=Europe/Paris\nPORT=20211\n\n#DEVELOPMENT VARIABLES\n\nDEV_LOCATION=/path/to/local/source/code\n

To run the container execute: sudo docker-compose --env-file /path/to/.env up

"},{"location":"DOCKER_COMPOSE/#example-4-docker-swarm","title":"Example 4: Docker swarm","text":"

Notice how the host network is defined in a swarm setup:

services:\n  netalertx:\n    # Use the below line if you want to test the latest dev image\n    # image: \"jokobsk/netalertx-dev:latest\"\n    image: \"ghcr.io/jokob-sk/netalertx:latest\"\n    volumes:\n      - /mnt/MYSERVER/netalertx/config:/config:rw\n      - /mnt/MYSERVER/netalertx/db:/netalertx/db:rw\n      - /mnt/MYSERVER/netalertx/logs:/netalertx/front/log:rw\n    environment:\n      - TZ=Europe/London\n      - PORT=20211\n    networks:\n      - outside\n    deploy:\n      mode: replicated\n      replicas: 1\n      restart_policy:\n        condition: on-failure\n\nnetworks:\n  outside:\n    external:\n      name: \"host\"\n\n\n
"},{"location":"DOCKER_SWARM/","title":"Docker Swarm Deployment Guide (IPvlan)","text":"

This guide describes how to deploy NetAlertX in a Docker Swarm environment using an ipvlan network. This enables the container to receive a LAN IP address directly, which is ideal for network monitoring.

"},{"location":"DOCKER_SWARM/#step-1-create-an-ipvlan-config-only-network-on-all-nodes","title":"\u2699\ufe0f Step 1: Create an IPvlan Config-Only Network on All Nodes","text":"

Run this command on each node in the Swarm.

docker network create -d ipvlan \\\n  --subnet=192.168.1.0/24 \\              # \ud83d\udd27 Replace with your LAN subnet\n  --gateway=192.168.1.1 \\                # \ud83d\udd27 Replace with your LAN gateway\n  -o ipvlan_mode=l2 \\\n  -o parent=eno1 \\                       # \ud83d\udd27 Replace with your network interface (e.g., eth0, eno1)\n  --config-only \\\n  ipvlan-swarm-config\n
"},{"location":"DOCKER_SWARM/#step-2-create-the-swarm-scoped-ipvlan-network-one-time-setup","title":"\ud83d\udda5\ufe0f Step 2: Create the Swarm-Scoped IPvlan Network (One-Time Setup)","text":"

Run this on one Swarm manager node only.

docker network create -d ipvlan \\\n  --scope swarm \\\n  --config-from ipvlan-swarm-config \\\n  swarm-ipvlan\n
"},{"location":"DOCKER_SWARM/#step-3-deploy-netalertx-with-docker-compose","title":"\ud83e\uddfe Step 3: Deploy NetAlertX with Docker Compose","text":"

Use the following Compose snippet to deploy NetAlertX with a static LAN IP assigned via the swarm-ipvlan network.

services:\n  netalertx:\n    image: ghcr.io/jokob-sk/netalertx:latest\n    ports:\n      - 20211:20211\n    volumes:\n      - /mnt/YOUR_SERVER/netalertx/config:/app/config:rw\n      - /mnt/YOUR_SERVER/netalertx/db:/netalertx/app/db:rw\n      - /mnt/YOUR_SERVER/netalertx/logs:/netalertx/app/log:rw\n    environment:\n      - TZ=Europe/London\n      - PORT=20211\n    networks:\n      swarm-ipvlan:\n        ipv4_address: 192.168.1.240     # \u26a0\ufe0f Choose a free IP from your LAN\n    deploy:\n      mode: replicated\n      replicas: 1\n      restart_policy:\n        condition: on-failure\n      placement:\n        constraints:\n          - node.role == manager        # \ud83d\udd04 Or use: node.labels.netalertx == true\n\nnetworks:\n  swarm-ipvlan:\n    external: true\n
"},{"location":"DOCKER_SWARM/#notes","title":"\u2705 Notes","text":""},{"location":"FILE_PERMISSIONS/","title":"Managing File Permissions for NetAlertX on Nginx with Docker","text":"

Tip

If you are facing permission issues, try to start the container without mapping your volumes. If that works, then the issue is permission related. You can try e.g., the following command: docker run -d --rm --network=host \\ -e TZ=Europe/Berlin \\ -e PUID=200 -e PGID=200 \\ -e PORT=20211 \\ ghcr.io/jokob-sk/netalertx:latest

NetAlertX runs on an Nginx web server. On Alpine Linux, Nginx operates as the nginx user (if the PUID and PGID environment variables are not specified, the nginx user UID will be set to 102, and its supplementary group www-data ID to 82). Consequently, files accessed or written by the NetAlertX application are owned by nginx:www-data.

Upon starting, NetAlertX changes nginx user UID and www-data GID to specified values (or defaults), and the ownership of files on the host system mapped to /app/config and /app/db in the container to nginx:www-data. This ensures that Nginx can access and write to these files. Since the user in the Docker container is mapped to a user on the host system by ID:GID, the files in /app/config and /app/db on the host system are owned by a user with the same ID and GID (defaults are ID 102 and GID 82). On different systems, this ID:GID may belong to different users, or there may not be a group with ID 82 at all.

The option to set a specific user UID and GID can be useful for host system users needing to access these files (e.g., backup scripts).
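
For example, to pre-set matching ownership on the host before starting the container (a sketch assuming the default UID 102 and GID 82 and example host paths; adjust both to your setup):

sudo chown -R 102:82 /path/to/netalertx/config /path/to/netalertx/db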

"},{"location":"FILE_PERMISSIONS/#permissions-table-for-individual-folders","title":"Permissions Table for Individual Folders","text":"Folder User User ID Group Group ID Permissions Notes /app/config nginx PUID (default 102) www-data PGID (default 82) rwxr-xr-x Ensure nginx can read/write; other users can read if in www-data /app/db nginx PUID (default 102) www-data PGID (default 82) rwxr-xr-x Same as above"},{"location":"FIX_OFFLINE_DETECTION/","title":"Troubleshooting: Devices Show Offline When They Are Online","text":"

In some network setups, certain devices may intermittently appear as offline in NetAlertX, even though they are connected and responsive. This issue is often more noticeable with devices that have higher IP addresses within the subnet.

Note

Network presence graph showing increased drop outs before enabling additional ICMP scans and continuous online presence after following this guide. This graph shows a sudden spike in drop outs probably caused by a device software update.

"},{"location":"FIX_OFFLINE_DETECTION/#symptoms","title":"Symptoms","text":""},{"location":"FIX_OFFLINE_DETECTION/#cause","title":"Cause","text":"

This issue is typically related to scanning limitations:

"},{"location":"FIX_OFFLINE_DETECTION/#recommended-fixes","title":"Recommended Fixes","text":"

To improve presence accuracy and reduce false offline states:

"},{"location":"FIX_OFFLINE_DETECTION/#increase-arp-scan-timeout","title":"\u2705 Increase ARP Scan Timeout","text":"

Extend the ARP scanner timeout to ensure full subnet coverage:

ARPSCAN_RUN_TIMEOUT=360\n

Adjust based on your network size and device count.

"},{"location":"FIX_OFFLINE_DETECTION/#add-icmp-ping-scanning","title":"\u2705 Add ICMP (Ping) Scanning","text":"

Enable the ICMP scan plugin to complement ARP detection. ICMP is often more reliable for detecting active hosts, especially when ARP fails.

"},{"location":"FIX_OFFLINE_DETECTION/#use-multiple-detection-methods","title":"\u2705 Use Multiple Detection Methods","text":"

A combined approach greatly improves detection robustness:

This hybrid strategy increases reliability, especially for down detection and alerting. See other plugins that might be compatible with your setup. See benefits and drawbacks of individual scan methods in their respective docs.

"},{"location":"FIX_OFFLINE_DETECTION/#results","title":"Results","text":"

After increasing the ARP timeout and adding ICMP scanning (on select IP ranges), users typically report:

"},{"location":"FIX_OFFLINE_DETECTION/#summary","title":"Summary","text":"Setting Recommendation ARPSCAN_RUN_TIMEOUT Increase to ensure scans reach all IPs ICMP Scan Enable to detect devices ARP might miss Multi-method Scanning Use a mix of ARP, ICMP, and NMAP-based methods

Tip: Each environment is unique. Consider fine-tuning scan settings based on your network size, device behavior, and desired detection accuracy.

Let us know in the NetAlertX Discussions if you have further feedback or edge cases.

See also Remote Networks for more advanced setups.

"},{"location":"FRONTEND_DEVELOPMENT/","title":"Frontend development","text":"

This page contains tips for frontend development when extending NetAlertX. Guiding principles are:

  1. Maintainability
  2. Extendability
  3. Reusability
  4. Placing more functionality into Plugins and enhancing core Plugins functionality

That means that, when writing code, you should focus on reusing what's available instead of writing quick fixes, and on creating reusable functions instead of bespoke functionality.

"},{"location":"FRONTEND_DEVELOPMENT/#examples","title":"\ud83d\udd0d Examples","text":"

Some examples of how to apply the above:

Example 1

I want to implement a scan function. Options would be:

  1. To add a manual scan functionality to the deviceDetails.php page.
  2. To create a separate page that handles the execution of the scan.
  3. To create a configurable Plugin.

From the above, number 3 would be the most appropriate solution. Then followed by number 2. Number 1 would be approved only in special circumstances.

Example 2

I want to change the behavior of the application. Options to implement this could be:

  1. Hard-code the changes in the code.
  2. Implement the changes and add settings to influence the behavior in the initialize.py file so the user can adjust these.
  3. Implement the changes and add settings via a setting-only plugin.
  4. Implement the changes in a way so the behavior can be toggled on each plugin so the core capabilities of Plugins get extended.

From the above, number 4 would be the most appropriate solution. Then followed by number 3. Number 1 or 2 would be approved only in special circumstances.

"},{"location":"FRONTEND_DEVELOPMENT/#frontend-tips","title":"\ud83d\udca1 Frontend tips","text":"

Some useful frontend JavaScript functions:

Check the common.js file for more frontend functions.

"},{"location":"HELPER_SCRIPTS/","title":"NetAlertX Community Helper Scripts Overview","text":"

This page provides an overview of community-contributed scripts for NetAlertX. These scripts are not actively maintained and are provided as-is.

"},{"location":"HELPER_SCRIPTS/#community-scripts","title":"Community Scripts","text":"

You can find all scripts in this scripts GitHub folder.

Script Name Description Author Version Release Date New Devices Checkmk Script Checks for new devices in NetAlertX and reports status to Checkmk. N/A 1.0 08-Jan-2025 DB Cleanup Script Queries and removes old device-related entries from the database. laxduke 1.0 23-Dec-2024 OPNsense DHCP Lease Converter Retrieves DHCP lease data from OPNsense and converts it to dnsmasq format. im-redactd 1.0 24-Feb-2025"},{"location":"HELPER_SCRIPTS/#important-notes","title":"Important Notes","text":"

Note

These scripts are community-supplied and not actively maintained. Use at your own discretion.

For detailed usage instructions, refer to each script's documentation in the scripts GitHub folder.

"},{"location":"HOME_ASSISTANT/","title":"Home Assistant integration overview","text":"

NetAlertX comes with MQTT support, allowing you to show all detected devices as devices in Home Assistant. It also supplies a collection of stats, such as the number of online devices.

Tip

You can install NetAlertX also as a Home Assistant addon via the alexbelgium/hassio-addons repository. This is only possible if you run a supervised instance of Home Assistant. If not, you can still run NetAlertX in a separate Docker container and follow this guide to configure MQTT.

"},{"location":"HOME_ASSISTANT/#note","title":"\u26a0 Note","text":""},{"location":"HOME_ASSISTANT/#guide","title":"\ud83e\udded Guide","text":"

\ud83d\udca1 This guide was tested only with the Mosquitto MQTT broker

  1. Enable Mosquitto MQTT in Home Assistant by following the documentation

  2. Configure a user name and password on your broker.

  3. Note down the following details that you will need to configure NetAlertX:

  4. Open the NetAlertX > Settings > MQTT settings group

"},{"location":"HOME_ASSISTANT/#screenshots","title":"\ud83d\udcf7 Screenshots","text":""},{"location":"HOME_ASSISTANT/#troubleshooting","title":"Troubleshooting","text":"

If you can't see all devices detected, run sudo arp-scan --interface=eth0 192.168.1.0/24 (change these based on your setup; read the Subnets docs for details). This command has to be executed in the NetAlertX container, not in the Home Assistant container.

You can access the NetAlertX container via Portainer on your host or via ssh. The container name will be something like addon_db21ed7f_netalertx (you can copy the db21ed7f_netalertx part from the browser when accessing the UI of NetAlertX).

"},{"location":"HOME_ASSISTANT/#accessing-the-netalertx-container-via-ssh","title":"Accessing the NetAlertX container via SSH","text":"
  1. Log into your Home Assistant host via SSH
local@local:~ $ ssh pi@192.168.1.9\n
  2. Find the NetAlertX container name, in this case addon_db21ed7f_netalertx
pi@raspberrypi:~ $ sudo docker container ls | grep netalertx\n06c540d97f67   ghcr.io/alexbelgium/netalertx-armv7:25.3.1                   \"/init\"               6 days ago      Up 6 days (healthy)    addon_db21ed7f_netalertx\n
  3. SSH into the NetAlertX container
pi@raspberrypi:~ $ sudo docker exec -it addon_db21ed7f_netalertx  /bin/sh\n/ #\n
  4. Execute a test arp-scan scan
/ # sudo arp-scan --ignoredups --retry=6 192.168.1.0/24 --interface=eth0\nInterface: eth0, type: EN10MB, MAC: dc:a6:32:73:8a:b1, IPv4: 192.168.1.9\nStarting arp-scan 1.10.0 with 256 hosts (https://github.com/royhills/arp-scan)\n192.168.1.1     74:ac:b9:54:09:fb       Ubiquiti Networks Inc.\n192.168.1.21    74:ac:b9:ad:c3:30       Ubiquiti Networks Inc.\n192.168.1.58    1c:69:7a:a2:34:7b       EliteGroup Computer Systems Co., LTD\n192.168.1.57    f4:92:bf:a3:f3:56       Ubiquiti Networks Inc.\n...\n

If your result doesn't contain results similar to the above, double check your subnet, interface and if you are dealing with an inaccessible network segment, read the Remote networks documentation.

"},{"location":"HW_INSTALL/","title":"How to install NetAlertX on the server hardware","text":"

To download and install NetAlertX on the hardware/server directly use the curl or wget commands at the bottom of this page.

Note

This is an Experimental feature \ud83e\uddea and it relies on community support.

\ud83d\ude4f Looking for maintainers for this installation method \ud83d\ude42 Current community volunteers: - slammingprogramming

There is no guarantee that the install script or any other script will gracefully handle other installed software. Data loss is a possibility, so it is recommended to install NetAlertX using the supplied Docker image.

A warning to the installation method below: Piping to bash is controversial and may be dangerous, as you cannot see the code that's about to be executed on your system.

Alternatively you can download the installation script install/install.debian.sh from the repository and check the code yourself (beware other scripts are downloaded too - only from this repo).

NetAlertX will be installed in /app and run on port number 20211.

Some facts about what will be changed or installed, and where, by the HW install setup (this list may not cover everything):

"},{"location":"HW_INSTALL/#limitations","title":"Limitations","text":""},{"location":"HW_INSTALL/#installation-via-curl","title":"\ud83d\udce5 Installation via CURL","text":"

Tip

If the below fails try grabbing and installing one of the previous releases and run the installation from the zip package.

curl -o install.debian.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/install.debian.sh && sudo chmod +x install.debian.sh && sudo ./install.debian.sh\n
"},{"location":"HW_INSTALL/#installation-via-wget","title":"\ud83d\udce5 Installation via WGET","text":"
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/install.debian.sh -O install.debian.sh && sudo chmod +x install.debian.sh && sudo ./install.debian.sh\n

These commands will download the install.debian.sh script from the GitHub repository, make it executable with chmod, and then run it using ./install.debian.sh.

Make sure you have the necessary permissions to execute the script.

"},{"location":"ICONS/","title":"Icons","text":""},{"location":"ICONS/#icons-overview","title":"Icons overview","text":"

Icons are used to visually distinguish devices in the app in most of the device listing tables and the network tree.

"},{"location":"ICONS/#icons-support","title":"Icons Support","text":"

Two types of icons are supported:

You can assign icons individually on each device in the Details tab.

"},{"location":"ICONS/#adding-new-icons","title":"Adding new icons","text":"
  1. You can get an SVG or a Font Awesome HTML code

Copying the SVG (for example from iconify.design):

Copying the HTML code from Font Awesome.

  2. Navigate to the device you want to use the icon on and click the \"+\" icon:

  3. Paste in the copied HTML or SVG code and click \"OK\":

  1. \"Save\" the device

Note

If you want to mass-apply an icon to all devices of the same device type (Field: Type), you can click the mass-copy button (next to the \"+\" button). A confirmation prompt is displayed. If you proceed, the icons of all devices set to the same device type as the current device will be overwritten with the current device's icon.

"},{"location":"ICONS/#font-awesome-pro-icons","title":"Font Awesome Pro icons","text":"

If you own the premium package of Font Awesome icons you can mount it in your Docker container the following way:

/font-awesome:/app/front/lib/font-awesome:ro\n

You can use the full range of Font Awesome icons afterwards.

"},{"location":"INITIAL_SETUP/","title":"\u26a1 Quick Start Guide","text":"

Get NetAlertX up and running in a few simple steps.

"},{"location":"INITIAL_SETUP/#1-configure-scanner-plugins","title":"1. Configure Scanner Plugin(s)","text":"

Tip

Enable additional plugins under Settings \u2192 LOADED_PLUGINS. Make sure to save your changes and reload the page to activate them.

Initial configuration: ARPSCAN, INTRNT

Note

ARPSCAN and INTRNT scan the current network. You can complement them with other \ud83d\udd0d dev scanner plugins like NMAPDEV, or import devices using \ud83d\udce5 importer plugins. See the Subnet & VLAN Setup Guide and Remote Networks for advanced configurations.

"},{"location":"INITIAL_SETUP/#2-choose-a-publisher-plugin","title":"2. Choose a Publisher Plugin","text":"

Initial configuration: SMTP

Note

Configure your SMTP settings or enable additional \u25b6\ufe0f publisher plugins to send alerts. For more flexibility, try \ud83d\udcda _publisher_apprise, which supports over 80 notification services.

"},{"location":"INITIAL_SETUP/#3-set-up-a-network-topology-diagram","title":"3. Set Up a Network Topology Diagram","text":"

Initial configuration: The app auto-selects a root node (MAC internet) and attempts to identify other network devices by vendor or name.

Note

Visualize and manage your network using the Network Guide. Some plugins (e.g., UNFIMP) build the topology automatically, or you can use Custom Workflows to generate it based on your own rules.

"},{"location":"INITIAL_SETUP/#4-configure-notifications","title":"4. Configure Notifications","text":"

Initial configuration: Notifies on new_devices, down_devices, and events as defined in NTFPRCS_INCLUDED_SECTIONS.

Note

Notification settings support global, plugin-specific, and per-device rules. For fine-tuning, refer to the Notification Guide.

"},{"location":"INITIAL_SETUP/#5-set-up-workflows","title":"5. Set Up Workflows","text":"

Initial configuration: N/A

Note

Automate responses to device status changes, group management, topology updates, and more. See the Workflows Guide to simplify your network operations.

"},{"location":"INITIAL_SETUP/#6-backup-your-configuration","title":"6. Backup Your Configuration","text":"

Initial configuration: The CSVBCKP plugin creates a daily backup to /config/devices.csv.

Note

For a complete backup strategy, follow the Backup Guide.

"},{"location":"INITIAL_SETUP/#7-optional-create-custom-plugins","title":"7. (Optional) Create Custom Plugins","text":"

Initial configuration: N/A

Note

Build your own scanner, importer, or publisher plugin. See the Plugin Development Guide and included video tutorials.

"},{"location":"INITIAL_SETUP/#recommended-guides","title":"\ud83d\udcc1 Recommended Guides","text":""},{"location":"INITIAL_SETUP/#troubleshooting-help","title":"\ud83d\udee0\ufe0f Troubleshooting & Help","text":"

Before opening a new issue:


"},{"location":"INSTALLATION/","title":"Installation","text":""},{"location":"INSTALLATION/#installation-options","title":"Installation options","text":"

NetAlertX can be installed several ways. The best supported option is Docker, followed by a supervised Home Assistant instance, as an Unraid app, and lastly, on bare metal.

"},{"location":"INSTALLATION/#help","title":"Help","text":"

If facing issues, please spend a few minutes searching.

Note

If you can't find a solution anywhere, ask in Discord if you think it's a quick question, otherwise open a new issue. Please fill in as much as possible to speed up the help process.

"},{"location":"LOGGING/","title":"Logging","text":"

NetAlertX comes with several logs that help to identify application issues.

For plugin-specific log debugging, please read the Debug Plugins guide.

When debugging any issue, increase the LOG_LEVEL Setting as per the Debug tips documentation.

"},{"location":"LOGGING/#main-logs","title":"Main logs","text":"

You can find most of the logs exposed in the UI under Maintenance -> Logs.

If the UI is inaccessible, you can access them under /app/log.
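
If you only have terminal access, you can list and view the log files from the host (a minimal sketch, assuming the default container name):

docker exec -it netalertx ls -l /app/log\ndocker exec -it netalertx tail -n 100 /app/log/app.log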

In the Maintenance -> Logs section you can Purge logs, download the full log file, or Filter the lines with some substring to narrow down your search.

"},{"location":"LOGGING/#plugin-logging","title":"Plugin logging","text":"

If a plugin supplies data to the main app, it does so either via a SQL query or via a script that updates the last_result.log file in the plugin log folder (app/log/plugins/). These files are processed at the end of the scan and deleted on successful processing.

In most cases, the data is then displayed in the application under Integrations -> Plugins (or Device -> Plugins if the plugin supplies device-specific data).

"},{"location":"MIGRATION/","title":"Migration form PiAlert to NetAlertX","text":"

Warning

Follow this guide only after you have downloaded and started a version of NetAlertX prior to v25.6.7 (e.g. docker pull ghcr.io/jokob-sk/netalertx:25.5.24) at least once after previously using the PiAlert image. Later versions don't support migration, and devices and settings will have to be migrated manually, e.g. via CSV import.

"},{"location":"MIGRATION/#steps","title":"STEPS:","text":"

Tip

In short: The application will auto-migrate the database, config, and all device information. A ticker message on top will be displayed until you update your docker mount points. It's always good to have a backup strategy in place.

  1. Back up your current config and database (optionally also export devices.csv as a backup) (see the tip below if facing issues)
  2. Stop the container
  3. Update the Docker file mount locations in your docker-compose.yml or docker run command (see New Docker mount locations below).
  4. Rename the DB and conf files to app.db and app.conf and place them in the appropriate location.
  5. Start the container

Tip

If you have trouble accessing past backups, config or database files, you can copy them into the newly mapped directories, for example by running this command in the container: cp -r /app/config /home/pi/pialert/config/old_backup_files. This should create a folder in the config directory called old_backup_files containing all the files in that location. Another approach is to map the old location and the new one at the same time to copy things over.

"},{"location":"MIGRATION/#new-docker-mount-locations","title":"New Docker mount locations","text":"

The application installation folder in the docker container has changed from /home/pi/pialert to /app. That means the new mount points are:

Old mount point New mount point /home/pi/pialert/config /app/config /home/pi/pialert/db /app/db

If you were mounting files directly, please note the file names have changed:

Old file name New file name pialert.conf app.conf pialert.db app.db

Note

The application uses symlinks linking the old db and config locations to the new ones, so data loss should not occur. Backup strategies are still recommended to back up your setup.

"},{"location":"MIGRATION/#examples","title":"Examples","text":"

Examples of docker files with the new mount points.

"},{"location":"MIGRATION/#example-1-mapping-folders","title":"Example 1: Mapping folders","text":""},{"location":"MIGRATION/#old-docker-composeyml","title":"Old docker-compose.yml","text":"
services:\n  pialert:\n    container_name: pialert\n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\" \n    image: \"jokobsk/pialert:latest\"      \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - local/path/config:/home/pi/pialert/config  \n      - local/path/db:/home/pi/pialert/db         \n      # (optional) useful for debugging if you have issues setting up the container\n      - local/path/logs:/home/pi/pialert/front/log\n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n
"},{"location":"MIGRATION/#new-docker-composeyml","title":"New docker-compose.yml","text":"
services:\n  netalertx:                                  # \u26a0  This has changed (\ud83d\udfe1optional) \n    container_name: netalertx                 # \u26a0  This has changed (\ud83d\udfe1optional) \n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\" \n    image: \"ghcr.io/jokob-sk/netalertx:latest\"         # \u26a0  This has changed (\ud83d\udfe1optional/\ud83d\udd3arequired in future) \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - local/path/config:/app/config         # \u26a0  This has changed (\ud83d\udd3arequired) \n      - local/path/db:/app/db                 # \u26a0  This has changed (\ud83d\udd3arequired) \n      # (optional) useful for debugging if you have issues setting up the container\n      - local/path/logs:/app/log        # \u26a0  This has changed (\ud83d\udfe1optional) \n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n
"},{"location":"MIGRATION/#example-2-mapping-files","title":"Example 2: Mapping files","text":"

Note

The recommendation is to map folders as in Example 1; map files directly only when needed.

"},{"location":"MIGRATION/#old-docker-composeyml_1","title":"Old docker-compose.yml","text":"
services:\n  pialert:\n    container_name: pialert\n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\" \n    image: \"jokobsk/pialert:latest\"      \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - local/path/config/pialert.conf:/home/pi/pialert/config/pialert.conf  \n      - local/path/db/pialert.db:/home/pi/pialert/db/pialert.db         \n      # (optional) useful for debugging if you have issues setting up the container\n      - local/path/logs:/home/pi/pialert/front/log\n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n
"},{"location":"MIGRATION/#new-docker-composeyml_1","title":"New docker-compose.yml","text":"
services:\n  netalertx:                                  # \u26a0  This has changed (\ud83d\udfe1optional) \n    container_name: netalertx                 # \u26a0  This has changed (\ud83d\udfe1optional) \n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\" \n    image: \"ghcr.io/jokob-sk/netalertx:latest\"         # \u26a0  This has changed (\ud83d\udfe1optional/\ud83d\udd3arequired in future) \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - local/path/config/app.conf:/app/config/app.conf # \u26a0  This has changed (\ud83d\udd3arequired) \n      - local/path/db/app.db:/app/db/app.db             # \u26a0  This has changed (\ud83d\udd3arequired) \n      # (optional) useful for debugging if you have issues setting up the container\n      - local/path/logs:/app/log                  # \u26a0  This has changed (\ud83d\udfe1optional) \n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n
"},{"location":"NAME_RESOLUTION/","title":"Device Name Resolution","text":"

Name resolution in NetAlertX relies on multiple plugins to resolve device names from IP addresses. If you are seeing (name not found) as device names, follow these steps to diagnose and fix the issue.

Tip

Before proceeding, make sure Reverse DNS is enabled on your network. You can control how names are handled and cleaned using the NEWDEV_NAME_CLEANUP_REGEX setting. To auto-update Fully Qualified Domain Names (FQDN), enable the REFRESH_FQDN setting.
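To quickly check whether Reverse DNS answers for one of your devices, you can run a small lookup from the host or container; the IP below is an assumption, replace it with a device on your network:

```python
# Illustrative reverse-DNS (PTR) check - replace the IP with one of your devices.
import socket

ip = "192.168.1.10"
try:
    name, _aliases, _addresses = socket.gethostbyaddr(ip)
    print(f"{ip} resolves to {name}")
except OSError:
    print(f"No reverse DNS (PTR) record for {ip} - expect (name not found) in NetAlertX")
```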

"},{"location":"NAME_RESOLUTION/#required-plugins","title":"Required Plugins","text":"

For best results, ensure the following name resolution plugins are enabled:

You can check which plugins are active in your Settings section and enable any that are missing.

There are other plugins that can supply device names as well, but they rely on bespoke hardware and services. See Plugins overview for details and look for plugins with name discovery (\ud83c\udd8e) features.

"},{"location":"NAME_RESOLUTION/#checking-logs","title":"Checking Logs","text":"

If names are not resolving, check the logs for errors or timeouts.

See how to explore logs in the Logging guide.

Logs will show which plugins attempted resolution and any failures encountered.

"},{"location":"NAME_RESOLUTION/#adjusting-timeout-settings","title":"Adjusting Timeout Settings","text":"

If resolution is slow or failing due to timeouts, increase the timeout settings in your configuration, for example:

NSLOOKUP_RUN_TIMEOUT = 30\n

Raising the timeout may help if your network has high latency or slow DNS responses.
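As an illustration of why the timeout matters, the sketch below bounds a single lookup the way a *_RUN_TIMEOUT-style setting does; the exact mechanism inside each plugin may differ:

```python
# Illustrative only: a slow DNS server can make the lookup exceed the timeout.
# Assumes the nslookup binary is available on the host/container.
import subprocess

NSLOOKUP_RUN_TIMEOUT = 30  # seconds, mirroring the setting above

try:
    result = subprocess.run(
        ["nslookup", "192.168.1.10"],
        capture_output=True, text=True, timeout=NSLOOKUP_RUN_TIMEOUT,
    )
    print(result.stdout)
except subprocess.TimeoutExpired:
    print("Lookup timed out - consider raising NSLOOKUP_RUN_TIMEOUT")
```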

"},{"location":"NAME_RESOLUTION/#checking-plugin-objects","title":"Checking Plugin Objects","text":"

Each plugin stores results in its respective object. You can inspect these objects to see if they contain valid name resolution data.

See Logging guide and Debug plugins guides for details.

If the object contains no results, the issue may be with DNS settings or network access.

"},{"location":"NAME_RESOLUTION/#improving-name-resolution","title":"Improving name resolution","text":"

For more details on how to improve name resolution, refer to the Reverse DNS Documentation.

"},{"location":"NETWORK_TREE/","title":"Network Topology","text":""},{"location":"NETWORK_TREE/#how-to-set-up-your-network-page","title":"How to Set Up Your Network Page","text":"

The Network page lets you map how devices connect \u2014 visually and logically. It\u2019s especially useful for planning infrastructure, assigning parent-child relationships, and spotting gaps.

To get started, you\u2019ll need to define at least one root node and mark certain devices as network nodes (like Switches or Routers).

Start by creating a root device with the MAC address Internet, if the application didn\u2019t create one already. This special MAC address (Internet) is required for the root network node \u2014 no other value is currently supported. Set its Type to a valid network type \u2014 such as Router or Gateway.

Tip

If you don\u2019t have one, use the Create new device button on the Devices page to add a root device.

"},{"location":"NETWORK_TREE/#quick-setup","title":"\u26a1 Quick Setup","text":"
  1. Open the device you want to use as a network node (e.g. a Switch).
  2. Set its Type to one of the following: AP, Firewall, Gateway, PLC, Powerline, Router, Switch, USB LAN Adapter, USB WIFI Adapter, WLAN (Or add custom types under Settings \u2192 General \u2192 NETWORK_DEVICE_TYPES.)
  3. Save the device.
  4. Go to the Network page \u2014 supported device types will appear as tabs.
  5. Use the Assign button to connect unassigned devices to a network node.
  6. If the Port is 0 or empty, a Wi-Fi icon is shown. Otherwise, an Ethernet icon appears.

Note

Use bulk editing with CSV Export to fix Internet root assignments or update many devices at once.

"},{"location":"NETWORK_TREE/#example-setting-up-a-raspberrypi-as-a-switch","title":"Example: Setting up a raspberrypi as a Switch","text":"

Let\u2019s walk through setting up a device named raspberrypi to act as a network Switch that other devices connect through.

"},{"location":"NETWORK_TREE/#1-set-device-type-and-parent","title":"1. Set Device Type and Parent","text":"

Note

Only certain device types can act as network nodes: AP, Firewall, Gateway, Hypervisor, PLC, Powerline, Router, Switch, USB LAN Adapter, USB WIFI Adapter, WLAN You can add custom types via the NETWORK_DEVICE_TYPES setting.

"},{"location":"NETWORK_TREE/#2-confirm-the-device-appears-as-a-network-node","title":"2. Confirm The Device Appears as a Network Node","text":"

You can confirm that raspberrypi now acts as a network device in two places:

"},{"location":"NETWORK_TREE/#3-assign-connected-devices","title":"3. Assign Connected Devices","text":"

Hovering over devices in the tree reveals connection details and tooltips for quick inspection.

"},{"location":"NETWORK_TREE/#summary","title":"\u2705 Summary","text":"

To configure devices on the Network page:

Need to reset or undo changes? Use backups or bulk editing to manage devices at scale. You can also automate device assignment with Workflows.

"},{"location":"NOTIFICATIONS/","title":"Notifications \ud83d\udce7","text":"

There are 4 ways to influence notifications:

  1. On the device itself
  2. On the settings of the plugin
  3. Globally
  4. Ignoring devices

Note

It's recommended to use the same schedule interval for all plugins responsible for scanning devices; otherwise, false positives might be reported if different devices are discovered by different plugins. Check the Settings > Enabled settings section for a warning.

"},{"location":"NOTIFICATIONS/#device-settings","title":"Device settings \ud83d\udcbb","text":"

The following device properties influence notifications:

  1. Alert Events - Enables alerts for connections, disconnections, and IP changes (down and down reconnected notifications are still sent even if this is disabled).
  2. Alert Down - Alerts when a device goes down. This setting overrides a disabled Alert Events setting, so you will get a notification when a device goes down even if you don't have Alert Events ticked. Disabling this will disable down and down reconnected notifications for the device.
  3. Skip repeated notifications - Useful if, for example, you know there is a temporary issue and want to pause the same notification for this device for a given time.
  4. Require NICs Online - Indicates whether this device should be considered online only if all associated NICs (devices with the nic relationship type) are online. If disabled, the device is considered online if any NIC is online. An online NIC sets the parent (this) device's status to online irrespective of the detected device's status. The Relationship type is set on the child device. (See the sketch below.)
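A minimal sketch of the Require NICs Online logic (illustrative only, not the actual implementation):

```python
# Illustrative only: how "Require NICs Online" changes the parent device status.
def parent_is_online(nic_statuses, require_all_nics=True):
    """nic_statuses: one boolean per child device with the 'nic' relationship type."""
    if not nic_statuses:
        return False
    return all(nic_statuses) if require_all_nics else any(nic_statuses)

print(parent_is_online([True, False], require_all_nics=True))   # False - one NIC is down
print(parent_is_online([True, False], require_all_nics=False))  # True  - any online NIC is enough
```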

Note

Please read through the NTFPRCS plugin documentation to understand how device and global settings influence the notification processing.

"},{"location":"NOTIFICATIONS/#plugin-settings","title":"Plugin settings \ud83d\udd0c","text":"

On almost all plugins there are 2 core settings, <plugin>_WATCH and <plugin>_REPORT_ON.

  1. <plugin>_WATCH specifies the columns which the app should watch. If watched columns change, the device state is considered changed. This changed status is then used to decide whether to send out notifications, based on the <plugin>_REPORT_ON setting.
  2. <plugin>_REPORT_ON lets you specify on which events the app should notify you. This is related to the <plugin>_WATCH setting. So if you select watched-changed and in <plugin>_WATCH you only select Watched_Value1, then a notification is triggered if Watched_Value1 changed from the previous value, but no notification is sent if Watched_Value2 changes (see the sketch below).
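A rough sketch of that decision (illustrative only; the real processing happens in the app and the NTFPRCS plugin):

```python
# Illustrative only: notify when a watched column changed and the event type is reported on.
WATCH = ["Watched_Value1"]                 # <plugin>_WATCH
REPORT_ON = ["new", "watched-changed"]     # <plugin>_REPORT_ON

def should_notify(previous, current, is_new):
    if is_new and "new" in REPORT_ON:
        return True
    changed = any(previous.get(col) != current.get(col) for col in WATCH)
    return changed and "watched-changed" in REPORT_ON

prev = {"Watched_Value1": "192.168.1.5", "Watched_Value2": "open"}
curr = {"Watched_Value1": "192.168.1.5", "Watched_Value2": "closed"}
print(should_notify(prev, curr, is_new=False))  # False - only the unwatched Watched_Value2 changed
```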

Click the Read more in the docs link at the top of each plugin's settings to get more details on how the given plugin works.

"},{"location":"NOTIFICATIONS/#global-settings","title":"Global settings \u2699","text":"

In Notification Processing settings, you can specify blanket rules. These allow you to specify exceptions to the Plugin and Device settings and will override those.

  1. Notify on (NTFPRCS_INCLUDED_SECTIONS) allows you to specify which events trigger notifications. Usual setups will have new_devices, down_devices, and possibly down_reconnected set. Including plugin (dependent on the Plugin <plugin>_WATCH and <plugin>_REPORT_ON settings) and events (dependent on the on-device Alert Events setting) might be too noisy for most setups. More info in the NTFPRCS plugin on what events these selections include.
  2. Alert down after (NTFPRCS_alert_down_time) is useful if you want to wait for some time before the system sends out a down notification for a device. This is related to the on-device Alert down setting and only devices with this checked will trigger a down notification.
  3. A filter to allow you to set device-specific exceptions to New devices being added to the app.
  4. A filter to allow you to set device-specific exceptions to generated Events.
"},{"location":"NOTIFICATIONS/#ignoring-devices","title":"Ignoring devices \ud83d\udd15","text":"

You can completely ignore detected devices globally. This could be because your instance detects docker containers, you want to ignore devices from a specific manufacturer via MAC rules or you want to ignore devices on a specific IP range.

  1. Ignored MACs (NEWDEV_ignored_MACs) - List of MACs to ignore.
  2. Ignored IPs (NEWDEV_ignored_IPs) - List of IPs to ignore.
"},{"location":"PERFORMANCE/","title":"Performance Optimization Guide","text":"

There are several ways to improve the application's performance. The application has been tested on a range of devices, from a Raspberry Pi 4 to NAS and NUC systems. If you are running the application on a lower-end device, carefully fine-tune the performance settings to ensure an optimal user experience.

"},{"location":"PERFORMANCE/#common-causes-of-slowness","title":"Common Causes of Slowness","text":"

Performance issues are usually caused by:

The application performs regular maintenance and database cleanup. If these tasks fail, performance may degrade.

"},{"location":"PERFORMANCE/#database-and-log-file-size","title":"Database and Log File Size","text":"

A large database or oversized log files can slow down performance. You can check database and table sizes on the Maintenance page.

Note

"},{"location":"PERFORMANCE/#maintenance-plugins","title":"Maintenance Plugins","text":"

Two plugins help maintain the application\u2019s performance:

"},{"location":"PERFORMANCE/#1-database-cleanup-dbclnp","title":"1. Database Cleanup (DBCLNP)","text":""},{"location":"PERFORMANCE/#2-maintenance-maint","title":"2. Maintenance (MAINT)","text":""},{"location":"PERFORMANCE/#scan-frequency-and-coverage","title":"Scan Frequency and Coverage","text":"

Frequent scans increase resource usage, network traffic, and database read/write cycles.

"},{"location":"PERFORMANCE/#optimizations","title":"Optimizations","text":"

Some plugins have additional options to limit the number of scanned devices. If certain plugins take too long to complete, check if you can optimize scan times by selecting a scan range.

For example, the ICMP plugin allows you to specify a regular expression to scan only IPs that match a specific pattern.
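As a rough illustration of how such a pattern narrows the scan (the pattern below is an assumption, not a shipped default):

```python
# Illustrative only: this pattern limits matching to 192.168.1.0-99.
import re

pattern = re.compile(r"^192\.168\.1\.[0-9]{1,2}$")

for ip in ["192.168.1.5", "192.168.1.150", "192.168.2.10"]:
    print(ip, bool(pattern.match(ip)))
# -> 192.168.1.5 True, 192.168.1.150 False, 192.168.2.10 False
```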

"},{"location":"PERFORMANCE/#storing-temporary-files-in-memory","title":"Storing Temporary Files in Memory","text":"

On systems with slower I/O speeds, you can optimize performance by storing temporary files in memory. This primarily applies to the /app/api and /app/log folders.

Using tmpfs reduces disk writes and improves performance. However, it should be disabled if persistent logs or API data storage are required.

Below is an optimized docker-compose.yml snippet:

version: \"3\"\nservices:\n  netalertx:\n    container_name: netalertx\n    # Uncomment the line below to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\"\n    image: \"ghcr.io/jokob-sk/netalertx:latest\"      \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - local/path/config:/app/config\n      - local/path/db:/app/db      \n      # (Optional) Useful for debugging setup issues\n      - local/path/logs:/app/log\n      # (API: OPTION 1) Store temporary files in memory (recommended for performance)\n      - type: tmpfs              # \u25c0 \ud83d\udd3a\n        target: /app/api         # \u25c0 \ud83d\udd3a\n      # (API: OPTION 2) Store API data on disk (useful for debugging)\n      # - local/path/api:/app/api\n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n\n
"},{"location":"PIHOLE_GUIDE/","title":"Integration with PiHole","text":"

NetAlertX comes with 2 plugins suitable for integrating with your existing PiHole instance. One plugin uses a direct SQLite DB connection, the other leverages the dhcp.leases file generated by PiHole. You can combine both approaches and also supplement them with other plugins.

"},{"location":"PIHOLE_GUIDE/#approach-1-dhcplss-plugin-import-devices-from-the-pihole-dhcp-leases-file","title":"Approach 1: DHCPLSS Plugin - Import devices from the PiHole DHCP leases file","text":""},{"location":"PIHOLE_GUIDE/#settings","title":"Settings","text":"Setting Description Recommended value DHCPLSS_RUN When the plugin should run. schedule DHCPLSS_RUN_SCHD If you run multiple device scanner plugins, align the schedules of all plugins to the same value. */5 * * * * DHCPLSS_paths_to_check You need to map the value in this setting in the docker-compose.yml file. The in-container path must contain pihole so it's parsed correctly. ['/etc/pihole/dhcp.leases']

Check the DHCPLSS plugin readme for details

"},{"location":"PIHOLE_GUIDE/#docker-compose-changes","title":"docker-compose changes","text":"Path Description :/etc/pihole/dhcp.leases PiHole's dhcp.leases file. Required if you want to use PiHole dhcp.leases file. This has to be matched with a corresponding DHCPLSS_paths_to_check setting entry (the path in the container must contain pihole)"},{"location":"PIHOLE_GUIDE/#approach-2-pihole-plugin-import-devices-directly-from-the-pihole-database","title":"Approach 2: PIHOLE Plugin - Import devices directly from the PiHole database","text":"Setting Description Recommended value PIHOLE_RUN When the plugin should run. schedule PIHOLE_RUN_SCHD If you run multiple device scanner plugins, align the schedules of all plugins to the same value. */5 * * * * PIHOLE_DB_PATH You need to map the value in this setting in the docker-compose.yml file. /etc/pihole/pihole-FTL.db

Check the PiHole plugin readme for details

"},{"location":"PIHOLE_GUIDE/#docker-compose-changes_1","title":"docker-compose changes","text":"Path Description :/etc/pihole/pihole-FTL.db PiHole's pihole-FTL.db database file.

Check out other plugins that can help you discover more about your network or check how to scan Remote networks.

"},{"location":"PLUGINS/","title":"\ud83d\udd0c Plugins","text":"

NetAlertX supports additional plugins to extend its functionality, each with its own settings and options. Plugins can be loaded via the General -> LOADED_PLUGINS setting. For custom plugin development, refer to the Plugin development guide.

Note

Please check this Plugins debugging guide and the corresponding Plugin documentation in the below table if you are facing issues.

"},{"location":"PLUGINS/#quick-start","title":"\u26a1 Quick start","text":"

Tip

You can load additional Plugins via the General -> LOADED_PLUGINS setting. You need to save the settings for the new plugins to load (cache/page reload may be necessary).

  1. Pick your \ud83d\udd0d dev scanner plugin (e.g. ARPSCAN or NMAPDEV), or import devices into the application with an \ud83d\udce5 importer plugin. (See Enabling plugins below)
  2. Pick a \u25b6\ufe0f publisher plugin, if you want to send notifications. If you don't see a publisher you'd like to use, look at the \ud83d\udcda_publisher_apprise plugin which is a proxy for over 80 notification services.
  3. Setup your Network topology diagram
  4. Fine-tune Notifications
  5. Setup Workflows
  6. Backup your setup
  7. Contribute and Create custom plugins
"},{"location":"PLUGINS/#plugin-types","title":"Plugin types","text":"Plugin type Icon Description When to run Required Data source ? publisher \u25b6\ufe0f Sending notifications to services. on_notification \u2716 Script dev scanner \ud83d\udd0d Create devices in the app, manages online/offline device status. schedule \u2716 Script / SQLite DB name discovery \ud83c\udd8e Discovers names of devices via various protocols. before_name_updates, schedule \u2716 Script importer \ud83d\udce5 Importing devices from another service. schedule \u2716 Script / SQLite DB system \u2699 Providing core system functionality. schedule / always on \u2716/\u2714 Script / Template other \u267b Other plugins misc \u2716 Script / Template"},{"location":"PLUGINS/#features","title":"Features","text":"Icon Description \ud83d\udda7 Auto-imports the network topology diagram \ud83d\udd04 Has the option to sync some data back into the plugin source"},{"location":"PLUGINS/#available-plugins","title":"Available Plugins","text":"

Device-detecting plugins insert values into the CurrentScan database table. The plugins that are not required are safe to ignore; however, it makes sense to have at least some device-detecting plugins enabled, such as ARPSCAN or NMAPDEV.

ID Plugin docs Type Description Features Required APPRISE _publisher_apprise \u25b6\ufe0f Apprise notification proxy ARPSCAN arp_scan \ud83d\udd0d ARP-scan on current network AVAHISCAN avahi_scan \ud83c\udd8e Avahi (mDNS-based) name resolution ASUSWRT asuswrt_import \ud83d\udd0d Import connected devices from AsusWRT CSVBCKP csv_backup \u2699 CSV devices backup CUSTPROP custom_props \u2699 Managing custom device properties values Yes DBCLNP db_cleanup \u2699 Database cleanup Yes* DDNS ddns_update \u2699 DDNS update DHCPLSS dhcp_leases \ud83d\udd0d/\ud83d\udce5/\ud83c\udd8e Import devices from DHCP leases DHCPSRVS dhcp_servers \u267b DHCP servers DIGSCAN dig_scan \ud83c\udd8e Dig (DNS) Name resolution FREEBOX freebox \ud83d\udd0d/\u267b/\ud83c\udd8e Pull data and names from Freebox/Iliadbox ICMP icmp_scan \u267b ICMP (ping) status checker INTRNT internet_ip \ud83d\udd0d Internet IP scanner INTRSPD internet_speedtest \u267b Internet speed test IPNEIGH ipneigh \ud83d\udd0d Scan ARP (IPv4) and NDP (IPv6) tables LUCIRPC luci_import \ud83d\udd0d Import connected devices from OpenWRT MAINT maintenance \u2699 Maintenance of logs, etc. MQTT _publisher_mqtt \u25b6\ufe0f MQTT for synching to Home Assistant NBTSCAN nbtscan_scan \ud83c\udd8e Nbtscan (NetBIOS-based) name resolution NEWDEV newdev_template \u2699 New device template Yes NMAP nmap_scan \u267b Nmap port scanning & discovery NMAPDEV nmap_dev_scan \ud83d\udd0d Nmap dev scan on current network NSLOOKUP nslookup_scan \ud83c\udd8e NSLookup (DNS-based) name resolution NTFPRCS notification_processing \u2699 Notification processing Yes NTFY _publisher_ntfy \u25b6\ufe0f NTFY notifications OMDSDN omada_sdn_imp \ud83d\udce5/\ud83c\udd8e \u274c UNMAINTAINED use OMDSDNOPENAPI \ud83d\udda7 \ud83d\udd04 OMDSDNOPENAPI omada_sdn_openapi \ud83d\udce5/\ud83c\udd8e OMADA TP-Link import via OpenAPI \ud83d\udda7 PIHOLE pihole_scan \ud83d\udd0d/\ud83c\udd8e/\ud83d\udce5 Pi-hole device import & sync PUSHSAFER _publisher_pushsafer \u25b6\ufe0f Pushsafer notifications PUSHOVER _publisher_pushover \u25b6\ufe0f Pushover notifications SETPWD set_password \u2699 Set password Yes SMTP _publisher_email \u25b6\ufe0f Email notifications SNMPDSC snmp_discovery \ud83d\udd0d/\ud83d\udce5 SNMP device import & sync SYNC sync \ud83d\udd0d/\u2699/\ud83d\udce5 Sync & import from NetAlertX instances \ud83d\udda7 \ud83d\udd04 Yes TELEGRAM _publisher_telegram \u25b6\ufe0f Telegram notifications UI ui_settings \u267b UI specific settings Yes UNFIMP unifi_import \ud83d\udd0d/\ud83d\udce5/\ud83c\udd8e UniFi device import & sync \ud83d\udda7 UNIFIAPI unifi_api_import \ud83d\udd0d/\ud83d\udce5/\ud83c\udd8e UniFi device import (SM API, multi-site) VNDRPDT vendor_update \u2699 Vendor database update WEBHOOK _publisher_webhook \u25b6\ufe0f Webhook notifications WEBMON website_monitor \u267b Website down monitoring WOL wake_on_lan \u267b Automatic wake-on-lan

* The database cleanup plugin (DBCLNP) is not required, but the app will become unusable after a while if it is not executed.
\u274c Marked for removal/unmaintained - looking for help.
\u231a It's recommended to use the same schedule interval for all plugins responsible for discovering new devices.

"},{"location":"PLUGINS/#enabling-plugins","title":"Enabling plugins","text":"

Plugins can be enabled via Settings, and can be disabled as needed.

  1. Research which plugin you'd like to use, enable DISCOVER_PLUGINS and load the required plugins in Settings via the LOADED_PLUGINS setting.
  2. Save the changes and review the Settings of the newly loaded plugins.
  3. Change the <prefix>_RUN Setting to the recommended or custom value as per the documentation of the given setting
"},{"location":"PLUGINS/#disabling-unloading-and-ignoring-plugins","title":"Disabling, Unloading and Ignoring plugins","text":"
  1. Change the <prefix>_RUN Setting to disabled if you want to disable the plugin, but keep the settings
  2. If you want to speed up the application, you can unload the plugin by unselecting it in the LOADED_PLUGINS setting.
  3. You can completely ignore plugins by placing an ignore_plugin file into the plugin directory. Ignored plugins won't show up in the LOADED_PLUGINS setting.
"},{"location":"PLUGINS/#developing-new-custom-plugins","title":"\ud83c\udd95 Developing new custom plugins","text":"

If you want to develop a custom plugin, please read this Plugin development guide.

"},{"location":"PLUGINS_DEV/","title":"Creating a custom plugin","text":"

NetAlertX comes with a plugin system to feed events from third-party scripts into the UI and then send notifications, if desired. The highlighted core functionality this plugin system supports is:

(Currently, update/overwriting of existing objects is only supported for devices via the CurrentScan table.)

"},{"location":"PLUGINS_DEV/#watch-the-video","title":"\ud83c\udfa5 Watch the video:","text":"

Tip

Read this guide Development environment setup guide to set up your local environment for development. \ud83d\udc69\u200d\ud83d\udcbb

"},{"location":"PLUGINS_DEV/#screenshots","title":"\ud83d\udcf8 Screenshots","text":""},{"location":"PLUGINS_DEV/#use-cases","title":"Use cases","text":"

Example use cases for plugins could be:

If you wish to develop a plugin, please check the existing plugin structure. Once the settings are saved by the user they need to be removed from the app.conf file manually if you want to re-initialize them from the config.json of the plugin.

"},{"location":"PLUGINS_DEV/#disclaimer","title":"\u26a0 Disclaimer","text":"

Please read the below carefully if you'd like to contribute with a plugin yourself. This documentation file might be outdated, so double-check the sample plugins as well.

"},{"location":"PLUGINS_DEV/#plugin-file-structure-overview","title":"Plugin file structure overview","text":"

\u26a0\ufe0f The folder name must be the same as the code name value in: \"code_name\": \"<value>\". The unique prefix needs to be unique compared to the other settings prefixes, e.g. the prefix APPRISE is already in use.

| File | Required (plugin type) | Description |
| --- | --- | --- |
| config.json | yes | Contains the plugin configuration (manifest) including the settings available to the user. |
| script.py | no | The Python script itself. You may call any valid linux command. |
| last_result.<prefix>.log | no | The file used to interface between NetAlertX and the plugin. Required for a script plugin if you want to feed data into the app. Stored in /api/log/plugins/. |
| script.log | no | Logging output (recommended) |
| README.md | yes | Any setup considerations or overview |

More on specifics below.

"},{"location":"PLUGINS_DEV/#column-order-and-values-plugins-interface-contract","title":"Column order and values (plugins interface contract)","text":"

Important

Spend some time reading and trying to understand the below table. This is the interface between the Plugins and the core application. The application expects 9 or 13 values. The first 9 values are mandatory. The next 4 values (HelpVal1 to HelpVal4) are optional. However, if you use any of these optional values (e.g., HelpVal1), you need to supply all optional values (e.g., HelpVal2, HelpVal3, and HelpVal4). If a value is not used, it should be padded with null.

| Order | Represented Column Value | Required | Description |
| --- | --- | --- | --- |
| 0 | Object_PrimaryID | yes | The primary ID used to group Events under. |
| 1 | Object_SecondaryID | no | Optional secondary ID to create a relationship between other entities, such as a MAC address |
| 2 | DateTime | yes | When the event occurred, in the format 2023-01-02 15:56:30 |
| 3 | Watched_Value1 | yes | A value that is watched and users can receive notifications if it changed compared to the previously saved entry. For example IP address |
| 4 | Watched_Value2 | no | As above |
| 5 | Watched_Value3 | no | As above |
| 6 | Watched_Value4 | no | As above |
| 7 | Extra | no | Any other data you want to pass and display in NetAlertX and the notifications |
| 8 | ForeignKey | no | A foreign key that can be used to link to the parent object (usually a MAC address) |
| 9 | HelpVal1 | no (optional) | A helper value |
| 10 | HelpVal2 | no (optional) | A helper value |
| 11 | HelpVal3 | no (optional) | A helper value |
| 12 | HelpVal4 | no (optional) | A helper value |

Note

De-duplication is run once an hour on the Plugins_Objects database table and duplicate entries with the same value in columns Object_PrimaryID, Object_SecondaryID, Plugin (auto-filled based on unique_prefix of the plugin), UserData (can be populated with the \"type\": \"textbox_save\" column type) are removed.

"},{"location":"PLUGINS_DEV/#configjson-structure","title":"config.json structure","text":"

The config.json file is the manifest of the plugin. It contains mainly settings definitions and the mapping of Plugin objects to NetAlertX objects.

"},{"location":"PLUGINS_DEV/#execution-order","title":"Execution order","text":"

The execution order is used to specify when a plugin is executed. This is useful if a plugin has access to and surfaces more information than others. If a device is detected by 2 plugins and inserted into the CurrentScan table, the plugin with the higher priority (e.g.: Level_0 is a higher priority than Level_1) will insert its values first. These values (devices) will then be prioritized over any values inserted later.

{\n    \"execution_order\" : \"Layer_0\"\n}\n
"},{"location":"PLUGINS_DEV/#supported-data-sources","title":"Supported data sources","text":"

Currently, these data sources are supported (valid data_source value).

| Name | data_source value | Needs to return a \"table\"* | Overview (more details on this page below) |
| --- | --- | --- | --- |
| Script | script | no | Executes any linux command in the CMD setting. |
| NetAlertX DB query | app-db-query | yes | Executes a SQL query on the NetAlertX database in the CMD setting. |
| Template | template | no | Used to generate internal settings, such as default values. |
| External SQLite DB query | sqlite-db-query | yes | Executes a SQL query from the CMD setting on an external SQLite database mapped in the DB_PATH setting. |
| Plugin type | plugin_type | no | Specifies the type of the plugin and in which section the Plugin settings are displayed (one of general/system/scanner/other/publisher). |

\ud83d\udd0e Example: json \"data_source\": \"app-db-query\"

If you want to display plugin objects or import devices into the app, data sources have to return a \"table\" of the exact structure as outlined above.

You can show or hide the UI on the \"Plugins\" page and \"Plugins\" tab for a plugin on devices via the show_ui property:

\ud83d\udd0eExample json \"show_ui\": true,

"},{"location":"PLUGINS_DEV/#data_source-script","title":"\"data_source\": \"script\"","text":"

If the data_source is set to script the CMD setting (that you specify in the settings array section in the config.json) contains an executable Linux command, that usually generates a last_result.<prefix>.log file (not required if you don't import any data into the app). The last_result.<prefix>.log file needs to be saved in /api/log/plugins.

Important

A lot of the work is taken care of by the plugin_helper.py library. You don't need to manage the last_result.<prefix>.log file if using the helper objects. Check the script.py files of other plugins for details.

The content of the last_result.<prefix>.log file needs to contain the columns as defined in the \"Column order and values\" section above. The order of columns can't be changed. After every scan it should contain only the results from the latest scan/execution.
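A minimal sketch of producing such a file by hand (illustrative only; the plugin_helper.py objects normally handle this for you, and the MYPLUGIN prefix and values are assumptions):

```python
# Illustrative only: write one last_result.<prefix>.log entry following the 9-column contract.
from datetime import datetime

PREFIX = "MYPLUGIN"  # assumed unique_prefix
row = [
    "https://www.example.com",                     # 0 Object_PrimaryID (required)
    "null",                                        # 1 Object_SecondaryID
    datetime.now().strftime("%Y-%m-%d %H:%M:%S"),  # 2 DateTime (required)
    "200",                                         # 3 Watched_Value1 (required)
    "null",                                        # 4 Watched_Value2
    "null",                                        # 5 Watched_Value3
    "null",                                        # 6 Watched_Value4
    "null",                                        # 7 Extra
    "null",                                        # 8 ForeignKey
]

# Only the results of the latest scan belong in this file.
with open(f"/api/log/plugins/last_result.{PREFIX}.log", "w") as f:
    f.write("|".join(row) + "\n")
```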

"},{"location":"PLUGINS_DEV/#last_resultprefixlog-examples","title":"\ud83d\udd0e last_result.prefix.log examples","text":"

Valid CSV:

\nhttps://www.google.com|null|2023-01-02 15:56:30|200|0.7898|null|null|null|null\nhttps://www.duckduckgo.com|192.168.0.1|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine|ff:ee:ff:11:ff:11\n\n

Invalid CSV with different errors on each line:

\nhttps://www.google.com|null|2023-01-02 15:56:30|200|0.7898||null|null|null\nhttps://www.duckduckgo.com|null|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine|\n|https://www.duckduckgo.com|null|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine|null\nnull|192.168.1.1|2023-01-02 15:56:30|200|0.9898|null|null|Best search engine\nhttps://www.duckduckgo.com|192.168.1.1|2023-01-02 15:56:30|null|0.9898|null|null|Best search engine\nhttps://www.google.com|null|2023-01-02 15:56:30|200|0.7898|||\nhttps://www.google.com|null|2023-01-02 15:56:30|200|0.7898|\n\n
"},{"location":"PLUGINS_DEV/#data_source-app-db-query","title":"\"data_source\": \"app-db-query\"","text":"

If the data_source is set to app-db-query, the CMD setting needs to contain a SQL query rendering the columns as defined in the \"Column order and values\" section above. The order of columns is important.

This SQL query is executed on the app.db SQLite database file.

\ud83d\udd0eExample

SQL query example:

SQL SELECT dv.devName as Object_PrimaryID, cast(dv.devLastIP as VARCHAR(100)) || ':' || cast( SUBSTR(ns.Port ,0, INSTR(ns.Port , '/')) as VARCHAR(100)) as Object_SecondaryID, datetime() as DateTime, ns.Service as Watched_Value1, ns.State as Watched_Value2, 'null' as Watched_Value3, 'null' as Watched_Value4, ns.Extra as Extra, dv.devMac as ForeignKey FROM (SELECT * FROM Nmap_Scan) ns LEFT JOIN (SELECT devName, devMac, devLastIP FROM Devices) dv ON ns.MAC = dv.devMac

Required CMD setting example with the above query (you can set \"type\": \"label\" if you want to make it uneditable in the UI):

json { \"function\": \"CMD\", \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"input\", \"elementOptions\" : [] ,\"transformers\": []}]}, \"default_value\":\"SELECT dv.devName as Object_PrimaryID, cast(dv.devLastIP as VARCHAR(100)) || ':' || cast( SUBSTR(ns.Port ,0, INSTR(ns.Port , '/')) as VARCHAR(100)) as Object_SecondaryID, datetime() as DateTime, ns.Service as Watched_Value1, ns.State as Watched_Value2, 'null' as Watched_Value3, 'null' as Watched_Value4, ns.Extra as Extra FROM (SELECT * FROM Nmap_Scan) ns LEFT JOIN (SELECT devName, devMac, devLastIP FROM Devices) dv ON ns.MAC = dv.devMac\", \"options\": [], \"localized\": [\"name\", \"description\"], \"name\" : [{ \"language_code\":\"en_us\", \"string\" : \"SQL to run\" }], \"description\": [{ \"language_code\":\"en_us\", \"string\" : \"This SQL query is used to populate the coresponding UI tables under the Plugins section.\" }] }

"},{"location":"PLUGINS_DEV/#data_source-template","title":"\"data_source\": \"template\"","text":"

In most cases, it is used to initialize settings. Check the newdev_template plugin for details.

"},{"location":"PLUGINS_DEV/#data_source-sqlite-db-query","title":"\"data_source\": \"sqlite-db-query\"","text":"

You can execute a SQL query on an external database connected to the current NetAlertX database via a temporary EXTERNAL_<unique prefix>. prefix.

For example for PIHOLE (\"unique_prefix\": \"PIHOLE\") it is EXTERNAL_PIHOLE.. The external SQLite database file has to be mapped in the container to the path specified in the DB_PATH setting:

\ud83d\udd0eExample

json ... { \"function\": \"DB_PATH\", \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"input\", \"elementOptions\" : [{\"readonly\": \"true\"}] ,\"transformers\": []}]}, \"default_value\":\"/etc/pihole/pihole-FTL.db\", \"options\": [], \"localized\": [\"name\", \"description\"], \"name\" : [{ \"language_code\":\"en_us\", \"string\" : \"DB Path\" }], \"description\": [{ \"language_code\":\"en_us\", \"string\" : \"Required setting for the <code>sqlite-db-query</code> plugin type. Is used to mount an external SQLite database and execute the SQL query stored in the <code>CMD</code> setting.\" }] } ...

The actual SQL query you want to execute is then stored as a CMD setting, similar to a Plugin of the app-db-query plugin type. The format has to adhere to the format outlined in the \"Column order and values\" section above.

\ud83d\udd0eExample

Notice the EXTERNAL_PIHOLE. prefix.

json { \"function\": \"CMD\", \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"input\", \"elementOptions\" : [] ,\"transformers\": []}]}, \"default_value\":\"SELECT hwaddr as Object_PrimaryID, cast('http://' || (SELECT ip FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1) as VARCHAR(100)) || ':' || cast( SUBSTR((SELECT name FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1), 0, INSTR((SELECT name FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1), '/')) as VARCHAR(100)) as Object_SecondaryID, datetime() as DateTime, macVendor as Watched_Value1, lastQuery as Watched_Value2, (SELECT name FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC, ip LIMIT 1) as Watched_Value3, 'null' as Watched_Value4, '' as Extra, hwaddr as ForeignKey FROM EXTERNAL_PIHOLE.network WHERE hwaddr NOT LIKE 'ip-%' AND hwaddr <> '00:00:00:00:00:00'; \", \"options\": [], \"localized\": [\"name\", \"description\"], \"name\" : [{ \"language_code\":\"en_us\", \"string\" : \"SQL to run\" }], \"description\": [{ \"language_code\":\"en_us\", \"string\" : \"This SQL query is used to populate the coresponding UI tables under the Plugins section. This particular one selects data from a mapped PiHole SQLite database and maps it to the corresponding Plugin columns.\" }] }

"},{"location":"PLUGINS_DEV/#filters","title":"\ud83d\udd73 Filters","text":"

Plugin entries can be filtered in the UI based on values entered into filter fields. The txtMacFilter textbox/field contains the MAC address of the currently viewed device, or simply a MAC address that's available in the mac query string (<url>?mac=aa:22:aa:22:aa:22).

| Property | Required | Description |
| --- | --- | --- |
| compare_column | yes | Plugin column name whose value is used for comparison (left side of the equation) |
| compare_operator | yes | JavaScript comparison operator |
| compare_field_id | yes | The id of an input text field whose value is used for comparison (right side of the equation) |
| compare_js_template | yes | JavaScript code used to convert the left and right side of the equation. {value} is replaced with input values. |
| compare_use_quotes | yes | If true, then the end result of the compare_js_template is wrapped in \" quotes. Use to compare strings. |

Filters are only applied if a filter is specified, and the txtMacFilter is not undefined, or empty (--).

\ud83d\udd0eExample:

json \"data_filters\": [ { \"compare_column\" : \"Object_PrimaryID\", \"compare_operator\" : \"==\", \"compare_field_id\": \"txtMacFilter\", \"compare_js_template\": \"'{value}'.toString()\", \"compare_use_quotes\": true } ],

  1. On the pluginsCore.php page there is an input field with the id txtMacFilter:

html <input class=\"form-control\" id=\"txtMacFilter\" type=\"text\" value=\"--\">

  2. This input field is initialized via the &mac= query string.

  3. The app then proceeds to use this MAC value from this field and compares it to the value of the Object_PrimaryID database field. The compare_operator is ==.

  4. Both values, from the database field Object_PrimaryID and from the txtMacFilter, are wrapped and evaluated with the compare_js_template, that is '{value}'.toString().

  5. compare_use_quotes is set to true, so '{value}'.toString() is wrapped into \" quotes.

  6. This results in, for example, this code:

javascript // left part of the expression coming from compare_column and right from the input field // notice the added quotes (\") around the left and right part of the expression \"eval('ac:82:ac:82:ac:82\".toString()')\" == \"eval('ac:82:ac:82:ac:82\".toString()')\"

"},{"location":"PLUGINS_DEV/#mapping-the-plugin-results-into-a-database-table","title":"\ud83d\uddfa Mapping the plugin results into a database table","text":"

Plugin results are always inserted into the standard Plugin_Objects database table. Optionally, NetAlertX can take the results of the plugin execution and insert these results into an additional database table. This is enabled with the property \"mapped_to_table\" in the config.json file. The mapping of the columns is defined in the database_column_definitions array.

Note

If results are mapped to the CurrentScan table, the data is then included in the regular scan loop, so, for example, notifications for devices are sent out.

\ud83d\udd0d Example:

For example, this approach is used to implement the DHCPLSS plugin. The script parses all supplied \"dhcp.leases\" files, gets the results in the generic table format outlined in the \"Column order and values\" section above, takes individual values, and inserts them into the CurrentScan database table in the NetAlertX database. All this is achieved by:

  1. Specifying the database table into which the results are inserted by defining \"mapped_to_table\": \"CurrentScan\" in the root of the config.json file as shown below:

json { \"code_name\": \"dhcp_leases\", \"unique_prefix\": \"DHCPLSS\", ... \"data_source\": \"script\", \"localized\": [\"display_name\", \"description\", \"icon\"], \"mapped_to_table\": \"CurrentScan\", ... } 2. Defining the target column with the mapped_to_column property for individual columns in the database_column_definitions array of the config.json file. For example in the DHCPLSS plugin, I needed to map the value of the Object_PrimaryID column returned by the plugin, to the cur_MAC column in the NetAlertX database table CurrentScan. Notice the \"mapped_to_column\": \"cur_MAC\" key-value pair in the sample below.

json { \"column\": \"Object_PrimaryID\", \"mapped_to_column\": \"cur_MAC\", \"css_classes\": \"col-sm-2\", \"show\": true, \"type\": \"device_mac\", \"default_value\":\"\", \"options\": [], \"localized\": [\"name\"], \"name\":[{ \"language_code\":\"en_us\", \"string\" : \"MAC address\" }] }

  3. That's it. The app takes care of the rest. It loops through the objects discovered by the plugin, takes the results line by line, and inserts them into the database table specified in \"mapped_to_table\". The columns are translated from the generic plugin columns to the target table columns via the \"mapped_to_column\" property in the column definitions.

Note

You can create a column mapping with a default value via the mapped_to_column_data property. This means that the value of the given column will always be this value. That also means that the \"column\": \"NameDoesntMatter\" is not important as there is no database source column.

\ud83d\udd0d Example:

json { \"column\": \"NameDoesntMatter\", \"mapped_to_column\": \"cur_ScanMethod\", \"mapped_to_column_data\": { \"value\": \"DHCPLSS\" }, \"css_classes\": \"col-sm-2\", \"show\": true, \"type\": \"device_mac\", \"default_value\":\"\", \"options\": [], \"localized\": [\"name\"], \"name\":[{ \"language_code\":\"en_us\", \"string\" : \"MAC address\" }] }

"},{"location":"PLUGINS_DEV/#params","title":"params","text":"

Important

An easier way to access settings in scripts is the get_setting_value method:

```python
from helper import get_setting_value

...
NTFY_TOPIC = get_setting_value('NTFY_TOPIC')
...
```

The params array in the config.json is used to enable the user to change the parameters of the executed script. For example, the user wants to monitor a specific URL.

\ud83d\udd0e Example: Passing user-defined settings to a command. Let's say, you want to have a script, that is called with a user-defined parameter called urls:

bash root@server# python3 /app/front/plugins/website_monitor/script.py urls=https://google.com,https://duck.com

{\n    \"params\" : [\n        {\n            \"name\"  : \"urls\",\n            \"type\"  : \"setting\",\n            \"value\" : \"WEBMON_urls_to_check\"\n        }]\n}\n
 {\n            \"function\": \"CMD\",\n            \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"input\", \"elementOptions\" : [] ,\"transformers\": []}]},\n            \"default_value\":\"python3 /app/front/plugins/website_monitor/script.py urls={urls}\",\n            \"options\": [],\n            \"localized\": [\"name\", \"description\"],\n            \"name\" : [{\n                \"language_code\":\"en_us\",\n                \"string\" : \"Command\"\n            }],\n            \"description\": [{\n                \"language_code\":\"en_us\",\n                \"string\" : \"Command to run\"\n            }]\n        }\n

During script execution, the app will take the command \"python3 /app/front/plugins/website_monitor/script.py urls={urls}\", take the {urls} wildcard and replace it with the value from the WEBMON_urls_to_check setting. This is because:

  1. The app checks the params entries
  2. It finds \"name\" : \"urls\"
  3. Checks the type of the urls params and finds \"type\" : \"setting\"
  4. Gets the setting name from \"value\" : \"WEBMON_urls_to_check\"
  5. IMPORTANT: in the config.json this setting is identified by \"function\":\"urls_to_check\", not \"function\":\"WEBMON_urls_to_check\"
  6. You can also use a global setting, or a setting from a different plugin
  7. The app gets the user defined value from the setting with the code name WEBMON_urls_to_check
  8. let's say the setting with the code name WEBMON_urls_to_check contains 2 values entered by the user:
  9. WEBMON_urls_to_check=['https://google.com','https://duck.com']
  10. The app takes the value from WEBMON_urls_to_check and replaces the {urls} wildcard in the setting where \"function\":\"CMD\", so you go from:
  11. python3 /app/front/plugins/website_monitor/script.py urls={urls}
  12. to
  13. python3 /app/front/plugins/website_monitor/script.py urls=https://google.com,https://duck.com
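Putting the steps above together, the substitution is roughly the following (an illustrative sketch, not the app's actual code):

```python
# Illustrative only: replace {name} wildcards in CMD with values resolved from settings.
def resolve_cmd(cmd_template, params, settings):
    for p in params:
        if p["type"] == "setting":
            value = settings.get(p["value"], "")
            if isinstance(value, list):          # list settings are joined with commas
                value = ",".join(value)
            cmd_template = cmd_template.replace("{" + p["name"] + "}", str(value))
    return cmd_template

settings = {"WEBMON_urls_to_check": ["https://google.com", "https://duck.com"]}
params = [{"name": "urls", "type": "setting", "value": "WEBMON_urls_to_check"}]

print(resolve_cmd("python3 /app/front/plugins/website_monitor/script.py urls={urls}", params, settings))
# -> python3 /app/front/plugins/website_monitor/script.py urls=https://google.com,https://duck.com
```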

Below are some additional general notes when defining params:

\ud83d\udd0eExample:

json { \"params\" : [{ \"name\" : \"ips\", \"type\" : \"sql\", \"value\" : \"SELECT devLastIP from DEVICES\", \"timeoutMultiplier\" : true }, { \"name\" : \"macs\", \"type\" : \"sql\", \"value\" : \"SELECT devMac from DEVICES\" }, { \"name\" : \"timeout\", \"type\" : \"setting\", \"value\" : \"NMAP_RUN_TIMEOUT\" }, { \"name\" : \"args\", \"type\" : \"setting\", \"value\" : \"NMAP_ARGS\", \"base64\" : true }] }

"},{"location":"PLUGINS_DEV/#setting-object-structure","title":"\u2699 Setting object structure","text":"

Note

The settings flow and when Plugin specific settings are applied is described under the Settings system.

Required attributes are:

Property Description \"function\" Specifies the function the setting drives or a simple unique code name. See Supported settings function values for options. \"type\" Specifies the form control used for the setting displayed in the Settings page and what values are accepted. Supported options include: - {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"input\", \"elementOptions\" : [{\"type\":\"password\"}] ,\"transformers\": [\"sha256\"]}]} \"localized\" A list of properties on the current JSON level that need to be localized. \"name\" Displayed on the Settings page. An array of localized strings. See Localized strings below. \"description\" Displayed on the Settings page. An array of localized strings. See Localized strings below. (optional) \"events\" Specifies whether to generate an execution button next to the input field of the setting. Supported values: - \"test\" - For notification plugins testing - \"run\" - Regular plugins testing (optional) \"override_value\" Used to determine a user-defined override for the setting. Useful for template-based plugins, where you can choose to leave the current value or override it with the value defined in the setting. (Work in progress) (optional) \"events\" Used to trigger the plugin. Usually used on the RUN setting. Not fully tested in all scenarios. Will show a play button next to the setting. After clicking, an event is generated for the backend in the Parameters database table to process the front-end event on the next run."},{"location":"PLUGINS_DEV/#ui-component-types-documentation","title":"UI Component Types Documentation","text":"

This section outlines the structure and types of UI components, primarily used to build HTML forms or interactive elements dynamically. Each UI component has a \"type\" which defines its structure, behavior, and rendering options.

"},{"location":"PLUGINS_DEV/#ui-component-json-structure","title":"UI Component JSON Structure","text":"

The UI component is defined as a JSON object containing a list of elements. Each element specifies how it should behave, with properties like elementType, elementOptions, and any associated transformers to modify the data. The example below demonstrates how a component with two elements (span and select) is structured:

{\n      \"function\": \"devIcon\",\n      \"type\": {\n        \"dataType\": \"string\",\n        \"elements\": [\n          {\n            \"elementType\": \"span\",\n            \"elementOptions\": [\n              { \"cssClasses\": \"input-group-addon iconPreview\" },\n              { \"getStringKey\": \"Gen_SelectToPreview\" },\n              { \"customId\": \"NEWDEV_devIcon_preview\" }\n            ],\n            \"transformers\": []\n          },\n          {\n            \"elementType\": \"select\",\n            \"elementHasInputValue\": 1,\n            \"elementOptions\": [\n              { \"cssClasses\": \"col-xs-12\" },\n              {\n                \"onChange\": \"updateIconPreview(this)\"\n              },\n              { \"customParams\": \"NEWDEV_devIcon,NEWDEV_devIcon_preview\" }\n            ],\n            \"transformers\": []\n          }          \n        ]\n      }\n}\n\n
"},{"location":"PLUGINS_DEV/#rendering-logic","title":"Rendering Logic","text":"

The code snippet provided demonstrates how the elements are iterated over to generate their corresponding HTML. Depending on the elementType, different HTML tags (like <select>, <input>, <textarea>, <button>, etc.) are created with the respective attributes such as onChange, my-data-type, and class based on the provided elementOptions. Events can also be attached to elements like buttons or select inputs.

"},{"location":"PLUGINS_DEV/#key-element-types","title":"Key Element Types","text":"

Each element may also have associated events (e.g., running a scan or triggering a notification) defined under Events.

"},{"location":"PLUGINS_DEV/#supported-settings-function-values","title":"Supported settings function values","text":"

You can use any custom name for \"function\": \"my_custom_name\"; however, the ones listed below have specific functionality attached to them.

Setting Description RUN (required) Specifies when the service is executed. Supported Options: - \"disabled\" - do not run - \"once\" - run on app start or on settings saved - \"schedule\" - if included, then a RUN_SCHD setting needs to be specified to determine the schedule - \"always_after_scan\" - run always after a scan is finished - \"before_name_updates\" - run before device names are updated (for name discovery plugins) - \"on_new_device\" - run when a new device is detected - \"before_config_save\" - run before the config is marked as saved. Useful if your plugin needs to modify the app.conf file. RUN_SCHD (required if you include \"schedule\" in the above RUN function) Cron-like scheduling is used if the RUN setting is set to schedule. CMD (required) Specifies the command that should be executed. API_SQL (not implemented) Generates a table_ + code_name + .json file as per API docs. RUN_TIMEOUT (optional) Specifies the maximum execution time of the script. If not specified, a default value of 10 seconds is used to prevent hanging. WATCH (optional) Specifies which database columns are watched for changes for this particular plugin. If not specified, no notifications are sent. REPORT_ON (optional) Specifies when to send a notification. Supported options are: - new means a new unique (unique combination of PrimaryId and SecondaryId) object was discovered. - watched-changed - means that selected Watched_ValueN columns changed - watched-not-changed - reports even on events where selected Watched_ValueN did not change - missing-in-last-scan - if the object is missing compared to previous scans

\ud83d\udd0e Example:

json { \"function\": \"RUN\", \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"select\", \"elementOptions\" : [] ,\"transformers\": []}]}, \"default_value\":\"disabled\", \"options\": [\"disabled\", \"once\", \"schedule\", \"always_after_scan\", \"on_new_device\"], \"localized\": [\"name\", \"description\"], \"name\" :[{ \"language_code\":\"en_us\", \"string\" : \"When to run\" }], \"description\": [{ \"language_code\":\"en_us\", \"string\" : \"Enable a regular scan of your services. If you select <code>schedule</code> the scheduling settings from below are applied. If you select <code>once</code> the scan is run only once on start of the application (container) for the time specified in <a href=\\\"#WEBMON_RUN_TIMEOUT\\\"><code>WEBMON_RUN_TIMEOUT</code> setting</a>.\" }] }

"},{"location":"PLUGINS_DEV/#localized-strings","title":"\ud83c\udf0dLocalized strings","text":"

\ud83d\udd0e Example:

```json

{\n    \"language_code\":\"en_us\",\n    \"string\" : \"When to run\"\n}\n

```

"},{"location":"PLUGINS_DEV/#ui-settings-in-database_column_definitions","title":"UI settings in database_column_definitions","text":"

The UI will adjust how columns are displayed in the UI based on the resolvers definition of the database_column_definitions object. These are the supported form controls and related functionality:

Supported Types Description label Displays a column only. textarea_readonly Generates a read only text area and cleans up the text to display it somewhat formatted with new lines preserved. See below for information on threshold, replace. options Property Used in conjunction with types like threshold, replace, regex. options_params Property Used in conjunction with a \"options\": \"[{value}]\" template and text.select/list.select. Can specify SQL query (needs to return 2 columns SELECT devName as name, devMac as id) or Setting (not tested) to populate the dropdown. Check example below or have a look at the NEWDEV plugin config.json file. threshold The options array contains objects ordered from the lowest maximum to the highest. The corresponding hexColor is used for the value background color if it's less than the specified maximum but more than the previous one in the options array. replace The options array contains objects with an equals property, which is compared to the \"value.\" If the values are the same, the string in replacement is displayed in the UI instead of the actual \"value\". regex Applies a regex to the value. The options array contains objects with an type (must be set to regex) and param (must contain the regex itself) property. Type Definitions device_mac The value is considered to be a MAC address, and a link pointing to the device with the given MAC address is generated. device_ip The value is considered to be an IP address. A link pointing to the device with the given IP is generated. The IP is checked against the last detected IP address and translated into a MAC address, which is then used for the link itself. device_name_mac The value is considered to be a MAC address, and a link pointing to the device with the given MAC is generated. The link label is resolved as the target device name. url The value is considered to be a URL, so a link is generated. textbox_save Generates an editable and saveable text box that saves values in the database. Primarily intended for the UserData database column in the Plugins_Objects table. url_http_https Generates two links with the https and http prefix as lock icons. eval Evaluates as JavaScript. Use the variable value to use the given column value as input (e.g. '<b>${value}<b>' (replace ' with ` in your code) )

Note

Supports chaining. You can chain multiple resolvers with a . (dot), for example regex.url_http_https. This will apply the regex resolver and then the url_http_https resolver.

        \"function\": \"devType\",\n        \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\" : \"select\", \"elementOptions\" : [] ,\"transformers\": []}]},\n        \"maxLength\": 30,\n        \"default_value\": \"\",\n        \"options\": [\"{value}\"],\n        \"options_params\" : [\n            {\n                \"name\"  : \"value\",\n                \"type\"  : \"sql\",\n                \"value\" : \"SELECT '' as id, '' as name UNION SELECT devType as id, devType as name FROM (SELECT devType FROM Devices UNION SELECT 'Smartphone' UNION SELECT 'Tablet' UNION SELECT 'Laptop' UNION SELECT 'PC' UNION SELECT 'Printer' UNION SELECT 'Server' UNION SELECT 'NAS' UNION SELECT 'Domotic' UNION SELECT 'Game Console' UNION SELECT 'SmartTV' UNION SELECT 'Clock' UNION SELECT 'House Appliance' UNION SELECT 'Phone' UNION SELECT 'AP' UNION SELECT 'Gateway' UNION SELECT 'Firewall' UNION SELECT 'Switch' UNION SELECT 'WLAN' UNION SELECT 'Router' UNION SELECT 'Other') AS all_devices ORDER BY id;\"\n            },\n            {\n                \"name\"  : \"uilang\",\n                \"type\"  : \"setting\",\n                \"value\" : \"UI_LANG\"\n            }\n        ]\n
{\n            \"column\": \"Watched_Value1\",\n            \"css_classes\": \"col-sm-2\",\n            \"show\": true,\n            \"type\": \"threshold\",            \n            \"default_value\":\"\",\n            \"options\": [\n                {\n                    \"maximum\": 199,\n                    \"hexColor\": \"#792D86\"                \n                },\n                {\n                    \"maximum\": 299,\n                    \"hexColor\": \"#5B862D\"\n                },\n                {\n                    \"maximum\": 399,\n                    \"hexColor\": \"#7D862D\"\n                },\n                {\n                    \"maximum\": 499,\n                    \"hexColor\": \"#BF6440\"\n                },\n                {\n                    \"maximum\": 599,\n                    \"hexColor\": \"#D33115\"\n                }\n            ],\n            \"localized\": [\"name\"],\n            \"name\":[{\n                \"language_code\":\"en_us\",\n                \"string\" : \"Status code\"\n                }]\n        },        \n        {\n            \"column\": \"Status\",\n            \"show\": true,\n            \"type\": \"replace\",            \n            \"default_value\":\"\",\n            \"options\": [\n                {\n                    \"equals\": \"watched-not-changed\",\n                    \"replacement\": \"<i class='fa-solid fa-square-check'></i>\"\n                },\n                {\n                    \"equals\": \"watched-changed\",\n                    \"replacement\": \"<i class='fa-solid fa-triangle-exclamation'></i>\"\n                },\n                {\n                    \"equals\": \"new\",\n                    \"replacement\": \"<i class='fa-solid fa-circle-plus'></i>\"\n                }\n            ],\n            \"localized\": [\"name\"],\n            \"name\":[{\n                \"language_code\":\"en_us\",\n                \"string\" : \"Status\"\n                }]\n        },\n        {\n            \"column\": \"Watched_Value3\",\n            \"css_classes\": \"col-sm-1\",\n            \"show\": true,\n            \"type\": \"regex.url_http_https\",            \n            \"default_value\":\"\",\n            \"options\": [\n                {\n                    \"type\": \"regex\",\n                    \"param\": \"([\\\\d.:]+)\"\n                }          \n            ],\n            \"localized\": [\"name\"],\n            \"name\":[{\n                \"language_code\":\"en_us\",\n                \"string\" : \"HTTP/s links\"\n                },\n                {\n                \"language_code\":\"es_es\",\n                \"string\" : \"N/A\"\n                }]\n        }\n
"},{"location":"RANDOM_MAC/","title":"Privacy & Random MAC's","text":"

Some operating systems randomize MAC addresses to improve privacy.

This functionality hides the real MAC of the device and assigns a random MAC whenever the device connects to a Wi-Fi network.

This behavior is especially useful when connecting to unknown Wi-Fi networks, but it provides no benefit when connecting to your own or otherwise known networks.

I recommend disabling this on-device functionality when connecting your devices to your own Wi-Fi networks. This way, NetAlertX can identify the device and will not flag it as a new device every time iOS or Android randomizes the MAC.

Random MACs are recognized by the characters "2", "6", "A", or "E" as the 2nd character of the MAC address. You can exclude specific prefixes from being detected as random MAC addresses via the UI_NOT_RANDOM_MAC setting.
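As a rough illustration of this check, here is a minimal Python sketch (not the app's actual implementation; the prefix list stands in for values you might exclude via UI_NOT_RANDOM_MAC):

def is_random_mac(mac: str, not_random_prefixes=None) -> bool:\n    # Treat a MAC as randomized if its 2nd hex character is 2, 6, A or E\n    not_random_prefixes = not_random_prefixes or []  # e.g. prefixes excluded via UI_NOT_RANDOM_MAC\n    mac = mac.upper()\n    if any(mac.startswith(p.upper()) for p in not_random_prefixes):\n        return False\n    return len(mac) > 1 and mac[1] in ('2', '6', 'A', 'E')\n\nprint(is_random_mac('DA:16:9C:12:34:56'))  # True  - 2nd character is 'A'\nprint(is_random_mac('00:16:9C:12:34:56'))  # False - 2nd character is '0'\n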

"},{"location":"RANDOM_MAC/#windows","title":"Windows","text":""},{"location":"RANDOM_MAC/#ios","title":"IOS","text":""},{"location":"RANDOM_MAC/#android","title":"Android","text":""},{"location":"REMOTE_NETWORKS/","title":"Scanning Remote or Inaccessible Networks","text":"

By design, local network scanners such as arp-scan use ARP (Address Resolution Protocol) to map IP addresses to MAC addresses on the local network. Since ARP operates at Layer 2 (Data Link Layer), it typically works only within a single broadcast domain, usually limited to a single router or network segment.

Note

Ping and ARPSCAN use different protocols, so even if you can ping a device it doesn't mean ARPSCAN can detect it.

To scan multiple locally accessible network segments, add them as subnets according to the subnets documentation. If ARPSCAN is not suitable for your setup, read on.

"},{"location":"REMOTE_NETWORKS/#complex-use-cases","title":"Complex Use Cases","text":"

The following network setups might make some devices undetectable with ARPSCAN. Check the relevant setup below to understand the cause and find potential workarounds so these devices can still be reported on.

"},{"location":"REMOTE_NETWORKS/#wi-fi-extenders","title":"Wi-Fi Extenders","text":"

Wi-Fi extenders typically create a separate network or subnet, which can prevent network scanning tools like arp-scan from detecting devices behind the extender.

Possible workaround: Scan the specific subnet that the extender uses, if it is separate from the main network.

"},{"location":"REMOTE_NETWORKS/#vpns","title":"VPNs","text":"

ARP operates at Layer 2 (Data Link Layer) and works only within a local area network (LAN). VPNs, which operate at Layer 3 (Network Layer), route traffic between networks, preventing ARP requests from discovering devices outside the local network.

VPNs use virtual interfaces (e.g., tun0, tap0) to encapsulate traffic, bypassing ARP-based discovery. Additionally, many VPNs use NAT, which masks individual devices behind a shared IP address.

Possible workaround: Configure the VPN to bridge networks instead of routing to enable ARP, though this depends on the VPN setup and security requirements.

"},{"location":"REMOTE_NETWORKS/#other-workarounds","title":"Other Workarounds","text":"

The following workarounds should work for most complex network setups.

"},{"location":"REMOTE_NETWORKS/#supplementing-plugins","title":"Supplementing Plugins","text":"

You can use supplementary plugins that employ alternate methods. Protocols used by the SNMPDSC or DHCPLSS plugins are widely supported on different routers and can be effective as workarounds. Check the plugins list to find a plugin that works with your router and network setup.

"},{"location":"REMOTE_NETWORKS/#multiple-netalertx-instances","title":"Multiple NetAlertX Instances","text":"

If you have servers in different networks, you can set up separate NetAlertX instances on those subnets and synchronize the results into one instance using the SYNC plugin.

"},{"location":"REMOTE_NETWORKS/#manual-entry","title":"Manual Entry","text":"

If you don't need to discover new devices and only need to report on their status (online, offline, down), you can manually enter devices and check their status using the ICMP plugin, which uses the ping command internally.

For more information on how to add devices manually (or dummy devices), refer to the Device Management documentation.

To create truly dummy devices, you can use the loopback address 127.0.0.1 (or 0.0.0.0) so they appear online.

"},{"location":"REMOTE_NETWORKS/#nmap-and-fake-mac-addresses","title":"NMAP and Fake MAC Addresses","text":"

Scanning remote networks with NMAP is possible (via the NMAPDEV plugin), but since it cannot retrieve the MAC address, you need to enable the NMAPDEV_FAKE_MAC setting. This will generate a fake MAC address based on the IP address, allowing you to track devices. However, this can lead to inconsistencies, especially if the IP address changes or a previously logged device is rediscovered. If this setting is disabled, only the IP address will be discovered, and devices with missing MAC addresses will be skipped.

Check the NMAPDEV plugin documentation for details.
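To illustrate the general idea of deriving a stable fake MAC from an IP address, here is a minimal Python sketch (this is only an illustration, not necessarily the exact algorithm the NMAPDEV plugin uses):

import hashlib\n\ndef fake_mac_from_ip(ip: str) -> str:\n    # Hash the IP and use the first 6 bytes as a deterministic MAC address\n    digest = hashlib.md5(ip.encode()).hexdigest()[:12]\n    octets = [digest[i:i + 2] for i in range(0, 12, 2)]\n    # Mark it as locally administered and unicast so it cannot clash with real vendor MACs\n    octets[0] = format((int(octets[0], 16) | 0x02) & 0xFE, '02x')\n    return ':'.join(octets).upper()\n\nprint(fake_mac_from_ip('192.168.1.58'))  # the same IP always yields the same MAC\n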

"},{"location":"REVERSE_DNS/","title":"Reverse DNS","text":""},{"location":"REVERSE_DNS/#setting-up-better-name-discovery-with-reverse-dns","title":"Setting up better name discovery with Reverse DNS","text":"

If you are running a DNS server, such as AdGuard, set up Private reverse DNS servers for better name resolution on your network. This allows NetAlertX to execute dig and nslookup commands to automatically resolve device names based on their IP addresses.

Tip

Before proceeding, ensure that name resolution plugins are enabled. You can customize how names are cleaned using the NEWDEV_NAME_CLEANUP_REGEX setting. To auto-update Fully Qualified Domain Names (FQDN), enable the REFRESH_FQDN setting.

Example 1: Reverse DNS disabled

jokob@Synology-NAS:/$ nslookup 192.168.1.58\n** server can't find 58.1.168.192.in-addr.arpa: NXDOMAIN\n

Example 2: Reverse DNS enabled

jokob@Synology-NAS:/$ nslookup 192.168.1.58\n58.1.168.192.in-addr.arpa  name = jokob-NUC.localdomain.\n
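If dig is available in the container, the same reverse lookup can be tested with dig -x (illustrative output, assuming the record from Example 2):

jokob@Synology-NAS:/$ dig -x 192.168.1.58 +short\njokob-NUC.localdomain.\n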

"},{"location":"REVERSE_DNS/#enabling-reverse-dns-in-adguard","title":"Enabling reverse DNS in AdGuard","text":"
  1. Navigate to Settings -> DNS Settings
  2. Locate Private reverse DNS servers
  3. Enter your router IP address, such as 192.168.1.1
  4. Make sure you have Use private reverse DNS resolvers ticked.
  5. Click Apply to save your settings.
"},{"location":"REVERSE_DNS/#specifying-the-dns-in-the-container","title":"Specifying the DNS in the container","text":"

You can specify the DNS server in your docker-compose file to improve name resolution on your network.

services:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/jokob-sk/netalertx:latest\"\n    restart: unless-stopped\n    volumes:\n      -  /home/netalertx/config:/app/config\n      -  /home/netalertx/db:/app/db\n      -  /home/netalertx/log:/app/log\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n    network_mode: host\n    dns:           # specifying the DNS servers used for the container\n      - 10.8.0.1\n      - 10.8.0.17\n
"},{"location":"REVERSE_DNS/#using-a-custom-resolvconf-file","title":"Using a custom resolv.conf file","text":"

You can configure a custom /etc/resolv.conf file in docker-compose.yml and set the nameserver to your LAN DNS server (e.g.: Pi-Hole). See the relevant resolv.conf man entry for details.

"},{"location":"REVERSE_DNS/#docker-composeyml","title":"docker-compose.yml:","text":"
version: \"3\"\nservices:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/jokob-sk/netalertx:latest\"\n    restart: unless-stopped\n    volumes:\n      - ./config/app.conf:/app/config/app.conf\n      - ./db:/app/db\n      - ./log:/app/log\n      - ./config/resolv.conf:/etc/resolv.conf                          # Mapping the /resolv.conf file for better name resolution\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n    ports:\n      - \"20211:20211\"\n    network_mode: host\n
"},{"location":"REVERSE_DNS/#configresolvconf","title":"./config/resolv.conf:","text":"

The most important entry below is nameserver (you can add multiple):

nameserver 192.168.178.11\noptions edns0 trust-ad\nsearch example.com\n
"},{"location":"REVERSE_PROXY/","title":"Reverse Proxy Configuration","text":"

Submitted by the amazing cvc90 \ud83d\ude4f

Note

There are 2 NGINX files for NetAlertX, one for the bare-metal Debian install (netalertx.debian.conf), and one for the docker container (netalertx.template.conf). Both can be found in the install folder. Map, or use, the one appropriate for your setup.

"},{"location":"REVERSE_PROXY/#nginx-http-configuration-direct-path","title":"NGINX HTTP Configuration (Direct Path)","text":"
  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server { \n     listen 80; \n     server_name netalertx; \n     location / {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n          proxy_redirect http://localhost:20211/ /;\n     }\n    }\n
  3. Enable the site by symlinking it into sites-enabled and reloading NGINX:

ln -s /etc/nginx/sites-available/netalertx /etc/nginx/sites-enabled/netalertx

nginx -s reload or systemctl restart nginx

  4. Once NGINX restarts, you should be able to access the proxy website at http://netalertx/

"},{"location":"REVERSE_PROXY/#nginx-http-configuration-sub-path","title":"NGINX HTTP Configuration (Sub Path)","text":"
  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server { \n     listen 80; \n     server_name netalertx; \n     location ^~ /netalertx/ {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n          proxy_redirect ~^/(.*)$ /netalertx/$1;\n          rewrite ^/netalertx/?(.*)$ /$1 break;\n     }\n    }\n
  3. Enable the site by symlinking it into sites-enabled and reloading NGINX:

ln -s /etc/nginx/sites-available/netalertx /etc/nginx/sites-enabled/netalertx

nginx -s reload or systemctl restart nginx

  4. Once NGINX restarts, you should be able to access the proxy website at http://netalertx/netalertx/

"},{"location":"REVERSE_PROXY/#nginx-http-configuration-sub-path-with-module-ngx_http_sub_module","title":"NGINX HTTP Configuration (Sub Path) with module ngx_http_sub_module","text":"
  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server { \n     listen 80; \n     server_name netalertx; \n     location ^~ /netalertx/ {\n          proxy_set_header Host $host;\n          proxy_set_header Accept-Encoding \"\";\n          proxy_pass http://localhost:20211/;\n          proxy_redirect ~^/(.*)$ /netalertx/$1;\n          rewrite ^/netalertx/?(.*)$ /$1 break;\n          sub_filter_once off;\n          sub_filter_types *;\n          sub_filter 'href=\"/' 'href=\"/netalertx/';\n          sub_filter '(?>$host)/css' '/netalertx/css';\n          sub_filter '(?>$host)/js'  '/netalertx/js';\n          sub_filter '/img' '/netalertx/img';\n          sub_filter '/lib' '/netalertx/lib';\n          sub_filter '/php' '/netalertx/php';\n     }\n    }\n
  3. Enable the site by symlinking it into sites-enabled and reloading NGINX:

ln -s /etc/nginx/sites-available/netalertx /etc/nginx/sites-enabled/netalertx

nginx -s reload or systemctl restart nginx

  4. Once NGINX restarts, you should be able to access the proxy website at http://netalertx/netalertx/

NGINX HTTPS Configuration (Direct Path)

  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server { \n     listen 443 ssl; \n     server_name netalertx; \n     ssl_certificate /etc/ssl/certs/netalertx.pem;\n     ssl_certificate_key /etc/ssl/private/netalertx.key;\n     location / {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n          proxy_redirect http://localhost:20211/ /;\n     }\n    }\n
  3. Enable the site by symlinking it into sites-enabled and reloading NGINX:

ln -s /etc/nginx/sites-available/netalertx /etc/nginx/sites-enabled/netalertx

nginx -s reload or systemctl restart nginx

  4. Once NGINX restarts, you should be able to access the proxy website at https://netalertx/

NGINX HTTPS Configuration (Sub Path)

  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server { \n     listen 443 ssl; \n     server_name netalertx; \n     ssl_certificate /etc/ssl/certs/netalertx.pem;\n     ssl_certificate_key /etc/ssl/private/netalertx.key;\n     location ^~ /netalertx/ {\n          proxy_set_header Host $host;\n          proxy_pass http://localhost:20211/;\n          proxy_redirect ~^/(.*)$ /netalertx/$1;\n          rewrite ^/netalertx/?(.*)$ /$1 break;\n     }\n    }\n
  3. Enable the site by symlinking it into sites-enabled and reloading NGINX:

ln -s /etc/nginx/sites-available/netalertx /etc/nginx/sites-enabled/netalertx

nginx -s reload or systemctl restart nginx

  4. Once NGINX restarts, you should be able to access the proxy website at https://netalertx/netalertx/

"},{"location":"REVERSE_PROXY/#nginx-https-configuration-sub-path-with-module-ngx_http_sub_module","title":"NGINX HTTPS Configuration (Sub Path) with module ngx_http_sub_module","text":"
  1. On your NGINX server, create a new file called /etc/nginx/sites-available/netalertx

  2. In this file, paste the following code:

   server { \n     listen 443 ssl; \n     server_name netalertx; \n     ssl_certificate /etc/ssl/certs/netalertx.pem;\n     ssl_certificate_key /etc/ssl/private/netalertx.key;\n     location ^~ /netalertx/ {\n          proxy_set_header Host $host;\n          proxy_set_header Accept-Encoding \"\";\n          proxy_pass http://localhost:20211/;\n          proxy_redirect ~^/(.*)$ /netalertx/$1;\n          rewrite ^/netalertx/?(.*)$ /$1 break;\n          sub_filter_once off;\n          sub_filter_types *;\n          sub_filter 'href=\"/' 'href=\"/netalertx/';\n          sub_filter '(?>$host)/css' '/netalertx/css';\n          sub_filter '(?>$host)/js'  '/netalertx/js';\n          sub_filter '/img' '/netalertx/img';\n          sub_filter '/lib' '/netalertx/lib';\n          sub_filter '/php' '/netalertx/php';\n     }\n    }\n
  3. Enable the site by symlinking it into sites-enabled and reloading NGINX:

ln -s /etc/nginx/sites-available/netalertx /etc/nginx/sites-enabled/netalertx

nginx -s reload or systemctl restart nginx

  4. Once NGINX restarts, you should be able to access the proxy website at https://netalertx/netalertx/

"},{"location":"REVERSE_PROXY/#apache-http-configuration-direct-path","title":"Apache HTTP Configuration (Direct Path)","text":"
  1. On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

  2. In this file, paste the following code:

    <VirtualHost *:80>\n         ServerName netalertx\n         ProxyPreserveHost On\n         ProxyPass / http://localhost:20211/\n         ProxyPassReverse / http://localhost:20211/\n    </VirtualHost>\n
  3. Activate the new site by running the following commands:

a2ensite netalertx followed by service apache2 reload

  4. Once Apache reloads, you should be able to access the proxy website at http://netalertx/

"},{"location":"REVERSE_PROXY/#apache-http-configuration-sub-path","title":"Apache HTTP Configuration (Sub Path)","text":"
  1. On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

  2. In this file, paste the following code:

    <VirtualHost *:80>\n         ServerName netalertx\n         ProxyPreserveHost On\n         ProxyPass /netalertx/ http://localhost:20211/\n         ProxyPassReverse /netalertx/ http://localhost:20211/\n    </VirtualHost>\n
  3. Activate the new site by running the following commands:

a2ensite netalertx followed by service apache2 reload

  4. Once Apache reloads, you should be able to access the proxy website at http://netalertx/netalertx/

"},{"location":"REVERSE_PROXY/#apache-https-configuration-direct-path","title":"Apache HTTPS Configuration (Direct Path)","text":"
  1. On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

  2. In this file, paste the following code:

    <VirtualHost *:443>\n         ServerName netalertx\n         SSLEngine On\n         SSLCertificateFile /etc/ssl/certs/netalertx.pem\n         SSLCertificateKeyFile /etc/ssl/private/netalertx.key\n         ProxyPreserveHost On\n         ProxyPass / http://localhost:20211/\n         ProxyPassReverse / http://localhost:20211/\n    </VirtualHost>\n
  3. Activate the new site by running the following commands:

a2ensite netalertx followed by service apache2 reload

  4. Once Apache reloads, you should be able to access the proxy website at https://netalertx/

"},{"location":"REVERSE_PROXY/#apache-https-configuration-sub-path","title":"Apache HTTPS Configuration (Sub Path)","text":"
  1. On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.

  2. In this file, paste the following code:

    <VirtualHost *:443> \n        ServerName netalertx\n        SSLEngine On \n        SSLCertificateFile /etc/ssl/certs/netalertx.pem\n        SSLCertificateKeyFile /etc/ssl/private/netalertx.key\n        ProxyPreserveHost On\n        ProxyPass /netalertx/ http://localhost:20211/\n        ProxyPassReverse /netalertx/ http://localhost:20211/\n    </VirtualHost>\n
  3. Activate the new site by running the following commands:

a2ensite netalertx followed by service apache2 reload

  4. Once Apache reloads, you should be able to access the proxy website at https://netalertx/netalertx/
"},{"location":"REVERSE_PROXY/#reverse-proxy-example-by-using-linuxservers-swag-container","title":"Reverse proxy example by using LinuxServer's SWAG container.","text":"

Submitted by s33d1ing. \ud83d\ude4f

"},{"location":"REVERSE_PROXY/#linuxserverswag","title":"linuxserver/swag","text":"

In the SWAG container create /config/nginx/proxy-confs/netalertx.subfolder.conf with the following contents:

## Version 2023/02/05\n# make sure that your netalertx container is named netalertx\n# netalertx does not require a base url setting\n\n# Since NetAlertX uses a Host network, you may need to use the IP address of the system running NetAlertX for $upstream_app.\n\nlocation /netalertx {\n    return 301 $scheme://$host/netalertx/;\n}\n\nlocation ^~ /netalertx/ {\n    # enable the next two lines for http auth\n    #auth_basic \"Restricted\";\n    #auth_basic_user_file /config/nginx/.htpasswd;\n\n    # enable for ldap auth (requires ldap-server.conf in the server block)\n    #include /config/nginx/ldap-location.conf;\n\n    # enable for Authelia (requires authelia-server.conf in the server block)\n    #include /config/nginx/authelia-location.conf;\n\n    # enable for Authentik (requires authentik-server.conf in the server block)\n    #include /config/nginx/authentik-location.conf;\n\n    include /config/nginx/proxy.conf;\n    include /config/nginx/resolver.conf;\n\n    set $upstream_app netalertx;\n    set $upstream_port 20211;\n    set $upstream_proto http;\n\n    proxy_pass $upstream_proto://$upstream_app:$upstream_port;\n    proxy_set_header Accept-Encoding \"\";\n\n    proxy_redirect ~^/(.*)$ /netalertx/$1;\n    rewrite ^/netalertx/?(.*)$ /$1 break;\n\n    sub_filter_once off;\n    sub_filter_types *;\n\n    sub_filter 'href=\"/' 'href=\"/netalertx/';\n\n    sub_filter '(?>$host)/css' '/netalertx/css';\n    sub_filter '(?>$host)/js'  '/netalertx/js';\n\n    sub_filter '/img' '/netalertx/img';\n    sub_filter '/lib' '/netalertx/lib';\n    sub_filter '/php' '/netalertx/php';\n}\n
"},{"location":"REVERSE_PROXY/#traefik","title":"Traefik","text":"

Submitted by Isegrimm \ud83d\ude4f (based on this discussion)

Assuming the user already has a working Traefik setup, this is what's needed to make NetAlertX work at a URL like www.domain.com/netalertx/.

Note: Everything in these configs assumes 'www.domain.com' as your domain name and 'section31' as an arbitrary name for your certificate setup. You will have to substitute these with your own.

Also, I use the prefix 'netalertx'. If you want to use another prefix, change it in these files: dynamic.toml and default.

Content of my yaml-file (this is the generic Traefik config, which defines which ports to listen on, redirect http to https and sets up the certificate process). It also contains Authelia, which I use for authentication. This part contains nothing specific to NetAlertX.

version: '3.8'\n\nservices:\n  traefik:\n    image: traefik\n    container_name: traefik\n    command:\n      - \"--api=true\"\n      - \"--api.insecure=true\"\n      - \"--api.dashboard=true\"\n      - \"--entrypoints.web.address=:80\"\n      - \"--entrypoints.web.http.redirections.entryPoint.to=websecure\"\n      - \"--entrypoints.web.http.redirections.entryPoint.scheme=https\"\n      - \"--entrypoints.websecure.address=:443\"\n      - \"--providers.file.filename=/traefik-config/dynamic.toml\"\n      - \"--providers.file.watch=true\"\n      - \"--log.level=ERROR\"\n      - \"--certificatesresolvers.section31.acme.email=postmaster@domain.com\"\n      - \"--certificatesresolvers.section31.acme.storage=/traefik-config/acme.json\"\n      - \"--certificatesresolvers.section31.acme.httpchallenge=true\"\n      - \"--certificatesresolvers.section31.acme.httpchallenge.entrypoint=web\"\n    ports:\n      - \"80:80\"\n      - \"443:443\"\n      - \"8080:8080\"\n    volumes:\n      - \"/var/run/docker.sock:/var/run/docker.sock:ro\"\n      - /appl/docker/traefik/config:/traefik-config\n    depends_on:\n      - authelia\n    restart: unless-stopped\n  authelia:\n    container_name: authelia\n    image: authelia/authelia:latest\n    ports:\n      - \"9091:9091\"\n    volumes:\n      - /appl/docker/authelia:/config\n    restart: unless-stopped\n

Snippet of the dynamic.toml file (referenced in the yml-file above) that defines the config for NetAlertX. The following are self-defined keywords (everything else is Traefik keywords): netalertx-router, netalertx-service, auth, netalertx-stripprefix.

[http.routers]\n  [http.routers.netalertx-router]\n    entryPoints = [\"websecure\"]\n    rule = \"Host(`www.domain.com`) && PathPrefix(`/netalertx`)\"\n    service = \"netalertx-service\"\n    middlewares = [\"auth\", \"netalertx-stripprefix\"]\n    [http.routers.netalertx-router.tls]\n       certResolver = \"section31\"\n       [[http.routers.netalertx-router.tls.domains]]\n         main = \"www.domain.com\"\n\n[http.services]\n  [http.services.netalertx-service]\n    [[http.services.netalertx-service.loadBalancer.servers]]\n      url = \"http://internal-ip-address:20211/\"\n\n[http.middlewares]\n  [http.middlewares.auth.forwardAuth]\n    address = \"http://authelia:9091/api/verify?rd=https://www.domain.com/authelia/\"\n    trustForwardHeader = true\n    authResponseHeaders = [\"Remote-User\", \"Remote-Groups\", \"Remote-Name\", \"Remote-Email\"]\n  [http.middlewares.netalertx-stripprefix.stripprefix]\n    prefixes = [\"/netalertx\"]\n    forceSlash = false\n

To make NetAlertX work with this setup, I modified the default file at /etc/nginx/sites-available/default in the Docker container by copying it to my local filesystem, adding the changes as specified by cvc90, and mounting the new file into the Docker container, overwriting the original one. By mapping the file instead of changing it in-place, the changes persist if an updated Docker image is pulled. This is also a downside when the default file itself is updated, so I only use this as a temporary solution until the Docker image is updated with this change.

Default-file:

server {\n    listen 80 default_server;\n    root /var/www/html;\n    index index.php;\n    #rewrite /netalertx/(.*) / permanent;\n    add_header X-Forwarded-Prefix \"/netalertx\" always;\n    proxy_set_header X-Forwarded-Prefix \"/netalertx\";\n\n  location ~* \\.php$ {\n    fastcgi_pass unix:/run/php/php8.2-fpm.sock;\n    include         fastcgi_params;\n    fastcgi_param   SCRIPT_FILENAME    $document_root$fastcgi_script_name;\n    fastcgi_param   SCRIPT_NAME        $fastcgi_script_name;\n    fastcgi_connect_timeout 75;\n          fastcgi_send_timeout 600;\n          fastcgi_read_timeout 600;\n  }\n}\n

Mapping the updated file (on the local filesystem at /appl/docker/netalertx/default) into the docker container:

docker run -d --rm --network=host \\\n  --name=netalertx \\\n  -v /appl/docker/netalertx/config:/app/config \\\n  -v /appl/docker/netalertx/db:/app/db \\\n  -v /appl/docker/netalertx/default:/etc/nginx/sites-available/default \\\n  -e TZ=Europe/Amsterdam \\\n  -e PORT=20211 \\\n  ghcr.io/jokob-sk/netalertx:latest\n\n
"},{"location":"SECURITY/","title":"Security","text":""},{"location":"SECURITY/#responsibility-disclaimer","title":"\ud83e\udded Responsibility Disclaimer","text":"

NetAlertX provides powerful tools for network scanning, presence detection, and automation. However, it is up to you\u2014the deployer\u2014to ensure that your instance is properly secured.

This includes (but is not limited to):

  - Controlling who has access to the UI and API
  - Following network and container security best practices
  - Running NetAlertX only on networks where you have legal authorization
  - Keeping your deployment up to date with the latest patches

NetAlertX is not responsible for misuse, misconfiguration, or insecure deployments. Always test and secure your setup before exposing it to the outside world.

"},{"location":"SECURITY/#securing-your-netalertx-instance","title":"\ud83d\udd10 Securing Your NetAlertX Instance","text":"

NetAlertX is a powerful network scanning and automation framework. With that power comes responsibility. It is your responsibility to secure your deployment, especially if you're running it outside a trusted local environment.

"},{"location":"SECURITY/#tldr-key-security-recommendations","title":"\u26a0\ufe0f TL;DR \u2013 Key Security Recommendations","text":""},{"location":"SECURITY/#access-control-with-vpn-or-tailscale","title":"\ud83d\udd17 Access Control with VPN (or Tailscale)","text":"

NetAlertX is designed to be run on private LANs, not the open internet.

Recommended: Use a VPN to access NetAlertX from remote locations.

"},{"location":"SECURITY/#tailscale-easy-vpn-alternative","title":"\u2705 Tailscale (Easy VPN Alternative)","text":"

Tailscale sets up a private mesh network between your devices. It's fast to configure and ideal for NetAlertX. \ud83d\udc49 Get started with Tailscale

"},{"location":"SECURITY/#web-ui-password-protection","title":"\ud83d\udd11 Web UI Password Protection","text":"

By default, NetAlertX does not require login. Before exposing the UI in any way:

  1. Enable password protection: SETPWD_enable_password=true SETPWD_password=your_secure_password

  2. Passwords are stored as SHA256 hashes

  3. Default password (if not changed): 123456 \u2014 change it ASAP!

To disable authenticated login, set SETPWD_enable_password=false in app.conf.
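For reference, a SHA256 hash of a password can be computed like this in Python (a minimal illustration of the hashing scheme, not a statement about where NetAlertX expects the hashed value):

import hashlib\n\npassword = '123456'  # the default password mentioned above\nprint(hashlib.sha256(password.encode()).hexdigest())\n# 8d969eef6ecad3c29a3a629280e686cf0c3f5d5a86aff3ca12020c923adc6c92\n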

"},{"location":"SECURITY/#additional-security-measures","title":"\ud83d\udd25 Additional Security Measures","text":""},{"location":"SECURITY/#docker-hardening-tips","title":"\ud83e\uddf1 Docker Hardening Tips","text":""},{"location":"SECURITY/#responsible-disclosure","title":"\ud83d\udce3 Responsible Disclosure","text":"

If you discover a vulnerability or security concern, please report it privately to:

\ud83d\udce7 jokob@duck.com

We take security seriously and will work to patch confirmed issues promptly. Your help in responsible disclosure is appreciated!

By following these recommendations, you can ensure your NetAlertX deployment is both powerful and secure.

"},{"location":"SESSION_INFO/","title":"Sessions Section in Device View","text":"

The Sessions Section provides details about a device's connection history. This data is automatically detected and cannot be edited by the user.

"},{"location":"SESSION_INFO/#key-fields","title":"Key Fields","text":"
  1. Date and Time of First Connection
     - Description: Displays the first detected connection time for the device.
     - Editability: Uneditable (auto-detected).
     - Source: Automatically captured when the device is first added to the system.

  2. Date and Time of Last Connection
     - Description: Shows the most recent time the device was online.
     - Editability: Uneditable (auto-detected).
     - Source: Updated with every new connection event.

  3. Offline Devices with Missing or Conflicting Data
     - Description: Handles cases where a device is offline but has incomplete or conflicting session data (e.g., missing start times).
     - Handling: The system flags these cases for review and attempts to infer missing details.
"},{"location":"SESSION_INFO/#how-sessions-are-discovered-and-calculated","title":"How Sessions are Discovered and Calculated","text":""},{"location":"SESSION_INFO/#1-detecting-new-devices","title":"1. Detecting New Devices","text":"

When a device is first detected in the network, the system logs it in the events table:

INSERT INTO Events (eve_MAC, eve_IP, eve_DateTime, eve_EventType, eve_AdditionalInfo, eve_PendingAlertEmail) SELECT cur_MAC, cur_IP, '{startTime}', 'New Device', cur_Vendor, 1 FROM CurrentScan WHERE NOT EXISTS (SELECT 1 FROM Devices WHERE devMac = cur_MAC)

"},{"location":"SESSION_INFO/#2-logging-connection-sessions","title":"2. Logging Connection Sessions","text":"

When a new connection is detected, the system creates a session record:

INSERT INTO Sessions (ses_MAC, ses_IP, ses_EventTypeConnection, ses_DateTimeConnection, ses_EventTypeDisconnection, ses_DateTimeDisconnection, ses_StillConnected, ses_AdditionalInfo) SELECT cur_MAC, cur_IP, 'Connected', '{startTime}', NULL, NULL, 1, cur_Vendor FROM CurrentScan WHERE NOT EXISTS (SELECT 1 FROM Sessions WHERE ses_MAC = cur_MAC)

"},{"location":"SESSION_INFO/#3-handling-missing-or-conflicting-data","title":"3. Handling Missing or Conflicting Data","text":""},{"location":"SESSION_INFO/#4-updating-sessions","title":"4. Updating Sessions","text":"

The session information is then used to display the device presence under Monitoring -> Presence.

"},{"location":"SETTINGS_SYSTEM/","title":"Settings","text":""},{"location":"SETTINGS_SYSTEM/#setting-system","title":"\u2699 Setting system","text":"

This is an explanation of how settings are handled, intended for anyone thinking about writing their own plugin or contributing to the project.

If you are a user of the app, settings have a detailed description in the Settings section of the app. Open an issue if you'd like to clarify any of the settings.

"},{"location":"SETTINGS_SYSTEM/#data-storage","title":"\ud83d\udee2 Data storage","text":"

The source of truth for user-defined values is the app.conf file. Editing the file makes the App overwrite values in the Settings database table and in the table_settings.json file.

"},{"location":"SETTINGS_SYSTEM/#settings-database-table","title":"Settings database table","text":"

The Settings database table contains settings used at App run time. The table is recreated every time the App restarts. The settings are loaded from the source of truth, the app.conf file. A high-level overview of the database structure can be found in the database documentation.

"},{"location":"SETTINGS_SYSTEM/#table_settingsjson","title":"table_settings.json","text":"

This is the API endpoint that reflects the state of the Settings database table. Settings can be accessed with the:

The json file is also cached on the client-side local storage of the browser.

"},{"location":"SETTINGS_SYSTEM/#appconf","title":"app.conf","text":"

Note

This is the source of truth for settings. User-defined values in this file always override default values specified in the Plugin definition.

The App generates two app.conf entries for every setting (since version 23.8). One entry is the setting value; the second is the __metadata associated with the setting. This __metadata entry contains the full setting definition in JSON format. It is currently unused, but is intended to be used in the future to extend the Settings system.

"},{"location":"SETTINGS_SYSTEM/#plugin-settings","title":"Plugin settings","text":"

Note

This is the preferred way of adding settings going forward. I'll likely be migrating all app settings into plugin-based settings.

Plugin settings are loaded dynamically from the config.json of individual plugins. If a setting isn't defined in the app.conf file, it is initialized via the default_value property of a setting from the config.json file. Check the Plugins documentation, section \u2699 Setting object structure for details on the structure of the setting.

"},{"location":"SETTINGS_SYSTEM/#settings-process-flow","title":"Settings Process flow","text":"

The process flow is mostly managed by the initialise.py file.

The script is responsible for reading user-defined values from a configuration file (app.conf), initializing settings, and importing them into a database. It also handles plugins and their configurations.

Here's a high-level description of the code:

  1. Function Definitions:
     - ccd: This function is used to handle user-defined settings and configurations. It takes several parameters related to the setting's name, default value, input type, options, group, and more. It saves the settings and their metadata in different lists (conf.mySettingsSQLsafe and conf.mySettings).
     - importConfigs: This function is the main entry point of the script. It imports user settings from a configuration file, processes them, and saves them to the database.
     - read_config_file: This function reads the configuration file (app.conf) and returns a dictionary containing the key-value pairs from the file.

  2. Importing Configuration and Initializing Settings:
     - The importConfigs function starts by checking the modification time of the configuration file to determine if it needs to be re-imported. If the file has not been modified since the last import, the function skips the import process.
     - The function reads the configuration file using the read_config_file function, which returns a dictionary of settings.
     - The script then initializes various user-defined settings using the ccd function, based on the values read from the configuration file. These settings are categorized into groups such as \"General,\" \"Email,\" \"Webhooks,\" \"Apprise,\" and more.

  3. Plugin Handling:
     - The script loads and handles plugins dynamically. It retrieves plugin configurations and iterates through each plugin.
     - For each plugin, it extracts the prefix and settings related to that plugin and processes them similarly to other user-defined settings.
     - It also handles scheduling for plugins with specific RUN_SCHD settings.

  4. Saving Settings to the Database:
     - The script clears the existing settings in the database and inserts the updated settings into the database using SQL queries.

  5. Updating the API and Performing Cleanup:
     - After importing the configurations, the script updates the API to reflect the changes in the settings.
     - It saves the current timestamp to determine the next import time.
     - Finally, it logs the successful import of the new configuration.
"},{"location":"SMTP/","title":"\ud83d\udce7 SMTP server guides","text":"

The SMTP plugin supports any SMTP server. Here are some commonly used services to help speed up your configuration.

Note

If you are using a self-hosted SMTP server, SSH into the container and verify (e.g. via ping) that your server is reachable from within the NetAlertX container. See also how to SSH into the container if you are running it as a Home Assistant addon.

"},{"location":"SMTP/#gmail","title":"Gmail","text":"
  1. Create an app password by following the instructions from Google (you need to enable 2FA for this to work): https://support.google.com/accounts/answer/185833

  2. Specify the following settings:

    SMTP_RUN='on_notification'\n    SMTP_SKIP_TLS=True\n    SMTP_FORCE_SSL=True \n    SMTP_PORT=465\n    SMTP_SERVER='smtp.gmail.com'\n    SMTP_PASS='16-digit passcode from google'\n    SMTP_REPORT_TO='some_target_email@gmail.com'\n
"},{"location":"SMTP/#brevo","title":"Brevo","text":"

Brevo allows 300 free emails per day as of the time of writing.

  1. Create an account on Brevo: https://www.brevo.com/free-smtp-server/
  2. Click your name -> SMTP & API
  3. Click Generate a new SMTP key
  4. Save the details and fill in the NetAlertX settings as below.
SMTP_SERVER='smtp-relay.brevo.com'\nSMTP_PORT=587\nSMTP_SKIP_LOGIN=False\nSMTP_USER='user@email.com'\nSMTP_PASS='xsmtpsib-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxx'\nSMTP_SKIP_TLS=False\nSMTP_FORCE_SSL=False\nSMTP_REPORT_TO='some_target_email@gmail.com'\nSMTP_REPORT_FROM='NetAlertX <user@email.com>'\n
"},{"location":"SMTP/#gmx","title":"GMX","text":"
  1. Go to your GMX account https://account.gmx.com
  2. Under Security Options enable 2FA (Two-factor authentication)
  3. Under Security Options generate an Application-specific password
  4. Home -> Email Settings -> POP3 & IMAP -> Enable access to this account via POP3 and IMAP
  5. In NetAlertX specify these settings:
    SMTP_RUN='on_notification'\n    SMTP_SERVER='mail.gmx.com'\n    SMTP_PORT=465\n    SMTP_USER='gmx_email@gmx.com'\n    SMTP_PASS='<your Application-specific password>'\n    SMTP_SKIP_TLS=True\n    SMTP_FORCE_SSL=True\n    SMTP_SKIP_LOGIN=False\n    SMTP_REPORT_FROM='gmx_email@gmx.com' # this has to be the same email as in SMTP_USER\n    SMTP_REPORT_TO='some_target_email@gmail.com'\n
"},{"location":"SUBNETS/","title":"Subnets Configuration","text":"

You need to specify the network interface and the network mask. You can also configure multiple subnets and specify VLANs (see VLAN exceptions below).

ARPSCAN can scan multiple networks if the network allows it. To scan networks directly, the subnets must be accessible from the network where NetAlertX is running. This means NetAlertX needs to have access to the interface attached to that subnet.

Warning

If you don't see all expected devices run the following command in the NetAlertX container (replace the interface and ip mask): sudo arp-scan --interface=eth0 192.168.1.0/24

If this command returns no results, the network is not accessible due to your network or firewall restrictions (Wi-Fi Extenders, VPNs and inaccessible networks). If direct scans are not possible, check the remote networks documentation for workarounds.

"},{"location":"SUBNETS/#example-values","title":"Example Values","text":"

Note

Please use the UI to configure settings as it ensures the config file is in the correct format. Edit app.conf directly only when really necessary.

Tip

When adding more subnets, you may need to increase both the scan interval (ARPSCAN_RUN_SCHD) and the timeout (ARPSCAN_RUN_TIMEOUT)\u2014as well as similar settings for related plugins.

If the timeout is too short, you may see timeout errors in the log. To prevent the application from hanging due to unresponsive plugins, scans are canceled when they exceed the timeout limit.

To fix this:

  - Reduce the subnet size (e.g., change /16 to /24).
  - Increase the timeout (e.g., set ARPSCAN_RUN_TIMEOUT to 300 for a 5-minute timeout).
  - Extend the scan interval (e.g., set ARPSCAN_RUN_SCHD to */10 * * * * to scan every 10 minutes).
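As an illustration only (the values are examples and settings are best managed via the UI), the relevant app.conf entries could look like this:

SCAN_SUBNETS=['192.168.1.0/24 --interface=eth0','192.168.2.0/24 --interface=eth1']\nARPSCAN_RUN_SCHD='*/10 * * * *'\nARPSCAN_RUN_TIMEOUT=300\n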

For more troubleshooting tips, see Debugging Plugins.

"},{"location":"SUBNETS/#explanation","title":"Explanation","text":""},{"location":"SUBNETS/#network-mask","title":"Network Mask","text":"

Example value: 192.168.1.0/24

The arp-scan time itself depends on the number of IP addresses to check.

The number of IPs to check depends on the network mask you set in the SCAN_SUBNETS setting. For example, a /24 mask results in 256 IPs to check, whereas a /16 mask checks around 65,536 IPs. Each IP takes a couple of seconds, so an incorrect configuration could make arp-scan take hours instead of seconds.
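The address count follows directly from the prefix length: a /n network contains 2^(32-n) addresses, as this quick Python check illustrates:

for prefix in (24, 22, 16):\n    print(f'/{prefix}: {2 ** (32 - prefix)} addresses')\n# /24: 256 addresses, /22: 1024 addresses, /16: 65536 addresses\n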

Specify the network filter, which significantly speeds up the scan process. For example, the filter 192.168.1.0/24 covers IP ranges from 192.168.1.0 to 192.168.1.255.

"},{"location":"SUBNETS/#network-interface-adapter","title":"Network Interface (Adapter)","text":"

Example value: --interface=eth0

The adapter will probably be eth0 or eth1. (Check System Info > Network Hardware, or run iwconfig in the container to find your interface name(s)).

Tip

As an alternative to iwconfig, run ip -o link show | awk -F': ' '!/lo|vir|docker/ {print $2}' in your container to find your interface name(s) (e.g.: eth0, eth1):

Synology-NAS:/# ip -o link show | awk -F': ' '!/lo|vir|docker/ {print $2}'\nsit0@NONE\neth1\neth0\n

"},{"location":"SUBNETS/#vlans","title":"VLANs","text":"

Example value: --vlan=107
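For example, a single subnet entry combining the network mask, interface, and VLAN tag (illustrative values) could look like this:

192.168.107.0/24 --interface=eth0 --vlan=107\n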

"},{"location":"SUBNETS/#vlans-on-a-hyper-v-setup","title":"VLANs on a Hyper-V Setup","text":"

Community-sourced content by mscreations from this discussion.

Tested Setup: Bare Metal \u2192 Hyper-V on Win Server 2019 \u2192 Ubuntu 22.04 VM \u2192 Docker \u2192 NetAlertX.

Approach 1 (may cause issues): Configure multiple network adapters in Hyper-V with distinct VLANs connected to each one using Hyper-V's network setup. However, this action can potentially lead to the Docker host's inability to handle network traffic correctly. This might interfere with other applications such as Authentik.

Approach 2 (working example):

Network connections to switches are configured as trunk and allow all VLANs access to the server.

By default, Hyper-V only allows untagged packets through to the VM interface, blocking VLAN-tagged packets. To fix this, follow these steps:

  1. Run the following command in PowerShell on the Hyper-V machine:

Set-VMNetworkAdapterVlan -VMName <Docker VM Name> -Trunk -NativeVlanId 0 -AllowedVlanIdList \"<comma separated list of vlans>\"\n

  2. Within the VM, set up sub-interfaces for each VLAN to enable scanning. On Ubuntu 22.04, Netplan can be used. In /etc/netplan/00-installer-config.yaml, add VLAN definitions:

network:\n  ethernets:\n    eth0:\n      dhcp4: yes\n  vlans:\n    eth0.2:\n      id: 2\n      link: eth0\n      addresses: [ \"192.168.2.2/24\" ]\n      routes:\n        - to: 192.168.2.0/24\n          via: 192.168.1.1\n

  3. Run sudo netplan apply to activate the interfaces for scanning in NetAlertX.

In this case, use 192.168.2.0/24 --interface=eth0.2 in NetAlertX.

"},{"location":"SUBNETS/#vlan-support-exceptions","title":"VLAN Support & Exceptions","text":"

Please note that macvlan interfaces configured on the same computer have limited accessibility (the host typically cannot reach its own macvlan containers directly). This is general networking behavior, but feel free to clarify via a PR/issue.

"},{"location":"SYNOLOGY_GUIDE/","title":"Installation on a Synology NAS","text":"

There are different ways to install NetAlertX on a Synology, including SSH-ing into the machine and using the command line. For this guide, we will use the Project option in Container manager.

"},{"location":"SYNOLOGY_GUIDE/#create-the-folder-structure","title":"Create the folder structure","text":"

The folders you are creating below will contain the configuration and the database. Back them up regularly.

  1. Create a parent folder named netalertx
  2. Create a db sub-folder
  3. Create a config sub-folder
  4. Note down the folder locations
  5. Open Container manager -> Project and click Create.
  6. Fill in the details:
     - Project name: netalertx
     - Path: /app_storage/netalertx (will differ from yours)
  7. Paste in the following template:
version: \"3\"\nservices:\n  netalertx:\n    container_name: netalertx\n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\" \n    image: \"ghcr.io/jokob-sk/netalertx:latest\"      \n    network_mode: \"host\"        \n    restart: unless-stopped\n    volumes:\n      - local/path/config:/app/config\n      - local/path/db:/app/db      \n      # (optional) useful for debugging if you have issues setting up the container\n      - local/path/logs:/app/log\n    environment:\n      - TZ=Europe/Berlin      \n      - PORT=20211\n

  8. Replace the paths to your volumes and comment out unnecessary line(s). This is only an example; your paths will differ:

 volumes:\n      - /volume1/app_storage/netalertx/config:/app/config\n      - /volume1/app_storage/netalertx/db:/app/db      \n      # (optional) useful for debugging if you have issues setting up the container\n      # - local/path/logs:/app/log <- commented out with # \u26a0\n

  9. (optional) Change the port number from 20211 to an unused port if this port is already used.
  10. Build the project.
  11. Navigate to <Synology URL>:20211 (or your custom port).
  12. Read the Subnets and Plugins docs to complete your setup.
"},{"location":"UPDATES/","title":"Docker Update Strategies to upgrade NetAlertX","text":"

Warning

For versions prior to v25.6.7, upgrade to v25.5.24 first (docker pull ghcr.io/jokob-sk/netalertx:25.5.24), as later versions don't support a full upgrade. Alternatively, devices and settings can be migrated manually, e.g. via CSV import.

This guide outlines approaches for updating Docker containers, usually when upgrading to a newer version of NetAlertX. Each method offers different benefits depending on the situation. Here are the methods:

You can choose any approach that fits your workflow.

In the examples I assume that the container name is netalertx and the image name is netalertx as well.

Note

See also Backup strategies to be on the safe side.

"},{"location":"UPDATES/#1-manual-updates","title":"1. Manual Updates","text":"

Use this method when you need precise control over a single container or when dealing with a broken container that needs immediate attention.

Example commands:

To manually update the netalertx container, stop it, delete it, remove the old image, and start a fresh one with docker-compose.

# Stop the container\nsudo docker container stop netalertx\n\n# Remove the container\nsudo docker container rm netalertx\n\n# Remove the old image\nsudo docker image rm netalertx\n\n# Pull and start a new container\nsudo docker-compose up -d\n
"},{"location":"UPDATES/#alternative-force-pull-with-docker-compose","title":"Alternative: Force Pull with Docker Compose","text":"

You can also use --pull always to ensure Docker pulls the latest image before starting the container:

sudo docker-compose up --pull always -d\n
"},{"location":"UPDATES/#2-dockcheck-for-bulk-container-updates","title":"2. Dockcheck for Bulk Container Updates","text":"

Always check the Dockcheck docs if encountering issues with the guide below.

Dockcheck is a useful tool if you have multiple containers to update and some flexibility for handling potential issues that might arise during mass updates. Dockcheck allows you to inspect each container and decide when to update.

"},{"location":"UPDATES/#example-workflow-with-dockcheck","title":"Example Workflow with Dockcheck","text":"

You might use Dockcheck to:

Dockcheck can help streamline bulk updates, especially if you\u2019re managing multiple containers.

Below is a script I use to run an update of the Dockcheck script and start a check for new containers:

cd /path/to/Docker &&\nrm dockcheck.sh &&\nwget https://raw.githubusercontent.com/mag37/dockcheck/main/dockcheck.sh &&\nsudo chmod +x dockcheck.sh &&\nsudo ./dockcheck.sh\n
"},{"location":"UPDATES/#3-automated-updates-with-watchtower","title":"3. Automated Updates with Watchtower","text":"

Always check the watchtower docs if encountering issues with the guide below.

Watchtower monitors your Docker containers and automatically updates them when new images are available. This is ideal for ongoing updates without manual intervention.

"},{"location":"UPDATES/#setting-up-watchtower","title":"Setting Up Watchtower","text":""},{"location":"UPDATES/#1-pull-the-watchtower-image","title":"1. Pull the Watchtower Image:","text":"
docker pull containrrr/watchtower\n
"},{"location":"UPDATES/#2-run-watchtower-to-update-all-images","title":"2. Run Watchtower to update all images:","text":"
docker run -d \\\n  --name watchtower \\\n  -v /var/run/docker.sock:/var/run/docker.sock \\\n  containrrr/watchtower \\\n  --interval 300 # Check for updates every 5 minutes\n
"},{"location":"UPDATES/#3-run-watchtower-to-update-only-netalertx","title":"3. Run Watchtower to update only NetAlertX:","text":"

You can specify which containers to monitor by listing them. For example, to monitor netalertx only:

docker run -d \\\n  --name watchtower \\\n  -v /var/run/docker.sock:/var/run/docker.sock \\\n  containrrr/watchtower netalertx\n\n
"},{"location":"UPDATES/#4-portainer-controlled-image","title":"4. Portainer controlled image","text":"

This assumes you're using Portainer to manage Docker (or Docker Swarm) and want to pull the latest version of an image and redeploy the container.

Note

"},{"location":"UPDATES/#41-steps-to-update-an-image-in-portainer-standalone-docker","title":"4.1 Steps to Update an Image in Portainer (Standalone Docker)","text":"
  1. Login to Portainer.
  2. Go to \"Containers\" in the left sidebar.
  3. Find the container you want to update, click its name.
  4. Click \"Recreate\" (top right).
  5. Tick: Pull latest image (this ensures Portainer fetches the newest version from Docker Hub or your registry).
  6. Click \"Recreate\" again.
  7. Wait for the container to be stopped, removed, and recreated with the updated image.
"},{"location":"UPDATES/#42-for-docker-swarm-services","title":"4.2 For Docker Swarm Services","text":"

If you're using Docker Swarm (under \"Stacks\" or \"Services\"):

  1. Go to \"Stacks\".
  2. Select the stack managing the container.
  3. Click \"Editor\" (or \"Update the Stack\").
  4. Add a version tag or use :latest if your image tag is latest (not recommended for production).
  5. Click \"Update the Stack\". \u26a0 Portainer will not pull the new image unless the tag changes OR the stack is forced to recreate.
  6. If image tag hasn't changed, go to \"Services\", find the service, and click \"Force Update\".
"},{"location":"UPDATES/#summary","title":"Summary","text":"Method Type Pros Cons Manual CLI Full control, no dependencies Tedious for many containers Dockcheck CLI Script Great for batch updates Needs setup, semi-automated Watchtower Daemonized Fully automated updates Less control, version drift Portainer UI Easy via web interface No auto-updates

These approaches allow you to maintain flexibility in how you update Docker containers, depending on the urgency and scale of the update.

"},{"location":"VERSIONS/","title":"Versions","text":""},{"location":"VERSIONS/#am-i-running-the-latest-released-version","title":"Am I running the latest released version?","text":"

Since version 23.01.14 NetAlertX uses a simple timestamp-based version check to verify if a new version is available. You can check the current and past releases here, or have a look at what I'm currently working on.

If you are not on the latest version, the app will notify you that a new version is available in the following ways:

"},{"location":"VERSIONS/#via-email-on-a-notification-event","title":"\ud83d\udce7 Via email on a notification event","text":"

If any notification occurs and an email is sent, the email will contain a note that a new version is available. See the sample email below:

"},{"location":"VERSIONS/#in-the-ui","title":"\ud83c\udd95 In the UI","text":"

In the UI via a notification Icon and via a custom message in the Maintenance section.

For comparison, this is how the UI looks if you are on the latest stable image:

"},{"location":"VERSIONS/#implementation-details","title":"Implementation details","text":"

During the build, a /app/front/buildtimestamp.txt file is created. The app then periodically checks GitHub's REST-based JSON endpoint to see if a release with a newer timestamp is available (check the def isNewVersion: method for details).
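As a rough sketch of the idea (not the app's exact code; the GitHub repository path is an assumption, and buildtimestamp.txt is assumed to hold a Unix timestamp):

import json, urllib.request\nfrom datetime import datetime, timezone\n\nwith open('/app/front/buildtimestamp.txt') as f:\n    build_time = datetime.fromtimestamp(int(f.read().strip()), tz=timezone.utc)\n\nurl = 'https://api.github.com/repos/jokob-sk/NetAlertX/releases/latest'\nwith urllib.request.urlopen(url) as resp:\n    published = json.load(resp)['published_at']  # e.g. '2025-01-01T00:00:00Z'\n    released = datetime.fromisoformat(published.replace('Z', '+00:00'))\n\nprint('New version available' if released > build_time else 'Up to date')\n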

"},{"location":"WEBHOOK_N8N/","title":"Webhooks (n8n)","text":""},{"location":"WEBHOOK_N8N/#create-a-simple-n8n-workflow","title":"Create a simple n8n workflow","text":"

Note

You need to enable the WEBHOOK plugin first in order to follow this guide. See the Plugins guide for details.

n8n can be used for more advanced conditional notification use cases. For example, you may want to be notified only if two out of a specified list of devices are down, or you can use other plugins to process the notifications further. Below is a simple example of sending an email on a webhook.

"},{"location":"WEBHOOK_N8N/#specify-your-email-template","title":"Specify your email template","text":"

See the sample JSON if you want to see the JSON paths used in the email template below.

Events count: {{ $json[\"body\"][\"attachments\"][0][\"text\"][\"events\"].length }}\nNew devices count: {{ $json[\"body\"][\"attachments\"][0][\"text\"][\"new_devices\"].length }}\n
"},{"location":"WEBHOOK_N8N/#get-your-webhook-in-n8n","title":"Get your webhook in n8n","text":""},{"location":"WEBHOOK_N8N/#configure-netalertx-to-point-to-the-above-url","title":"Configure NetAlertX to point to the above URL","text":""},{"location":"WEBHOOK_SECRET/","title":"Webhook Secrets","text":"

Note

You need to enable the WEBHOOK plugin first in order to follow this guide. See the Plugins guide for details.

"},{"location":"WEBHOOK_SECRET/#how-does-the-signing-work","title":"How does the signing work?","text":"

NetAlertX will use the configured secret to create a hash signature of the request body. This SHA256-HMAC signature will appear in the X-Webhook-Signature header of each request to the webhook target URL. You can use the value of this header to validate the request was sent by NetAlertX.

"},{"location":"WEBHOOK_SECRET/#activating-webhook-signatures","title":"Activating webhook signatures","text":"

All you need to do in order to add a signature to the request headers is to set the WEBHOOK_SECRET config value to a non-empty string.

"},{"location":"WEBHOOK_SECRET/#validating-webhook-deliveries","title":"Validating webhook deliveries","text":"

There are a few things to keep in mind when validating the webhook delivery:

"},{"location":"WEBHOOK_SECRET/#testing-the-webhook-payload-validation","title":"Testing the webhook payload validation","text":"

You can use the following secret and payload to verify that your implementation is working correctly.

secret: 'this is my secret'

payload: '{\"test\":\"this is a test body\"}'

If your implementation is correct, the signature you generated should match the following:

signature: bed21fcc34f98e94fd71c7edb75e51a544b4a3b38b069ebaaeb19bf4be8147e9

X-Webhook-Signature: sha256=bed21fcc34f98e94fd71c7edb75e51a544b4a3b38b069ebaaeb19bf4be8147e9
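On the receiving end, validation can be sketched as follows (a minimal Python example using the test values above; how you obtain the raw request body and headers depends on your framework):

import hashlib, hmac\n\nsecret = 'this is my secret'\npayload = b'{\"test\":\"this is a test body\"}'\nreceived = 'sha256=bed21fcc34f98e94fd71c7edb75e51a544b4a3b38b069ebaaeb19bf4be8147e9'\n\nexpected = 'sha256=' + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()\n\n# compare_digest avoids timing side channels when comparing signatures\nprint('valid' if hmac.compare_digest(expected, received) else 'invalid')\n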

"},{"location":"WEBHOOK_SECRET/#more-information","title":"More information","text":"

If you want to learn more about webhook security, take a look at GitHub's webhook documentation.

You can find examples for validating a webhook delivery here.

"},{"location":"WEB_UI_PORT_DEBUG/","title":"Debugging inaccessible UI","text":"

The application uses the following default ports:

The Web UI is served by an nginx server, while the API backend runs on a Flask (Python) server.

"},{"location":"WEB_UI_PORT_DEBUG/#changing-ports","title":"Changing Ports","text":"

For more information, check the Docker installation guide.

"},{"location":"WEB_UI_PORT_DEBUG/#possible-issues-and-troubleshooting","title":"Possible issues and troubleshooting","text":"

Follow all of the steps below to rule out potential causes and troubleshoot these problems faster.

"},{"location":"WEB_UI_PORT_DEBUG/#1-port-conflicts","title":"1. Port conflicts","text":"

When opening an issue or debugging:

  1. Include a screenshot of what you see when accessing http://<your rpi IP>:20211 (or your custom port).
  2. Follow steps 1, 2, 3, 4 on this page.
  3. Execute the following in the container to see the processes and their ports, and submit a screenshot of the result:
     - sudo apk add lsof
     - sudo lsof -i
  4. Try running the nginx command in the container:
     - If you get nginx: [emerg] bind() to 0.0.0.0:20211 failed (98: Address in use), try using a different port number.

"},{"location":"WEB_UI_PORT_DEBUG/#2-javascript-issues","title":"2. JavaScript issues","text":"

Check for browser console (F12 browser dev console) errors + check different browsers.

"},{"location":"WEB_UI_PORT_DEBUG/#3-clear-the-app-cache-and-cached-javascript-files","title":"3. Clear the app cache and cached JavaScript files","text":"

Refresh the browser cache (usually Shift + Refresh), try a private window, or try different browsers. Please also refresh the app cache by clicking the \ud83d\udd03 (reload) button in the header of the application.

"},{"location":"WEB_UI_PORT_DEBUG/#4-disable-proxies","title":"4. Disable proxies","text":"

If you have any reverse proxy or similar, try disabling it.

"},{"location":"WEB_UI_PORT_DEBUG/#5-disable-your-firewall","title":"5. Disable your firewall","text":"

If you are using a firewall, try temporarily disabling it.

"},{"location":"WEB_UI_PORT_DEBUG/#6-post-your-docker-start-details","title":"6. Post your docker start details","text":"

If you haven't already, post your docker compose file or docker run command.

"},{"location":"WEB_UI_PORT_DEBUG/#7-check-for-errors-in-your-phpnginx-error-logs","title":"7. Check for errors in your PHP/NGINX error logs","text":"

Inside the container, execute the following commands and investigate the output:

cat /var/log/nginx/error.log

cat /app/log/app.php_errors.log

"},{"location":"WEB_UI_PORT_DEBUG/#8-make-sure-permissions-are-correct","title":"8. Make sure permissions are correct","text":"

Tip

You can try starting the container without mapping the /app/config and /app/db directories; if the UI then shows up, the issue is most likely related to your file system permissions or file ownership.

Please read the Permissions troubleshooting guide and provide a screenshot of the permissions and ownership of the /app/db and /app/config directories.

"},{"location":"WORKFLOWS/","title":"Workflows Overview","text":"

The workflows module in NetAlertX allows you to automate repetitive tasks, making network management more efficient. Whether you need to assign newly discovered devices to a specific Network Node, auto-group devices from a given vendor, unarchive a device if it is detected online, or automatically delete devices, this module provides the flexibility to tailor the automations to your needs.

Below are a few examples that demonstrate how this module can be used to simplify network management tasks.

"},{"location":"WORKFLOWS/#updating-workflows","title":"Updating Workflows","text":"

Note

In order to apply a workflow change, you must first Save the changes and then reload the application by clicking Restart server.

"},{"location":"WORKFLOWS/#workflow-components","title":"Workflow components","text":""},{"location":"WORKFLOWS/#triggers","title":"Triggers","text":"

Triggers define the event that activates a workflow. They monitor changes to objects within the system, such as updates to devices or the insertion of new entries. When the specified event occurs, the workflow is executed.

Tip

Workflows not running? Check the Workflows debugging guide for how to troubleshoot triggers and conditions.

"},{"location":"WORKFLOWS/#example-trigger","title":"Example Trigger:","text":"

This trigger will activate when a Device object is updated.
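
Expressed in the JSON export format used in the Workflow Examples (see Example 1 below), such a trigger looks roughly like this:

{\n  \"object_type\": \"Devices\",\n  \"event_type\": \"update\"\n}\n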

"},{"location":"WORKFLOWS/#conditions","title":"Conditions","text":"

Conditions determine whether a workflow should proceed based on certain criteria. These criteria can be set for specific fields, such as whether a device is from a certain vendor, or whether it is new or archived. You can combine conditions using logical operators (AND, OR).

Tip

To better understand how to use specific Device fields, please read through the Database overview guide.

"},{"location":"WORKFLOWS/#example-condition","title":"Example Condition:","text":"

This condition checks if the device's vendor is Google. The workflow will only proceed if the condition is true.
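
In the same JSON form, a vendor check like this can be written as follows (Example 3 below uses the same contains check):

{\n  \"logic\": \"AND\",\n  \"conditions\": [\n    {\n      \"field\": \"devVendor\",\n      \"operator\": \"contains\",\n      \"value\": \"Google\"\n    }\n  ]\n}\n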

"},{"location":"WORKFLOWS/#actions","title":"Actions","text":"

Actions define the tasks that the workflow will perform once the conditions are met. Actions can include updating fields or deleting devices.

You can include multiple actions that should execute once the conditions are met.

"},{"location":"WORKFLOWS/#example-action","title":"Example Action:","text":"

This action updates the devIsNew field to 0, marking the device as no longer new.
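
In the JSON form, the corresponding action looks like this (Example 3 below uses the same action):

{\n  \"type\": \"update_field\",\n  \"field\": \"devIsNew\",\n  \"value\": \"0\"\n}\n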

"},{"location":"WORKFLOWS/#examples","title":"Examples","text":"

You can find a couple of configuration examples in Workflow Examples.

Tip

Share your workflows in Discord or GitHub Discussions.

"},{"location":"WORKFLOWS_DEBUGGING/","title":"Workflows debugging and troubleshooting","text":"

Tip

Before troubleshooting, please ensure you have Debugging enabled.

Workflows are triggered by various events. These events are captured and listed in the Integrations -> App Events section of the application.

"},{"location":"WORKFLOWS_DEBUGGING/#troubleshooting-triggers","title":"Troubleshooting triggers","text":"

Note

Workflow events are processed once every 5 seconds. However, if a scan or other background tasks are running, this can cause a delay of up to a few minutes.

If an event doesn't trigger a workflow as expected, check the App Events section for the event. You can filter these by the ID of the device (devMac or devGUID).

Once you find the Event GUID and Object GUID, use them to find the relevant debug entries.

Navigate to Maintenance -> Logs, where you can filter the logs based on the Event or Object GUID.
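
Alternatively, assuming the application log is available at /app/log/app.log inside the container (the exact path may differ in your setup), you can filter it directly, for example:

# assumes the app log lives at /app/log/app.log; replace the GUID with your Event or Object GUID\ngrep '050b6980-7af6-4409-950d-08e9786b7b33' /app/log/app.log\n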

Below you can find some example app.log entries that will help you understand why a Workflow was or was not triggered.

16:27:03 [WF] Checking if '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggers the workflow 'Sample Device Update Workflow'\n16:27:03 [WF] self.triggered 'False' for event '[[155], ['13f0ce26-1835-4c48-ae03-cdaf38f328fe'], [0], ['2025-04-02 05:26:56'], ['Devices'], ['050b6980-7af6-4409-950d-08e9786b7b33'], ['DEVICES'], ['00:11:32:ef:a5:6c'], ['192.168.1.82'], ['050b6980-7af6-4409-950d-08e9786b7b33'], [None], [0], [0], ['devPresentLastScan'], ['online'], ['update'], [None], [None], [None], [None]] and trigger {\"object_type\": \"Devices\", \"event_type\": \"insert\"}' \n16:27:03 [WF] Checking if '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggers the workflow 'Location Change'\n16:27:03 [WF] self.triggered 'True' for event '[[155], ['13f0ce26-1835-4c48-ae03-cdaf38f328fe'], [0], ['2025-04-02 05:26:56'], ['Devices'], ['050b6980-7af6-4409-950d-08e9786b7b33'], ['DEVICES'], ['00:11:32:ef:a5:6c'], ['192.168.1.82'], ['050b6980-7af6-4409-950d-08e9786b7b33'], [None], [0], [0], ['devPresentLastScan'], ['online'], ['update'], [None], [None], [None], [None]] and trigger {\"object_type\": \"Devices\", \"event_type\": \"update\"}' \n16:27:03 [WF] Event with GUID '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggered the workflow 'Location Change'\n

Note how one trigger fired but the other didn't, based on their different \"event_type\" values: one is \"event_type\": \"insert\", the other \"event_type\": \"update\".

Since the event is an update event (note ...['online'], ['update'], [None]... in the event structure), the \"event_type\": \"insert\" trigger did not fire.

"},{"location":"WORKFLOW_EXAMPLES/","title":"Workflow examples","text":"

Workflows in NetAlertX automate actions based on real-time events and conditions. Below are practical examples that demonstrate how to build automation using triggers, conditions, and actions.

"},{"location":"WORKFLOW_EXAMPLES/#example-1-un-archive-devices-if-detected-online","title":"Example 1: Un-archive devices if detected online","text":"

This workflow automatically unarchives a device if it was previously archived but has now been detected as online.

"},{"location":"WORKFLOW_EXAMPLES/#use-case","title":"\ud83d\udccb Use Case","text":"

Sometimes devices are manually archived (e.g., no longer expected on the network), but they reappear unexpectedly. This workflow reverses the archive status when such devices are detected during a scan.

"},{"location":"WORKFLOW_EXAMPLES/#workflow-configuration","title":"\u2699\ufe0f Workflow Configuration","text":"
{\n  \"name\": \"Un-archive devices if detected online\",\n  \"trigger\": {\n    \"object_type\": \"Devices\",\n    \"event_type\": \"update\"\n  },\n  \"conditions\": [\n    {\n      \"logic\": \"AND\",\n      \"conditions\": [\n        {\n          \"field\": \"devIsArchived\",\n          \"operator\": \"equals\",\n          \"value\": \"1\"\n        },\n        {\n          \"field\": \"devPresentLastScan\",\n          \"operator\": \"equals\",\n          \"value\": \"1\"\n        }\n      ]\n    }\n  ],\n  \"actions\": [\n    {\n      \"type\": \"update_field\",\n      \"field\": \"devIsArchived\",\n      \"value\": \"0\"\n    }\n  ],\n  \"enabled\": \"Yes\"\n}\n
"},{"location":"WORKFLOW_EXAMPLES/#explanation","title":"\ud83d\udd0d Explanation","text":"
- Trigger: Listens for updates to device records.\n- Conditions:\n    - `devIsArchived` is `1` (archived).\n    - `devPresentLastScan` is `1` (device was detected in the latest scan).\n- Action: Updates the device to set `devIsArchived` to `0` (unarchived).\n
"},{"location":"WORKFLOW_EXAMPLES/#result","title":"\u2705 Result","text":"

Whenever a previously archived device shows up during a network scan, it will be automatically unarchived \u2014 allowing it to reappear in your device lists and dashboards.

"},{"location":"WORKFLOW_EXAMPLES/#example-2-assign-device-to-network-node-based-on-ip","title":"Example 2: Assign Device to Network Node Based on IP","text":"

This workflow assigns newly added devices with IP addresses in the 192.168.1.* range to a specific network node with MAC address 6c:6d:6d:6c:6c:6c.

"},{"location":"WORKFLOW_EXAMPLES/#use-case_1","title":"\ud83d\udccb Use Case","text":"

When new devices join your network, assigning them to the correct network node is important for accurate topology and grouping. This workflow ensures devices in a specific subnet are automatically linked to the intended node.

"},{"location":"WORKFLOW_EXAMPLES/#workflow-configuration_1","title":"\u2699\ufe0f Workflow Configuration","text":"
{\n  \"name\": \"Assign Device to Network Node Based on IP\",\n  \"trigger\": {\n    \"object_type\": \"Devices\",\n    \"event_type\": \"insert\"\n  },\n  \"conditions\": [\n    {\n      \"logic\": \"AND\",\n      \"conditions\": [\n        {\n          \"field\": \"devLastIP\",\n          \"operator\": \"contains\",\n          \"value\": \"192.168.1.\"\n        }\n      ]\n    }\n  ],\n  \"actions\": [\n    {\n      \"type\": \"update_field\",\n      \"field\": \"devNetworkNode\",\n      \"value\": \"6c:6d:6d:6c:6c:6c\"\n    }\n  ],\n  \"enabled\": \"Yes\"\n}\n
"},{"location":"WORKFLOW_EXAMPLES/#explanation_1","title":"\ud83d\udd0d Explanation","text":""},{"location":"WORKFLOW_EXAMPLES/#result_1","title":"\u2705 Result","text":"

New devices with IPs in the 192.168.1.* subnet are automatically assigned to the correct network node, streamlining device organization and reducing manual work.

"},{"location":"WORKFLOW_EXAMPLES/#example-3-mark-device-as-not-new-and-delete-if-from-google-vendor","title":"Example 3: Mark Device as Not New and Delete If from Google Vendor","text":"

This workflow automatically marks newly detected Google devices as not new and deletes them immediately.

"},{"location":"WORKFLOW_EXAMPLES/#use-case_2","title":"\ud83d\udccb Use Case","text":"

You may want to automatically clear out newly detected Google devices (such as Chromecast or Google Home) if they\u2019re not needed in your device database. This workflow handles that clean-up automatically.

"},{"location":"WORKFLOW_EXAMPLES/#workflow-configuration_2","title":"\u2699\ufe0f Workflow Configuration","text":"
{\n  \"name\": \"Mark Device as Not New and Delete If from Google Vendor\",\n  \"trigger\": {\n    \"object_type\": \"Devices\",\n    \"event_type\": \"update\"\n  },\n  \"conditions\": [\n    {\n      \"logic\": \"AND\",\n      \"conditions\": [\n        {\n          \"field\": \"devVendor\",\n          \"operator\": \"contains\",\n          \"value\": \"Google\"\n        },\n        {\n          \"field\": \"devIsNew\",\n          \"operator\": \"equals\",\n          \"value\": \"1\"\n        }\n      ]\n    }\n  ],\n  \"actions\": [\n    {\n      \"type\": \"update_field\",\n      \"field\": \"devIsNew\",\n      \"value\": \"0\"\n    },\n    {\n      \"type\": \"delete_device\"\n    }\n  ],\n  \"enabled\": \"Yes\"\n}\n
"},{"location":"WORKFLOW_EXAMPLES/#explanation_2","title":"\ud83d\udd0d Explanation","text":""},{"location":"WORKFLOW_EXAMPLES/#result_2","title":"\u2705 Result","text":"

Any newly detected Google devices are cleaned up instantly \u2014 first marked as not new, then deleted \u2014 helping you avoid clutter in your device records.

"}]}