- Updated `get_source_for_field_update_with_value` to determine source values based on new field values, including handling for empty and unknown values.
- Introduced `get_overwrite_sql_clause` to build SQL conditions for authoritative overwrite checks based on plugin settings.
- Enhanced `update_devices_data_from_scan` to utilize new authoritative settings and conditions for updating device fields.
- Added new tests for source value determination and device creation to ensure proper handling of source fields.
- Created in-memory SQLite database fixtures for testing device creation and updates.
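A minimal sketch of what such an in-memory fixture can look like (the schema and column names here are illustrative assumptions, not the real NetAlertX `Devices` schema):

```python
import sqlite3

def make_test_db():
    # Hypothetical minimal schema -- the real Devices table has many more columns.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE Devices (devMac TEXT PRIMARY KEY, devName TEXT, devNameSource TEXT)"
    )
    # Seed one device so update/source-determination tests have a row to work on.
    conn.execute("INSERT INTO Devices VALUES ('aa:bb:cc:dd:ee:ff', '(unknown)', '')")
    conn.commit()
    return conn
```

Because the database lives entirely in memory, each test gets a fresh, isolated copy with no cleanup needed.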
New Features:
- API endpoints now support comprehensive input validation with detailed error responses via Pydantic models.
- OpenAPI specification endpoint (/openapi.json) and interactive Swagger UI documentation (/docs) now available for API discovery.
- Enhanced MCP session lifecycle management with create, retrieve, and delete operations.
- Network diagnostic tools: traceroute, nslookup, NMAP scanning, and network topology viewing exposed via API.
- Device search, filtering by status (including 'offline'), and bulk operations (copy, delete, update).
- Wake-on-LAN functionality for remote device management.
- Added dynamic tool disablement and status reporting.
Bug Fixes:
- Fixed get_tools_status in registry to correctly return boolean values instead of None for enabled tools.
- Improved error handling for invalid API inputs with standardized validation responses.
- Fixed OPTIONS request handling for cross-origin requests.
Refactoring:
- Significant refactoring of api_server_start.py to use decorator-based validation (@validate_request).
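As a rough illustration of the decorator-based approach (this is a hypothetical standalone sketch, not the actual `@validate_request` implementation, which validates via Pydantic models):

```python
from functools import wraps

def validate_request(schema):
    """Hypothetical validator: `schema` maps field names to (type, required) pairs.

    Invalid input short-circuits with a 400-style error dict instead of
    reaching the handler.
    """
    def decorator(handler):
        @wraps(handler)
        def wrapper(payload):
            errors = {}
            for field, (ftype, required) in schema.items():
                if field not in payload:
                    if required:
                        errors[field] = "missing"
                elif not isinstance(payload[field], ftype):
                    errors[field] = f"expected {ftype.__name__}"
            if errors:
                return {"status": 400, "errors": errors}
            return handler(payload)
        return wrapper
    return decorator

@validate_request({"mac": (str, True), "limit": (int, False)})
def list_devices(payload):
    # Handler only runs once the payload has passed validation.
    return {"status": 200, "devices": []}
```

The benefit is that each endpoint declares its input contract once, next to the handler, and malformed requests produce a standardized error shape.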
- Added details for NATIVE_SPEEDTEST_PATH to the README under 'Usage'.
- Explained default behavior and included examples for overriding the binary location.
- Added a verbose log to print the binary path when the plugin starts up.
- Introduce native Ookla Speedtest binary support for Gigabit connections
- Add intelligent engine detection with automatic fallback to python-cli version
- Map full JSON payload to Watched_Value3 for n8n integration
- Add Spanish (es_es) localizations and update README instructions
This script generates a synthetic CSV inventory of NetAlertX devices, including routers, switches, APs, and leaf nodes with random but reproducible attributes.
```text
$ ./generate_device_inventory.py --help
usage: generate_device_inventory.py [-h] [--output OUTPUT] [--seed SEED]
                                    [--devices DEVICES] [--switches SWITCHES]
                                    [--aps APS] [--site SITE] [--ssid SSID]
                                    [--owner OWNER] [--network NETWORK]
                                    [--template TEMPLATE]

Generate a synthetic device CSV for NetAlertX

options:
  -h, --help           show this help message and exit
  --output OUTPUT, -o OUTPUT
                       Output CSV path
  --seed SEED          Seed for reproducible output
  --devices DEVICES    Number of leaf nodes to generate
  --switches SWITCHES  Number of switches under the router
  --aps APS            Number of APs under switches
  --site SITE          Site name
  --ssid SSID          SSID placeholder
  --owner OWNER        Owner name for devices
  --network NETWORK    IPv4 network to draw addresses from (must have enough
                       hosts for requested devices)
  --template TEMPLATE  Optional CSV to pull header from; defaults to the
                       sample inventory layout
```
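The `--network` capacity constraint amounts to a check like the following sketch (the function name is illustrative, not taken from the script):

```python
import ipaddress

def network_has_capacity(network: str, needed_hosts: int) -> bool:
    # Exclude the network and broadcast addresses: a /24 yields 254 usable hosts.
    net = ipaddress.ip_network(network, strict=False)
    return net.num_addresses - 2 >= needed_hosts
```

For example, a /28 (14 usable hosts) cannot accommodate 50 devices, so the generator would have to reject that combination.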
Uses the new run_docker_tests.sh script which is self-contained and handles all dependencies and test execution within a Docker container. This ensures that the CI environment is consistent with the local devcontainer environment.
Fixes an issue where the job name 'test' was considered invalid. Renamed to 'docker-tests'.
Ensures that tests marked as 'feature_complete' are also excluded from the test run.
Introduces a comprehensive script to build, run, and test NetAlertX within a Dockerized devcontainer environment, replicating the setup defined in . This script ensures consistency for CI/CD pipelines and local development.
The script addresses several environmental challenges:
- Properly builds the Docker image.
- Starts the container with necessary capabilities and host-gateway.
- Installs Python test dependencies (, , ) into the virtual environment.
- Executes the script to initialize services.
- Implements a healthcheck loop to wait for services to become fully operational before running tests.
- Configures to use a writable cache directory () to avoid permission issues.
- Includes a workaround to insert a dummy 'internet' device into the database, resolving a flakiness in caused by its reliance on unpredictable database state without altering any project code.
This script ensures a green test suite, making it suitable for automated testing in environments like GitHub Actions.
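The healthcheck loop described above can be sketched roughly like this (URL and timeout are assumptions, not the script's actual values):

```shell
# Poll the web UI until it answers, or give up after a timeout (seconds).
wait_for_service() {
  url="$1"
  timeout="${2:-60}"
  i=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$timeout" ]; then
      echo "service did not become ready within ${timeout}s" >&2
      return 1
    fi
    sleep 1
  done
  echo "service ready after ${i} attempt(s)"
}
```

Waiting on an HTTP probe rather than a fixed `sleep` keeps CI runs both faster and less flaky.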
## Problem
PR #1182 introduced SafeConditionBuilder to prevent SQL injection, but it only
supported single-clause conditions. This broke notification filters using multiple
AND/OR clauses, causing user filters like:
`AND devLastIP NOT LIKE '192.168.50.%' AND devLastIP NOT LIKE '192.168.60.%'...`
to be rejected with "Unsupported condition pattern" errors.
## Root Cause
The `_parse_condition()` method used regex patterns that only matched single
conditions. When multiple clauses were chained, the entire string failed to match
any pattern and was rejected for security.
## Solution
Enhanced SafeConditionBuilder with compound condition support:
1. **Added `_is_compound_condition()`** - Detects multiple logical operators
while respecting quoted strings
2. **Added `_parse_compound_condition()`** - Splits compound conditions into
individual clauses and parses each one
3. **Added `_split_by_logical_operators()`** - Intelligently splits on AND/OR
while preserving operators in quoted strings
4. **Refactored `_parse_condition()`** - Routes to compound or single parser
5. **Created `_parse_single_condition()`** - Handles individual clauses (from
original `_parse_condition` logic)
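A simplified sketch of the quote-aware splitting idea behind `_split_by_logical_operators()` (illustrative only, not the actual implementation):

```python
def split_by_logical_operators(condition):
    """Split on AND/OR that occur outside single-quoted strings.

    Returns (clauses, operators). Operators inside quotes are left untouched.
    """
    clauses, operators = [], []
    buf = ""
    in_quote = False
    upper = condition.upper()
    i = 0
    while i < len(condition):
        ch = condition[i]
        if ch == "'":
            in_quote = not in_quote  # toggle quote state
            buf += ch
            i += 1
            continue
        if not in_quote:
            matched = False
            for op in (" AND ", " OR "):
                if upper.startswith(op, i):
                    clauses.append(buf.strip())
                    operators.append(op.strip())
                    buf = ""
                    i += len(op)
                    matched = True
                    break
            if matched:
                continue
        buf += ch
        i += 1
    clauses.append(buf.strip())
    return clauses, operators
```

Each extracted clause can then be fed through the single-condition parser, so the security checks from PR #1182 still apply per clause.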
## Testing
- Added comprehensive test suite (19 tests, 100% passing)
- Tested user's exact failing filter (6 AND clauses with NOT LIKE)
- Verified backward compatibility with single conditions
- Validated security (SQL injection attempts still blocked)
- Tested edge cases (mixed AND/OR, whitespace, empty conditions)
## Impact
- ✅ Fixes reported issue #1210
- ✅ Maintains all security protections from PR #1182
- ✅ Backward compatible with existing single-clause filters
- ✅ No breaking changes to API
Fixes #1210

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
If the schedule input is incorrect, an error message is logged and the plugin will NOT run.
Creating a dummy schedule would throw the system out of balance as there's the danger of schedules running out of sync.
`setup.sh` and `start.sh` combined into a single script
netalertx now starts and runs via a systemd unit and can be started, stopped, and restarted:
`systemctl start netalertx`
`systemctl stop netalertx`
`systemctl status netalertx`
etc
Logs to `journalctl` and output can be followed with `journalctl -f`
Amalgamated and tuned chmods based on earlier feedback and discussion
install script accepts command line parameter:
- 'install' to continue and DELETE ALL!
- 'update' to just update from GIT (keeps your db and settings)
- 'start' to do nothing, leave install as-is (just run the start script, set up services etc)
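The parameter handling can be pictured as a simple dispatch like this hypothetical sketch (messages are illustrative, not the installer's actual output):

```shell
# Dispatch on the installer's first command-line parameter.
dispatch() {
  case "${1:-start}" in
    install) echo "full install (DELETES ALL data)" ;;
    update)  echo "git update (keeps db and settings)" ;;
    start)   echo "leave install as-is, start services" ;;
    *)       echo "usage: $0 {install|update|start}" >&2; return 1 ;;
  esac
}
```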
Please have a look, comments welcome :-)
The stdout and stderr streams are useful logs when debugging and trying to figure out why plugin output is causing the backend to stop with an exception. This commit enables output redirection to `/app/stdout.log` and `/app/stderr.log` from the backend. This may need backporting to production, as it appears the fields are unused in the backend.
Additionally, when searching logs in the UI, the old logs appear first and your search results will invariably find old information when searching with ctrl-f-"string"-enter. So upon backend start and to keep them relevant, the stdout, stderr, and app logs are cleared.
- Added build_condition method to SafeConditionBuilder for structured conditions
- Fixed test_multiple_conditions_valid to test single conditions (more secure)
- Fixed test_build_condition tests by implementing the missing method
- Updated documentation to be more concise and human-friendly
- All 19 security tests now passing
- All SQL injection vectors properly blocked
Test Results:
✅ 19/19 tests passing
✅ All SQL injection attempts blocked
✅ Parameter binding working correctly
✅ Whitelist validation effective
The implementation provides comprehensive protection while maintaining
usability and backward compatibility.
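The core protection, binding user values as parameters rather than interpolating them into SQL, looks roughly like this sketch using `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Devices (devLastIP TEXT)")
conn.executemany("INSERT INTO Devices VALUES (?)", [("192.168.50.5",), ("10.0.0.7",)])

# The clause shape comes from whitelisted column/operator names; the
# user-supplied value travels only as a bound parameter, never as SQL text.
rows = conn.execute(
    "SELECT devLastIP FROM Devices WHERE devLastIP NOT LIKE ?",
    ("192.168.50.%",),
).fetchall()
```

Even a malicious value such as `"' OR 1=1 --"` would be compared as a literal string, not executed as SQL.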
Avoids repeated code in notification_instance.
Still, a refactor would be great, as the plugins_events table is filled in plugin.py and should therefore be cleared there as well.
Added some of the hand picked suggestions, including some outside of the previous changes.
Some will improve documentation, some readability and some will affect performance.
Adds a bare-metal installer for Ubuntu. Tested with version 24.04. You may want, or need, to change the PHPVERSION variable in the start script for other versions.
Adds templates for enhancements to differentiate enhancing existing features and adding whole new ones.
Refactor/Code quality is mostly for dev/contributor use for doc purposes.
Security report is essential and also directs them to reach out with sensitive details directly
Translation requests added to allow additional accessibility to be requested as-needed and to allow prioritization based on need.
Add Table of Contents
Add Quick Start guide for Docker and Home Assistant
Fix typo in line 67 (was 33) lits -> list
Add Security & Privacy section
Add FAQ
Add Known Issues
Adds a new GitHub issue template for reporting documentation-related suggestions, inconsistencies, or improvements.
This template helps contributors provide clear, categorized feedback on docs, making it easier to track and prioritize structural or content-related issues separately from codebase bugs or feature requests.
Includes fields for:
- Affected document/section
- Description of the issue
- Proposed solution
- Type of documentation issue
- Optional implementation offer
Helps improve overall clarity, uniformity, and contributor experience with documentation.
This patch improves the resilience of the guess_icon function by sanitizing mac, vendor, and name fields to avoid crashes caused by unexpected data types (e.g., numeric hostnames).
Specifically:
mac is now cast to a string before being uppercased, with a newly added fallback to "00:00:00:00:00:00" if empty or invalid.
vendor is sanitized to a string before lowercasing, still defaulting to "unknown".
name is cast to a string before lowercasing, still falling back to "(unknown)" when empty.
This change not only resolves the error caused by numeric-only hostnames (which triggered an AttributeError due to calling .lower() on an int), but also proactively prevents similar crashes from malformed or unexpected input in the future.
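The sanitization logic boils down to something like this sketch (a standalone approximation, not the patch's exact code):

```python
def sanitize_inputs(mac, vendor, name):
    """Coerce fields to strings with safe fallbacks before case conversion."""
    mac = str(mac).upper() if mac else "00:00:00:00:00:00"
    vendor = str(vendor).lower() if vendor else "unknown"
    name = str(name).lower() if name else "(unknown)"
    return mac, vendor, name
```

A numeric hostname like `12345` now becomes the string `"12345"` instead of raising `AttributeError` on `.lower()`.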
References: Fixes issue #1088 and also lets me sleep a little easier tonight.
This devcontainer replicates the production container as closely as practical, with a few development-oriented differences.
Key behavior
- No init process: Services are managed by shell scripts using killall, setsid, and nohup. Startup and restarts are script-driven rather than supervised by an init system.
- Autogenerated Dockerfile: The effective devcontainer Dockerfile is generated on demand by `.devcontainer/scripts/generate-dockerfile.sh`. It combines the root `Dockerfile` (with certain COPY instructions removed) and an extra "devcontainer" stage from `.devcontainer/resources/devcontainer-Dockerfile`. When you change the resource Dockerfile, re-run the generator to refresh `.devcontainer/Dockerfile`.
- Where to put setup: Prefer baking setup into `.devcontainer/resources/devcontainer-Dockerfile`. Use `.devcontainer/scripts/setup.sh` only for steps that must happen at container start (e.g., cleaning up nginx/php ownership, creating directories, touching runtime files) or depend on runtime paths.
Debugging (F5)
The frontend and backend always run in debug mode; you can attach your debugger at any time.
- Python Backend (debugpy): The backend runs with a debugger on port 5678. Set breakpoints in the code and press F5 to start triggering them.
- PHP Frontend (Xdebug): Xdebug listens on port 9003. Start listening and use an Xdebug extension in your browser to debug PHP.
Common workflows (F1->Tasks: Run Task)
- Regenerate the devcontainer Dockerfile: Run the VS Code task "Generate Dockerfile" or execute `.devcontainer/scripts/generate-dockerfile.sh`. The result is `.devcontainer/Dockerfile`.
- Re-run startup provisioning: Use the task "Re-Run Startup Script" to execute `.devcontainer/scripts/setup.sh` in the container.
- Start services:
- Backend (GraphQL/Flask): `.devcontainer/scripts/restart-backend.sh` starts it under debugpy and logs to `/app/log/app.log`
- Frontend (nginx + PHP-FPM): Started via setup.sh; can be restarted by the task "Start Frontend (nginx and PHP-FPM)".
"Workspace Instructions":"printf '\n\n<> DevContainer Ready! Starting Services...\n\n📁 To access /tmp folders in the workspace:\n File → Open Workspace from File → NetAlertX.code-workspace\n\n📖 See .devcontainer/WORKSPACE.md for details\n\n'"
},
"postStartCommand":{
"Build test-container":"echo To speed up tests, building test container in background... && setsid docker buildx build -t netalertx-test . > /tmp/build.log 2>&1 && echo '🧪 Unit Test Docker image built: netalertx-test' &",
description: Guide for identifying, managing, and running commands within the NetAlertX development container. Use this when asked to run commands, tests, or setup scripts, or to troubleshoot container issues.
---
# Devcontainer Management
When starting a session or performing tasks requiring the runtime environment, you must identify and use the active development container.
## Finding the Container
Run `docker ps` to list running containers. Look for an image name containing `vsc-netalertx` or similar.
- **If no container is found:** Inform the user. You cannot run integration tests or backend logic without it.
- **If multiple containers are found:** Ask the user to clarify which one to use (e.g., provide the Container ID).
## Running Commands in the Container
Prefix commands with `docker exec <CONTAINER_ID>` to run them inside the environment. Use the scripts in `/services/` to control backend and other processes.
description: Enables live interaction with the NetAlertX runtime. This skill configures the Model Context Protocol (MCP) connection, granting full API access for debugging, troubleshooting, and real-time operations including database queries, network scans, and device management.
---
# MCP Activation Skill
This skill configures the NetAlertX development environment to expose the Model Context Protocol (MCP) server to AI agents.
## Why use this?
By default, agents only have access to the static codebase (files). To perform dynamic actions—such as:
- **Querying the database** (e.g., getting device lists, events)
- **Validating runtime state** (e.g., checking if a fix actually works)
...you need access to the **MCP Server** running inside the container. This skill sets up the necessary authentication tokens and connection configs to bridge your agent to that live server.
## Prerequisites
1. **Devcontainer:** You must be connected to the NetAlertX devcontainer.
2. **Server Running:** The backend server must be running (to generate `app.conf` with the API token).
## Activation Steps
1. **Activate Devcontainer Skill:**
If you are not already inside the container, activate the management skill:
```text
activate_skill("devcontainer-management")
```
2. **Generate Configurations:**
Run the configuration generation script *inside* the container. This script extracts the API Token and creates the necessary settings files (`.gemini/settings.json` and `.vscode/mcp.json`).
description: Reference for the NetAlertX codebase structure, key file paths, and configuration locations. Use this when exploring the codebase or looking for specific components like the backend entry point, frontend files, or database location.
---
# Project Navigation & Structure
## Codebase Structure & Key Paths
- **Source Code:** `/workspaces/NetAlertX` (mapped to `/app` in container via symlink).
- **Backend Entry:** `server/api_server/api_server_start.py` (Flask) and `server/__main__.py`.
- **Frontend:** `front/` (PHP/JS).
- **Plugins:** `front/plugins/`.
- **Config:** `/data/config/app.conf` (runtime) or `back/app.conf` (default).
description: Read before running tests. Detailed instructions for single, standard unit tests (fast), full suites (slow), handling authentication, and obtaining the API Token. Tests must be run when a job is complete.
---
# Testing Workflow
After code is developed, tests must be run to ensure the integrity of the final result.
**Crucial:** Tests MUST be run inside the container to access the correct runtime environment (DB, Config, Dependencies).
## 0. Pre-requisites: Environment Check
Before running any tests, verify you are inside the development container:
```bash
ls -d /workspaces/NetAlertX
```
**IF** this directory does not exist, you are likely on the host machine. You **MUST** immediately activate the `devcontainer-management` skill to enter the container or run commands inside it.
```text
activate_skill("devcontainer-management")
```
## 1. Full Test Suite (MANDATORY DEFAULT)
Unless the user **explicitly** requests "fast" or "quick" tests, you **MUST** run the full test suite. **Do not** optimize for time. Comprehensive coverage is the priority over speed.
```bash
cd /workspaces/NetAlertX; pytest test/
```
## 2. Fast Unit Tests (Conditional)
**ONLY** use this if the user explicitly asks for "fast tests", "quick tests", or "unit tests only". This **excludes** slow tests marked with `docker` or `feature_complete`.
```bash
cd /workspaces/NetAlertX; pytest test/ -m 'not docker and not feature_complete'
```
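For the `-m` expression to work without "unknown marker" warnings, the `docker` and `feature_complete` markers need to be registered; a hypothetical `pytest.ini` sketch (the repository may already declare these elsewhere, e.g. in `pyproject.toml`):

```ini
[pytest]
markers =
    docker: slow tests that require the Docker test container
    feature_complete: slow end-to-end feature tests
```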
## 3. Running Specific Tests
To run a specific file or folder:
```bash
cd /workspaces/NetAlertX; pytest test/<path_to_test>
```
*Example:*
```bash
cd /workspaces/NetAlertX; pytest test/api_endpoints/test_mcp_extended_endpoints.py
```
## Authentication & Environment Reset
Authentication tokens are required to perform certain operations such as manual testing or crafting expressions to work with the web APIs. After making code changes, you MUST reset the environment to ensure the new code is running and verify you have the latest `API_TOKEN`.
1. **Reset Environment:** Run the setup script inside the container.
description:Please search to see if an open or closed issue already exists for the feature you are requesting.
options:
- label:I have searched the existing open and closed issues
required:true
body:
label:Anything else?
description:|
Links? References? Mockups? Anything that will give us more context about the feature you are encountering!
Tip:You can attach images or log files by clicking this area to highlight it and then dragging files in.
validations:
required:true
- type:checkboxes
attributes:
label:Am I willing to test this? 🧪
description:I rely on the community to test unreleased features. If you are requesting a feature, please be willing to test it within 48h of test request. Otherwise, the feature might be pulled from the code base.
options:
- label:I will do my best to test this feature on the `netalertx-dev` image when requested within 48h and report bugs to help deliver a great user experience for everyone and not to break existing installations.
required:true
- type:checkboxes
attributes:
label:Can I help implement this? 👩💻👨💻
description:The maintainer will provide guidance and help. The implementer will read the PR guidelines https://docs.netalertx.com/DEV_ENV_SETUP/
Logs with debug enabled (https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEBUG_TIPS.md) ⚠
***Generally speaking, all bug reports should have logs provided.***
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
Additionally, any additional info? Screenshots? References? Anything that will give us more context about the issue you are encountering!
You can use `tail -100 /app/log/app.log` in the container if you have trouble getting to the log files.
Paste your `docker-compose.yml`
render:yaml
validations:
required:false
- type:checkboxes
attributes:
label:Debug or Trace enabled
description:I confirm I set `LOG_LEVEL` to `debug` or `trace`
options:
- label:I have read and followed the steps in the wiki link above and provided the required debug logs and the log section covers the time when the issue occurs.
required:true
- type:textarea
attributes:
label:Relevant `app.log` section
value:|
```
PASTE LOG HERE. Using the triple backticks preserves format.
```
description:|
Logs with debug enabled (https://docs.netalertx.com/DEBUG_TIPS) ⚠
***Generally speaking, all bug reports should have logs provided.***
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
Additionally, any additional info? Screenshots? References? Anything that will give us more context about the issue you are encountering!
You can use `tail -100 /app/log/app.log` in the container if you have trouble getting to the log files or send them to netalertx@gmail.com with the issue number.
validations:
required:false
- type:textarea
attributes:
label:Docker Logs
description:|
You can retrieve the logs from Portainer -> Containers -> your NetAlertX container -> Logs or by running `sudo docker logs netalertx`.
value:|
```
PASTE DOCKER LOG HERE. Using the triple backticks preserves format.
description:'When submitting an issue enable LOG_LEVEL="trace" and re-search first.'
labels:['Setup 📥']
body:
- type:markdown
attributes:
value:|
<!-- NETALERTX_TEMPLATE -->
- type:dropdown
id:installation_type
attributes:
label:What installation are you running?
options:
- Production (netalertx) 📦
- Dev (netalertx-dev) 👩💻
- Home Assistant (addon) 🏠
- Home Assistant fa (full-access addon) 🏠
- Bare-metal (community only support - Check Discord) ❗
- Proxmox (community only support - Check Discord) ❗
- Unraid (community only support - Check Discord) ❗
validations:
required:true
- type:checkboxes
attributes:
label:Did I research?
description:Please confirm you checked the usual places before opening a setup support request.
options:
- label:I have searched the docs https://docs.netalertx.com/
required:true
- label:I have searched the existing open and closed issues
required:true
- label:I confirm my SCAN_SUBNETS is configured and tested as per https://docs.netalertx.com/SUBNETS
required:true
- type:checkboxes
attributes:
label:The issue occurs in the following browsers. Select at least 2.
description:This step helps me understand if this is a cache or browser-specific issue.
options:
- label:"Firefox"
- label:"Chrome"
- label:"Other (unsupported) - PRs welcome"
- label:"N/A - This is an issue with the backend"
- type:textarea
attributes:
label:What I want to do
description:Describe what you want to achieve.
validations:
required:false
- type:textarea
attributes:
label:Relevant settings you changed
description:|
Paste a screenshot or setting values of the settings you changed.
validations:
required:false
- type:textarea
attributes:
label:docker-compose.yml
description:|
Paste your `docker-compose.yml`
render:yaml
validations:
required:false
- type:textarea
attributes:
label:app.log
description:|
Logs with debug enabled (https://docs.netalertx.com/DEBUG_TIPS) ⚠
***Generally speaking, all bug reports should have logs provided.***
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
Additionally, any additional info? Screenshots? References? Anything that will give us more context about the issue you are encountering!
You can use `tail -100 /app/log/app.log` in the container if you have trouble getting to the log files.
validations:
required:false
- type:checkboxes
attributes:
label:Debug enabled
description:I confirm I enabled `debug`
options:
- label:I have read and followed the steps in the wiki link above and provided the required debug logs and the log section covers the time when the issue occurs.
<!-- Describe the purpose of this PR in one or two sentences. Example: "This PR updates the contributor guidelines by merging frontend and backend sections." -->
---
## 📝 What’s Changed?
<!-- Briefly outline what parts of the documentation were added, changed, removed, or reorganized -->
- Combined frontend and backend development guidelines into a single file
- Updated `mkdocs.yml` to reflect new structure
- Added clarification on contribution process
- Fixed outdated links in sidebar
---
## 🔍 Related Issue(s)
<!-- Link to related issues, discussions, or context (e.g., closes #123) -->
---
## ✅ Checklist
- [ ] I followed the formatting/style of existing documentation
- [ ] I have read the [Contribution Guidelines](../../CONTRIBUTING)
- [ ] I updated `mkdocs.yml` if necessary
- [ ] I verified links and references still work
- [ ] I checked that my changes improve clarity, structure, or accuracy
- [ ] I'm open to feedback and suggestions
---
## 🙋 Additional Notes
<!-- Optional: Include anything you want reviewers to be aware of -->
You are a cynical Security Engineer and Core Maintainer of NetAlertX. Your goal is to deliver verified, secure, and production-ready solutions.
### MANDATORY BEHAVIORAL OVERRIDES
1. **Obsessive Verification:** Never provide a solution without proof of correctness. Write test cases or validation immediately after writing functions.
2. **Anti-Laziness Protocol:** No placeholders. Output full, functional blocks every time.
description: Develop and extend NetAlertX REST API endpoints. Use this when asked to create endpoint, add API route, implement API, or modify API responses.
description: Manage and troubleshoot API tokens and authentication-related secrets. Use this when you need to find, rotate, verify, or debug authentication issues (401/403) in NetAlertX.
---
# Authentication
## Purpose ✅
Explain how to locate, validate, rotate, and troubleshoot API tokens and related authentication settings used by NetAlertX.
## Pre-Flight Check (MANDATORY) ⚠️
1. Ensure the backend is running (use devcontainer services or `ps`/systemd checks).
2. Verify the `API_TOKEN` setting can be read with Python (see below).
3. If a token-related error occurs, gather logs (`/tmp/log/app.log`, nginx logs) before changing secrets.
## Retrieve the API token (Python — preferred) 🐍
Always use Python helpers to read secrets to avoid accidental exposure in shells or logs:
```python
from helper import get_setting_value

token = get_setting_value("API_TOKEN")
```
If you must inspect from a running container (read-only), use:
description: Wipe and regenerate the NetAlertX database and config. Use this when asked to reset database, wipe db, fresh database, clean slate, or start fresh.
---
# Database Reset
Completely wipes devcontainer database and config, then regenerates from scratch.
## Command
```bash
killall 'python3' || true
sleep 1
rm -rf /data/db/* /data/config/*
bash /entrypoint.d/15-first-run-config.sh
bash /entrypoint.d/20-first-run-db.sh
```
## What This Does
1. Kills backend to release database locks
2. Deletes all files in `/data/db/` and `/data/config/`
description: Generate devcontainer configuration files. Use this when asked to generate devcontainer configs, update devcontainer template, or regenerate devcontainer.
---
# Devcontainer Config Generation
Generates devcontainer configs from the template. Must be run after changes to devcontainer configuration.
description: Control NetAlertX services inside the devcontainer. Use this when asked to start backend, start frontend, start nginx, start php-fpm, start crond, stop services, restart services, or check if services are running.
---
# Devcontainer Services
You operate inside the devcontainer. Do not use `docker exec`.
## Start Backend (Python)
```bash
/services/start-backend.sh
```
Backend runs with debugpy on port 5678 for debugging. Takes ~5 seconds to be ready.
description: Reprovision and reset the devcontainer environment. Use this when asked to re-run startup, reprovision, setup devcontainer, fix permissions, or reset runtime state.
---
# Devcontainer Setup
The setup script forcefully resets all runtime state. It is idempotent—every run wipes and recreates all relevant folders, symlinks, and files.
description: Build Docker images for testing or production. Use this when asked to build container, build image, docker build, build test image, or launch production container.
---
# Docker Build
## Build Unit Test Image
Required after container/Dockerfile changes. Tests won't see changes until image is rebuilt.
```bash
docker buildx build -t netalertx-test .
```
Build time: ~30 seconds (or ~90s if venv stage changes)
## Build and Launch Production Container
Before launching, stop devcontainer services first to free ports.
description: Clean up unused Docker resources. Use this when asked to prune docker, clean docker, remove unused images, free disk space, or docker cleanup. DANGEROUS operation. Requires human confirmation.
---
# Docker Prune
**DANGER:** This destroys containers, images, volumes, and networks. Any stopped container will be wiped and data will be lost.
description: Enables live interaction with the NetAlertX runtime. This skill configures the Model Context Protocol (MCP) connection, granting full API access for debugging, troubleshooting, and real-time operations including database queries, network scans, and device management.
---
# MCP Activation Skill
This skill configures the environment to expose the Model Context Protocol (MCP) server to AI agents running inside the devcontainer.
## Usage
This skill assumes you are already running within the NetAlertX devcontainer.
1. **Generate Configurations:**
Run the configuration generation script to extract the API Token and update the VS Code MCP settings.
Request the user to reload the VS Code window to activate the new tools.
> I have generated the MCP configuration. Please run the **'Developer: Reload Window'** command to activate the MCP server tools.
> In VS Code: open the Command Palette (Windows/Linux: Ctrl+Shift+P, macOS: Cmd+Shift+P), type Developer: Reload Window, press Enter — or click the Reload button if a notification appears. 🔁
> After you reload, tell me “Window reloaded” (or just “reloaded”) and I’ll continue.
## Why use this?
Access the live runtime API to perform operations that are not possible through static file analysis:
description: Create and run NetAlertX plugins. Use this when asked to create plugin, run plugin, test plugin, plugin development, or execute plugin script.
---
# Plugin Development
## Expected Workflow for Running Plugins
1. Read this skill document for context and instructions.
2. Find the plugin in `front/plugins/<code_name>/`.
3. Read the plugin's `config.json` and `script.py` to understand its functionality and settings.
4. Formulate and run the command: `python3 front/plugins/<code_name>/script.py`.
5. Retrieve the result from the plugin log folder (`/tmp/log/plugins/last_result.<PREF>.log`) quickly, as the backend may delete it after processing.
## Run a Plugin Manually
```bash
python3 front/plugins/<code_name>/script.py
```
Ensure `sys.path` includes `/app/front/plugins` and `/app/server` (as in the template).
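The bootstrap above can be sketched as follows (a minimal illustration mirroring the documented paths; the actual template may arrange this differently):

```python
import sys

# Ensure plugin helpers and server modules are importable,
# mirroring the template's documented sys.path bootstrap.
for path in ("/app/front/plugins", "/app/server"):
    if path not in sys.path:
        sys.path.append(path)
```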
## Plugin Structure
```text
front/plugins/<code_name>/
├── config.json # Manifest with settings
├── script.py # Main script
└── ...
```
## Manifest Location
`front/plugins/<code_name>/config.json`
- `code_name` == folder name
- `unique_prefix` drives settings and filenames (e.g., `ARPSCAN`)
## Settings Pattern
- `<PREF>_RUN`: execution phase
- `<PREF>_RUN_SCHD`: cron-like schedule
- `<PREF>_CMD`: script path
- `<PREF>_RUN_TIMEOUT`: timeout in seconds
- `<PREF>_WATCH`: columns to watch for changes
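As a hypothetical illustration of this pattern only (the real `config.json` schema differs — copy from `front/plugins/__template` rather than this sketch):

```json
{
  "code_name": "my_scanner",
  "unique_prefix": "MYSCAN",
  "settings": [
    { "function": "MYSCAN_RUN", "value": "schedule" },
    { "function": "MYSCAN_RUN_SCHD", "value": "*/10 * * * *" },
    { "function": "MYSCAN_RUN_TIMEOUT", "value": 300 }
  ]
}
```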
## Data Contract
Scripts write to `/tmp/log/plugins/last_result.<PREF>.log`
**Important:** The backend will almost immediately process this result file and delete it after ingestion. If you need to inspect the output, run the plugin and immediately retrieve the result file before the backend processes it.
Use `front/plugins/plugin_helper.py`:
```python
from plugin_helper import Plugin_Objects

plugin_objects = Plugin_Objects()
plugin_objects.add_object(...)  # during processing
plugin_objects.write_result_file()  # exactly once, at the end
```
## Execution Phases
- `once`: runs once at startup
- `schedule`: runs on cron schedule
- `always_after_scan`: runs after every scan
- `before_name_updates`: runs before name resolution
- `on_new_device`: runs when new device detected
- `on_notification`: runs when notification triggered
description: Navigate the NetAlertX codebase structure. Use this when asked about file locations, project structure, where to find code, or key paths.
---
# Project Navigation
## Key Paths
| Component | Path |
|-----------|------|
| Workspace root | `/workspaces/NetAlertX` |
| Backend entry | `server/__main__.py` |
| API server | `server/api_server/api_server_start.py` |
| Plugin system | `server/plugin.py` |
| Initialization | `server/initialise.py` |
| Frontend | `front/` |
| Frontend JS | `front/js/common.js` |
| Frontend PHP | `front/php/server/*.php` |
| Plugins | `front/plugins/` |
| Plugin template | `front/plugins/__template` |
| Database helpers | `server/db/db_helper.py` |
| Device model | `server/models/device_instance.py` |
| Messaging | `server/messaging/` |
| Workflows | `server/workflows/` |
## Architecture
NetAlertX uses a frontend–backend architecture: the frontend runs on **PHP + Nginx** (see `front/`), the backend is implemented in **Python** (see `server/`), and scheduled tasks are managed by a **supercronic** scheduler that runs periodic jobs.
## Runtime Paths
| Data | Path |
|------|------|
| Config (runtime) | `/data/config/app.conf` |
| Config (default) | `back/app.conf` |
| Database | `/data/db/app.db` |
| API JSON cache | `/tmp/api/*.json` |
| Logs | `/tmp/log/` |
| Plugin logs | `/tmp/log/plugins/` |
## Environment Variables
Use these `NETALERTX_*` environment variables instead of hardcoding paths. Examples:
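For instance, a path lookup with a fallback might read (the variable name `NETALERTX_DB` is illustrative, not authoritative — check the code under `server/` for the real names):

```python
import os

# Prefer the NETALERTX_* environment variable; fall back to the documented default.
# NOTE: "NETALERTX_DB" is an illustrative name, not a confirmed variable.
db_path = os.environ.get("NETALERTX_DB", "/data/db/app.db")
print(db_path)
```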
description: Load synthetic device data into the devcontainer. Use this when asked to load sample devices, seed data, import test devices, populate database, or generate test data.
---
# Sample Data Loading
Generates synthetic device inventory and imports it via the `/devices/import` API endpoint.
## Command
```bash
cd /workspaces/NetAlertX/.devcontainer/scripts
./load-devices.sh
```
## Environment
- `CSV_PATH`: defaults to `/tmp/netalertx-devices.csv`
## Prerequisites
- Backend must be running
- API must be accessible
## What It Does
1. Generates synthetic device records (MAC addresses, IPs, names, vendors)
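The generation step can be sketched roughly as follows (field names and CSV layout are illustrative assumptions — `load-devices.sh` defines the real format):

```python
import csv
import io
import random

def random_mac() -> str:
    # Locally administered MAC address (first octet 0x02).
    return "02:" + ":".join(f"{random.randint(0, 255):02x}" for _ in range(5))

def synth_devices(n: int) -> str:
    # Build a CSV of synthetic device records; header names are illustrative.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["mac", "ip", "name", "vendor"])
    for i in range(n):
        writer.writerow([random_mac(), f"192.168.1.{i + 10}", f"device-{i}", "AcmeCorp"])
    return buf.getvalue()

print(synth_devices(3))
```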
description: Run and debug tests in the NetAlertX devcontainer. Use this when asked to run tests, check test failures, debug failing tests, or execute pytest.
---
# Testing Workflow
## Pre-Flight Check (MANDATORY)
Before running any tests, always check for existing failures first:
1. Use the `testFailure` tool to gather current failure information
2. Review the failures to understand what's already broken
3. Only then proceed with test execution
## Running Tests
Use VS Code's testing interface or the `runTests` tool with appropriate parameters:
- To run all tests: invoke runTests without file filter
- To run specific test file: invoke runTests with the test file path
- To run failed tests only: invoke runTests with `--lf` flag
## Test Location
Tests live in `test/` directory. App code is under `server/`.
PYTHONPATH is preconfigured to include the following, which should meet all needs:
- `/app` # the primary location where Python runs in the production system
- `/app/server` # symbolic link to /workspaces/NetAlertX/server
- `/app/front/plugins` # symbolic link to /workspaces/NetAlertX/front/plugins
- `/opt/venv/lib/pythonX.Y/site-packages`
- `/workspaces/NetAlertX/test`
- `/workspaces/NetAlertX/server`
- `/workspaces/NetAlertX`
- `/usr/lib/pythonX.Y/site-packages`
## Authentication in Tests
Retrieve `API_TOKEN` using Python (not shell):
```python
from helper import get_setting_value

token = get_setting_value("API_TOKEN")
```
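When calling the API from tests, the token is attached to the request headers. The header scheme below is an assumption for illustration — verify what the NetAlertX API actually expects before relying on it:

```python
# Build request headers carrying the API token.
# NOTE: the "Authorization: Bearer ..." scheme is an assumption;
# confirm the expected header in the NetAlertX API documentation.
def auth_headers(token: str) -> dict:
    return {"Authorization": f"Bearer {token}"}

print(auth_headers("example-token"))
```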
## Troubleshooting 403 Forbidden
1. Ensure backend is running (use devcontainer-services skill)
"detail":"Generates devcontainer configs from the template. This must be run after changes to devcontainer to combine/merge them into the final config used by VS Code. Note- this has no bearing on the production or test image.",
"detail":"DANGER! Prunes all unused Docker resources (images, containers, volumes, networks). Any stopped container will be wiped and data will be lost. Use with caution.",
"detail":"The startup script runs directly after the container is started. It reprovisions permissions, links folders, and performs other setup tasks. Run this if you have made changes to the setup script or need to reprovision the container.",
"command":"docker buildx build -t netalertx-test . && echo '🧪 Unit Test Docker image built: netalertx-test'",
"detail":"This must be run after changes to the container. Unit testing will not register changes until after this image is rebuilt. It takes about 30 seconds to build unless changes to the venv stage are made. venv takes 90s alone.",
"presentation":{
"echo":true,
"reveal":"always",
"panel":"shared",
"showReuseMessage":false,
"group":"Any"
},
"problemMatcher":[],
"group":{
"kind":"build",
"isDefault":false
},
"icon":{
"id":"beaker",
"color":"terminal.ansiBlue"
}
},
{
"label":"[Dev Container] Wipe and Regenerate Database",
"command":"docker compose up -d --build --force-recreate",
"detail":"Before launching, ensure VSCode Ports are closed and services are stopped. Tasks: Stop Frontend & Backend Services & Remote: Close Unused Forwarded Ports to ensure proper operation of the new container.",
"options":{
"cwd":"/workspaces/NetAlertX"
},
"presentation":{
"echo":true,
"reveal":"always",
"panel":"shared",
"showReuseMessage":false
},
"problemMatcher":[],
"group":{
"kind":"build",
"isDefault":false
},
"icon":{
"id":"package",
"color":"terminal.ansiBlue"
}
},
{
"label":"Analyze PR Instructions",
"type":"shell",
"command":"python3",
"detail":"Pull all of Coderabbit's suggestions from a pull request. Requires `gh auth login` first.",
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
- Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include:
- The use of sexualized language or imagery, and sexual attention or advances of
any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email address,
without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official email address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at <jokob@duck.com>.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Ethical Use Clause (Project-Specific)
While NetAlertX is a tool designed to empower users with greater insight into their own networks, we expect and encourage all users to use this software **ethically and legally**.
- Do not use this software to scan or monitor networks without **explicit authorization**.
- Respect privacy, consent, and data protection laws applicable in your jurisdiction.
- Any use of NetAlertX for malicious surveillance, stalking, or unauthorized access is explicitly discouraged and may be grounds for removal from the community and revocation of support.
We reserve the right to take appropriate action to uphold the ethical integrity of this project.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
## Attribution
This Code of Conduct is adapted from the
[Contributor Covenant](https://www.contributor-covenant.org/), version 2.1,
The issue tracker is the preferred channel for bug reports, features requests and submitting pull requests.
Before submitting a new issue please spend a couple of minutes on research:
* Check [🛑 Common issues](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEBUG_TIPS.md#common-issues)
* Check [💡 Closed issues](https://github.com/jokob-sk/NetAlertX/issues?q=is%3Aissue+is%3Aclosed) if a similar issue was solved in the past.
## Pull-requests (PRs)
If you submit a PR please do check that your changes are backward compatible with existing installations. Existing features should be always preserved.
Get visibility of what's going on on your Wi-Fi/LAN network. Schedule scans for devices and port changes, and get alerts if unknown devices or changes are found. Write your own [Plugins](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins#readme) with auto-generated UI and a built-in notification system. Build out and easily maintain your network source of truth (NSoT).
<summary>❓ Why use Net<b>Alert</b><sup>x</sup>?</summary>
<hr>
Most of us don't know what's going on on our home network, but we want our family and data to be safe. _Command-line tools_ are great, but their output can be _hard to understand_ and act on if you are not a network specialist.
Net<b>Alert</b><sup>x</sup> gives you peace of mind. _Visualize and immediately report 📬_ what is going on in your network - this is the first step to enhance your _network security 🔐_.
Net<b>Alert</b><sup>x</sup> combines several network and other scanning tools 🔍 with notifications 📧 into one user-friendly package 📦.
Set up a _kill switch ☠_ for your network via a smart plug with the available [Home Assistant](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HOME_ASSISTANT.md) integration. Implement custom automations with the [CSV device Exports 📤](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/csv_backup), [Webhooks](https://github.com/jokob-sk/NetAlertX/blob/main/docs/WEBHOOK_N8N.md), or [API endpoints](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md) features.
Extend the app if you want to create your own scanner [Plugin](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins#readme) and handle the results and notifications in Net<b>Alert</b><sup>x</sup>.
Looking forward to your contributions if you decide to share your work with the community ❤.
Head to [https://netalertx.com/](https://netalertx.com/) for even more gifs and screenshots 📷.
</details>
## Scan Methods, Notifications, Integration, Extension system
| Features | Details |
|-------------|-------------|
| 🔍 | The app scans your network for **New devices**, **New connections** (re-connections), **Disconnections**, **"Always Connected" devices down**, device **IP changes**, and **Internet IP address changes**. Discovery & scan methods include: **arp-scan**, **Pi-hole - DB import**, **Pi-hole - DHCP leases import**, **Generic DHCP leases import**, **UNIFI controller import**, **SNMP-enabled router import**. Check the [Plugins](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins#readme) docs for more info on individual scans. |
| 📧 | Send notifications to more than 80 services, including Telegram via [Apprise](https://hub.docker.com/r/caronc/apprise), or use [Pushsafer](https://www.pushsafer.com/), [Pushover](https://www.pushover.net/), or [NTFY](https://ntfy.sh/). |
| 🧩 | Feed your data and device changes into [Home Assistant](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HOME_ASSISTANT.md), read [API endpoints](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md), or use [Webhooks](https://github.com/jokob-sk/NetAlertX/blob/main/docs/WEBHOOK_N8N.md) to set up custom automation flows. |
|➕ | Build your own scanners with the [Plugin system](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins#readme) |
Centralized network visibility and continuous asset discovery.
Monitor devices, detect change, and stay aware across distributed networks.
NetAlertX provides a centralized "Source of Truth" (NSoT) for network infrastructure. Maintain a real-time inventory of every connected device, identify Shadow IT and unauthorized hardware to maintain regulatory compliance, and automate compliance workflows across distributed sites.
NetAlertX is designed to bridge the gap between simple network scanning and complex SIEM tools, providing actionable insights without the overhead.
## Installation & Documentation
## Table of Contents
- [Quick Start](#quick-start)
- [Features](#features)
- [Documentation](#documentation)
- [Security \& Privacy](#security--privacy)
- [FAQ](#faq)
- [Troubleshooting Tips](#troubleshooting-tips)
- [Everything else](#everything-else)
## Quick Start
> [!WARNING]
> ⚠️ **Important:** The docker-compose has recently changed. Carefully read the [Migration guide](https://docs.netalertx.com/MIGRATION/?h=migrat#12-migration-from-netalertx-v25524) for detailed instructions.
Start NetAlertX in seconds with Docker:
```bash
docker run -d \
--network=host \
--restart unless-stopped \
-v /local_data_dir:/data \
-v /etc/localtime:/etc/localtime:ro \
--tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
-e PORT=20211 \
-e APP_CONF_OVERRIDE='{"GRAPHQL_PORT":"20214"}' \
ghcr.io/netalertx/netalertx:latest
```
Note: Your `/local_data_dir` should contain a `config` and `db` folder.
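That layout can be created up front (shown with a relative default so the sketch runs anywhere; substitute your actual host path, e.g. `/local_data_dir`):

```shell
# Create the folders NetAlertX expects inside the mounted data directory.
# DATA_DIR is your chosen host path; the relative default is only for illustration.
DATA_DIR="${DATA_DIR:-./local_data_dir}"
mkdir -p "$DATA_DIR/config" "$DATA_DIR/db"
```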
To deploy a containerized instance directly from the source repository, execute the documented Bash sequence:
```bash
# To customize: edit docker-compose.yaml and run that last command again
```
Need help configuring it? Check the [usage guide](https://docs.netalertx.com/README) or [full documentation](https://docs.netalertx.com/).
For Home Assistant users: [Click here to add NetAlertX](https://my.home-assistant.io/redirect/supervisor_add_addon_repository/?repository_url=https%3A%2F%2Fgithub.com%2Falexbelgium%2Fhassio-addons)
For other install methods, check the [installation docs](#documentation)
Continuous monitoring for unauthorized asset discovery, connection state changes, and IP address management (IPAM) drift. Discovery & scan methods include: **arp-scan**, **Pi-hole - DB import**, **Pi-hole - DHCP leases import**, **Generic DHCP leases import**, **UNIFI controller import**, **SNMP-enabled router import**. Check the [Plugins](https://docs.netalertx.com/PLUGINS#readme) docs for a full list of available plugins.
### Notification gateways
Send notifications to more than 80 services, including Telegram via [Apprise](https://hub.docker.com/r/caronc/apprise), or use native [Pushsafer](https://www.pushsafer.com/), [Pushover](https://www.pushover.net/), or [NTFY](https://ntfy.sh/) publishers.
### Integrations and Plugins
Feed your data and device changes into [Home Assistant](https://docs.netalertx.com/HOME_ASSISTANT), read [API endpoints](https://docs.netalertx.com/API), or use [Webhooks](https://docs.netalertx.com/WEBHOOK_N8N) to set up custom automation flows. You can also
build your own scanners with the [Plugin system](https://docs.netalertx.com/PLUGINS#readme) in as little as [15 minutes](https://www.youtube.com/watch?v=cdbxlwiWhv8).
### Workflows
The [workflows module](https://docs.netalertx.com/WORKFLOWS) automates IT governance by enforcing device categorization and cleanup policies. Whether you need to assign newly discovered devices to a specific Network Node, auto-group devices from a given vendor, unarchive a device if detected online, or automatically delete devices, this module provides the flexibility to tailor the automations to your needs.
Get notified about a new release, what new functionality you can use and about breaking changes.
NetAlertX scans your local network and can store metadata about connected devices. By default, all data is stored **locally**. No information is sent to external services unless you explicitly configure notifications or integrations.
![Follow and star][follow_star]
Compliance & Hardening:
- Run it behind a reverse proxy with authentication
- Use firewalls to restrict access to the web UI
- Regularly update to the latest version for security patches
- Role-Based Access Control (RBAC) via Reverse Proxy: Integrate with your existing SSO/Identity provider for secure dashboard access.
See [Security Best Practices](https://github.com/netalertx/NetAlertX/security) for more details.
### ⭐ Sponsors
Thank you to all the wonderful people who are sponsoring this project (private sponsors are hidden).
<!-- SPONSORS-LIST DO NOT MODIFY BELOW -->
| All Sponsors |
|---|
<!-- SPONSORS-LIST DO NOT MODIFY ABOVE -->
## FAQ
**Q: How do I monitor VLANs or remote subnets?**
A: Ensure the container has proper network access (e.g., use `--network host` on Linux). Also check that your scan method is properly configured in the UI.
**Q: What is the recommended deployment for high-availability?**
A: We recommend deploying via Docker with persistent volume mounts for database integrity and running behind a reverse proxy for secure access.
**Q: Will this send any data to the internet?**
A: No. All scans and data remain local, unless you set up cloud-based notifications.
**Q: Can I use this without Docker?**
A: You can install the application directly on your own hardware by following the [bare metal installation guide](https://docs.netalertx.com/HW_INSTALL).
**Q: Where is the data stored?**
A: In the `/data/config` and `/data/db` folders. Back up these folders regularly.
## Troubleshooting Tips
- Some scanners (e.g. ARP) may not detect devices on different subnets. See the [Remote networks guide](https://docs.netalertx.com/REMOTE_NETWORKS) for workarounds.
- Wi-Fi-only networks may require alternate scanners for accurate detection.
- Notification throttling may be needed for large networks to prevent spam.
- On some systems, elevated permissions (like `CAP_NET_RAW`) may be needed for low-level scanning.
Check the [GitHub Issues](https://github.com/netalertx/NetAlertX/issues) for the latest bug reports and solutions and consult [the official documentation](https://docs.netalertx.com/).
- [Zabbix](https://www.zabbix.com/) or [Nagios](https://www.nagios.org/) - Strong focus on infrastructure monitoring.
- [NetAlertX](https://netalertx.com) - The streamlined, discovery-focused alternative for real-time asset intelligence.
### 💙 Donations
Thank you to everyone who appreciates this tool and donates.
<details>
<summary>Click for more ways to donate</summary>
<hr>
| [](https://github.com/sponsors/jokob-sk) | [](https://www.buymeacoffee.com/jokobsk) | [](https://www.patreon.com/user?u=84385063) |
| --- | --- | --- |
| [](https://github.com/sponsors/jokob-sk) | [](https://www.buymeacoffee.com/jokobsk) |
</details>
### 🙏 Contributors
This project would be nothing without the amazing work of the community, with special thanks to:
> [pucherot/Pi.Alert](https://github.com/pucherot/Pi.Alert) (the original creator of PiAlert), [leiweibau](https://github.com/leiweibau/Pi.Alert) (dark mode and much more), [Macleykun](https://github.com/Macleykun) (help with Dockerfile clean-up), [vladaurosh](https://github.com/vladaurosh) (Alpine re-base help), [Final-Hawk](https://github.com/Final-Hawk) (help with NTFY, styling, and other fixes), [TeroRERO](https://github.com/terorero) (Spanish translations), [Data-Monkey](https://github.com/Data-Monkey) (split-up of the python.py file and more), [cvc90](https://github.com/cvc90) (Spanish translation and various UI work), to name a few. Check out all the [amazing contributors](https://github.com/netalertx/NetAlertX/graphs/contributors).
Proudly using [Weblate](https://hosted.weblate.org/projects/pialert/). Help out and suggest languages in the [online portal of Weblate](https://hosted.weblate.org/projects/pialert/core/).
### License
> GPL 3.0 | [Read more here](LICENSE.txt) | Source of the [animated GIF (Loading Animation)](https://commons.wikimedia.org/wiki/File:Loading_Animation.gif) | Source of the [selfhosted Fonts](https://github.com/adobe-fonts/source-sans)
Head to [https://netalertx.com/](https://netalertx.com/) for more gifs and screenshots 📷.
> [!NOTE]
> There is also an experimental 🧪 [bare-metal install](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HW_INSTALL.md) method available.
## 📕 Basic Usage
> [!WARNING]
> You will have to run the container on the `host` network and specify `SCAN_SUBNETS` unless you use other [plugin scanners](https://github.com/jokob-sk/NetAlertX/blob/main/front/plugins/README.md). The initial scan can take a few minutes, so please wait 5-10 minutes for the initial discovery to finish.
```yaml
docker run -d --rm --network=host \
-v local_path/config:/app/config \
-v local_path/db:/app/db \
--mount type=tmpfs,target=/app/api \
-e TZ=Europe/Berlin \
-e PORT=20211 \
jokobsk/netalertx:latest
```
See alternative [docker-compose examples](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_COMPOSE.md).
### Docker environment variables
| Variable | Description | Default |
| :------------- |:-------------| -----:|
| `PORT` |Port of the web interface | `20211` |
| `LISTEN_ADDR` |Set a specific listener IP address for the nginx web server (web interface). Useful when using multiple subnets, to hide the web interface from untrusted networks. | `0.0.0.0` |
|`TZ` |Time zone to display stats correctly. Find your time zone [here](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) | `Europe/Berlin` |
|`APP_CONF_OVERRIDE` | JSON override for settings, e.g. `{"SCAN_SUBNETS":"['192.168.1.0/24 --interface=eth1']","GRAPHQL_PORT":"20212"}` | `N/A` |
|`ALWAYS_FRESH_INSTALL` | If `true`, deletes the content of the `/db` & `/config` folders. For testing purposes. Can be coupled with [watchtower](https://github.com/containrrr/watchtower) to have an always freshly installed `netalertx`/`netalertx-dev` image. | `N/A` |
> You can override the default GraphQL port setting `GRAPHQL_PORT` (set to `20212`) by using the `APP_CONF_OVERRIDE` env variable.
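Since `APP_CONF_OVERRIDE` must be valid JSON, building the value programmatically avoids quoting mistakes. A sketch (setting names taken from the table above):

```python
import json

# Compose the override as a Python dict, then serialize it for use
# as the APP_CONF_OVERRIDE environment variable value.
override = {
    "SCAN_SUBNETS": "['192.168.1.0/24 --interface=eth1']",
    "GRAPHQL_PORT": "20212",
}
env_value = json.dumps(override)
print(f"APP_CONF_OVERRIDE='{env_value}'")
```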
### Docker paths
> [!NOTE]
> See also [Backup strategies](https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md).
| ✅ | `:/app/config` | Folder which will contain the `app.conf` & `devices.csv` ([read about devices.csv](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEVICES_BULK_EDITING.md)) files |
| ✅ | `:/app/db` | Folder which will contain the `app.db` database file |
| | `:/app/log` | Logs folder useful for debugging if you have issues setting up the container |
| | `:/app/api` | A simple [API endpoint](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md) containing static (but regularly updated) json and other files. |
| | `:/app/front/plugins/<plugin>/ignore_plugin` | Map a file `ignore_plugin` to ignore a plugin. Plugins can be soft-disabled via settings. More in the [Plugin docs](https://github.com/jokob-sk/NetAlertX/blob/main/front/plugins/README.md). |
| | `:/etc/resolv.conf` | Use a custom `resolv.conf` file for [better name resolution](https://github.com/jokob-sk/NetAlertX/blob/main/docs/REVERSE_DNS.md). |
> Use separate `db` and `config` directories, do not nest them.
### Initial setup
- If unavailable, the app generates a default `app.conf` and `app.db` file on the first run.
- The preferred way is to manage the configuration via the Settings section in the UI; if the UI is inaccessible, you can modify [app.conf](https://github.com/jokob-sk/NetAlertX/tree/main/back) in the `/app/config/` folder directly.
### Setting up scanners
You have to specify which network(s) should be scanned. This is done by entering subnets that are accessible from the host. If you use the default `ARPSCAN` plugin, you have to specify at least one valid subnet and interface in the `SCAN_SUBNETS` setting. See the documentation on [How to set up multiple SUBNETS, VLANs and what are limitations](https://github.com/jokob-sk/NetAlertX/blob/main/docs/SUBNETS.md) for troubleshooting and more advanced scenarios.
If you are running PiHole you can synchronize devices directly. Check the [PiHole configuration guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PIHOLE_GUIDE.md) for details.
> [!NOTE]
> You can bulk-import devices via the [CSV import method](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEVICES_BULK_EDITING.md).
#### 🧭 Community guides
You can read or watch several [community configuration guides](https://github.com/jokob-sk/NetAlertX/blob/main/docs/COMMUNITY_GUIDES.md) in Chinese, Korean, German, or French.
> Please note these might be outdated. Rely on official documentation first.
### **Common issues**
💡 Before creating a new issue, please check if a similar issue was [already resolved](https://github.com/jokob-sk/NetAlertX/issues?q=is%3Aissue+is%3Aclosed).
⚠ Check also common issues and [debugging tips](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEBUG_TIPS.md).
## ❤ Support me
| [](https://github.com/sponsors/jokob-sk) | [](https://www.buymeacoffee.com/jokobsk) | [](https://www.patreon.com/user?u=84385063) |
For Managed Service Providers (MSPs) and Network Operations Centers (NOC), "Eyes on Glass" monitoring requires a UI that is both self-healing (auto-refreshing) and focused only on critical data. By leveraging the **UI Settings Plugin**, you can transform NetAlertX from a management tool into a dedicated live monitor.

---
### 1. Configure Auto-Refresh for Live Monitoring
Static dashboards are the enemy of real-time response. NetAlertX allows you to force the UI to pull fresh data without manual page reloads.
* **Setting:** Locate the `UI_REFRESH` (or similar "Auto-refresh UI") setting within the **UI Settings plugin**.
* **Optimal Interval:** Set this between **60 to 120 seconds**.
* *Note:* Refreshing too frequently (e.g., <30s) on large networks can lead to high browser and server CPU usage.
> This video provides a visual walkthrough of the NetAlertX dashboard features, including how to map and visualize devices which is crucial for setting up a clear "Eyes on Glass" monitoring environment.
## ADVISORY: Best Practices for Monitoring Multiple Networks with NetAlertX
### 1. Define Monitoring Scope & Architecture
Effective multi-network monitoring starts with understanding how NetAlertX "sees" your traffic.
* **A. Understand Network Accessibility:** Local ARP-based scanning (**ARPSCAN**) only discovers devices on directly accessible subnets due to Layer 2 limitations. It cannot traverse VPNs or routed borders without specific configuration.
* **B. Plan Subnet & Scan Interfaces:** Explicitly configure each accessible segment in `SCAN_SUBNETS` with the corresponding interfaces.
* **C. Remote & Inaccessible Networks:** For networks unreachable via ARP, use these strategies:
* **Alternate Plugins:** Supplement discovery with [SNMPDSC](SNMPDSC) or [DHCP lease imports](https://docs.netalertx.com/PLUGINS/?h=DHCPLSS#available-plugins).
* **Centralized Multi-Tenant Management using Sync Nodes:** Run secondary NetAlertX instances on isolated networks and aggregate data using the **SYNC plugin**.
* **Manual Entry:** For static assets where only ICMP (ping) status is needed.
> [!TIP]
> Explore the [remote networks](./REMOTE_NETWORKS.md) documentation for more details on how to set up the approaches mentioned above.
---
### 2. Automating IT Asset Inventory with Workflows
[Workflows](./WORKFLOWS.md) are the "engine" of NetAlertX, reducing manual overhead as your device list grows.
* **A. Logical Ownership & VLAN Tagging:** Create a workflow triggered on **Device Creation** to:
1. Inspect the IP/Subnet.
2. Set `devVlan` or `devOwner` custom fields automatically.
* **B. Auto-Grouping:** Use conditional logic to categorize devices.
* *Example:* If `devLastIP == 10.10.20.*`, then `Set devLocation = "BranchOffice"`.
```json
{
  "name": "Assign Location - BranchOffice",
  "trigger": {
    "object_type": "Devices",
    "event_type": "update"
  },
  "conditions": [
    {
      "logic": "AND",
      "conditions": [
        {
          "field": "devLastIP",
          "operator": "contains",
          "value": "10.10.20."
        }
      ]
    }
  ],
  "actions": [
    {
      "type": "update_field",
      "field": "devLocation",
      "value": "BranchOffice"
    }
  ]
}
```
* **C. Sync Node Tracking:** When using multiple instances, give every sync hub node a descriptive `SYNC_node_name` so you can distinguish between sites.
> [!TIP]
> Always test new workflows in a "Staging" instance. A misconfigured workflow can trigger thousands of unintended updates across your database.
---
### 3. Notification Strategy: Low Noise, High Signal
A multi-network environment can generate significant "alert fatigue." Use a layered filtering approach.
| Level | Strategy | Recommended Action |
| --- | --- | --- |
| **Device** | Silence Flapping | Use "Skip repeated notifications" for unstable IoT devices. |
| **Plugin** | Tune Watchers | Only enable `_WATCH` on reliable plugins (e.g., ICMP/SNMP). |
| **Global** | Filter Sections | Limit `NTFPRCS_INCLUDED_SECTIONS` to `new_devices` and `down_devices`. |
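In `app.conf`, the global filter from the table above could look like this (an illustrative excerpt; NetAlertX stores list settings in Python-list syntax):

```
NTFPRCS_INCLUDED_SECTIONS=['new_devices','down_devices']
```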
> [!TIP]
> **Ignore Rules:** Maintain strict **Ignored MAC** (`NEWDEV_ignored_MACs`) and **Ignored IP** (`NEWDEV_ignored_IPs`) lists for guest networks or broadcast scanners to keep your logs clean.
---
### 4. UI Filters for Multi-Network Clarity
Don't let a massive device list overwhelm you. Use the [Multi-edit features](./DEVICES_BULK_EDITING.md) to categorize devices and create focused views:
* **By Zone:** Filter by the "Location", "Site" or "Sync Node" you set up in Section 2.
* **By Criticality:** Use the custom device Type field to separate "Core Infrastructure" from "Ephemeral Clients."
* **By Status:** Use predefined views specifically for "Devices currently Down" to act as a Network Operations Center (NOC) dashboard.
> [!TIP]
> If you are providing services as a Managed Service Provider (MSP), customize your default UI to be exactly how you need it by hiding parts of the UI that you are not interested in, or by configuring an auto-refreshed screen monitoring your most important clients. See the [Eyes on glass](./ADVISORY_EYES_ON_GLASS.md) advisory for more details.
---
### 5. Operational Stability & Sync Health
* **Health Checks:** Regularly monitor the [Logs](https://docs.netalertx.com/LOGGING/?h=logs) to ensure remote nodes are reporting in.
* **Backups:** Use the **CSV Devices Backup** plugin. Standardize your workflow templates and [back up](./BACKUPS.md) your `/config` folders so that if a node fails, you can redeploy it with the same logic instantly.
### 6. Optimize Performance
As your environment grows, tuning the underlying engine is vital to maintain a snappy UI and reliable discovery cycles.
* **Plugin Scheduling:** Avoid "Scan Storms" by staggering plugin execution. Running intensive tasks like `NMAP` or `MASS_DNS` simultaneously can spike CPU and cause database locks.
* **Database Health:** Large-scale monitoring generates massive event logs. Use the **[DBCLNP (Database Cleanup)](https://docs.netalertx.com/PLUGINS/#dbclnp)** plugin to prune old records and keep the SQLite database performant.
* **Resource Management:** For high-device counts, consider increasing the memory limit for the container and utilizing `tmpfs` for temporary files to reduce SD card/disk I/O bottlenecks.
> [!IMPORTANT]
> For a deep dive into hardware requirements, database vacuuming, and specific environment variables for high-load instances, refer to the full **[Performance Optimization Guide](https://docs.netalertx.com/PERFORMANCE/)**.
---
### Summary Checklist
* [ ] **Discovery:** Are all subnets explicitly defined?
* [ ] **Automation:** Do new devices get auto-assigned to a VLAN/Owner?
* [ ] **Noise Control:** Are transient "Down" alerts delayed via `NTFPRCS_alert_down_time`?
* [ ] **Remote Sites:** Is the SYNC plugin authenticated and heartbeat-active?
This API provides programmatic access to **devices, events, sessions, metrics, network tools, and sync** in NetAlertX. It is implemented as a **REST and GraphQL server**. All requests must be authorized: either execute them in a logged-in browser session, or pass the value of the `API_TOKEN` setting as a bearer token. For example, to authorize a GraphQL request, use an `Authorization: Bearer <API_TOKEN>` header as in the example below:
The API server runs on `0.0.0.0:<graphql_port>` with **CORS enabled** for all main endpoints.
Endpoint URL: `php/server/query_graphql.php`
Host: `same as front end (web ui)`
Port: `20212` or as defined by the `GRAPHQL_PORT` setting
CORS configuration: You can limit allowed CORS origins with the `CORS_ORIGINS` environment variable. Set it to a comma-separated list of origins (for example: `CORS_ORIGINS="https://example.com,http://localhost:3000"`). The server parses this list at startup and only allows origins that begin with `http://` or `https://`. If `CORS_ORIGINS` is unset or parses to an empty list, the API falls back to a safe development default list (localhosts) and will include `*` as a last-resort permissive origin.
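For containerized deployments, `CORS_ORIGINS` can be set alongside the other environment variables, for example in `docker-compose.yml` (an illustrative excerpt; the service name and remaining keys depend on your setup):

```yaml
services:
  netalertx:
    environment:
      - CORS_ORIGINS=https://example.com,http://localhost:3000
```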
## Authentication
All endpoints require an API token provided in the HTTP headers:
```http
Authorization: Bearer <API_TOKEN>
```
---
### Example Query to Fetch Devices
First, let's define the GraphQL query to fetch devices with pagination and sorting options.
```graphql
query GetDevices($options: PageQueryOptionsInput) {
  devices(options: $options) {
    devices {
      rowid
      devMac
      devName
      devOwner
      devType
      devVendor
      devLastConnection
      devStatus
    }
    count
  }
}
```
### `curl` Command
You can use the following `curl` command to execute the query.
1. **Request Payload**:
   - The `query` parameter contains the GraphQL query as a string.
   - The `variables` parameter contains the input variables for the query.
2. **Query Variables**:
   - `page`: Specifies the page number of results to fetch.
   - `limit`: Specifies the number of results per page.
   - `sort`: Specifies the sorting options, with `field` being the field to sort by and `order` being the sort order (`asc` for ascending or `desc` for descending).
   - `search`: A search term to filter the devices.
   - `status`: The status filter to apply (valid values are `my_devices` (determined by the `UI_MY_DEVICES` setting), `connected`, `favorites`, `new`, `down`, `archived`, `offline`).
3. **`curl` Command**:
   - The `-X POST` option specifies that we are making a POST request.
   - The `-H "Content-Type: application/json"` option sets the content type of the request to JSON.
   - The `-d` option provides the request payload, which includes the GraphQL query and variables.
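Putting the pieces together, a complete request might look like this (the host, port, and exact `variables` shape are assumptions based on the parameter descriptions above — adjust to your instance):

```shell
# Build the request payload: the GraphQL query plus its variables.
PAYLOAD='{
  "query": "query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devStatus } count } }",
  "variables": { "options": { "page": 1, "limit": 10, "sort": [{ "field": "devName", "order": "asc" }], "status": "connected" } }
}'

# POST it to the GraphQL endpoint (port 20212 unless GRAPHQL_PORT is changed).
curl -s --max-time 5 -X POST "http://localhost:20212/graphql" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || true
```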
### Sample Response
The response will be in JSON format, similar to the following:
```json
{
  "data": {
    "devices": {
      "devices": [
        {
          "rowid": 1,
          "devMac": "00:11:22:33:44:55",
          "devName": "Device 1",
          "devOwner": "Owner 1",
          "devType": "Type 1",
          "devVendor": "Vendor 1",
          "devLastConnection": "2025-01-01T00:00:00Z",
          "devStatus": "connected"
        },
        {
          "rowid": 2,
          "devMac": "66:77:88:99:AA:BB",
          "devName": "Device 2",
          "devOwner": "Owner 2",
          "devType": "Type 2",
          "devVendor": "Vendor 2",
          "devLastConnection": "2025-01-02T00:00:00Z",
          "devStatus": "connected"
        }
      ],
      "count": 2
    }
  }
}
```
If the token is missing or invalid, the server will return HTTP Status **403 Forbidden**:
```json
{
  "success": false,
  "message": "ERROR: Not authorized",
  "error": "Forbidden"
}
```
## API Endpoint: JSON files
These API endpoints are static files that are periodically updated.
Port: `20211` or as defined by the `$PORT` docker environment variable (same as the port for the web UI)
### When are the endpoints updated
The files are regenerated whenever the objects they contain change.
### Location of the endpoints
In the container, these files are located under the `/app/api/` folder. You can access them via the `/php/server/query_json.php?file=user_notifications.json` endpoint.
### Available endpoints
You can access the following files:
| File name | Description |
|----------------------|----------------------|
| `notification_text.txt` | The plain text version of the last notification. |
| `notification_text.html` | The full HTML of the last email notification. |
| `notification_json_final.json` | The json version of the last notification (e.g. used for webhooks - [sample JSON](https://github.com/jokob-sk/NetAlertX/blob/main/front/report_templates/webhook_json_sample.json)). |
| `table_devices.json` | The current (at the time of the last update as mentioned above on this page) state of all of the available Devices detected by the app. |
| `table_plugins_events.json` | The list of the unprocessed (pending) notification events (plugins_events DB table). |
| `table_plugins_history.json` | The list of notification events history. |
| `table_plugins_objects.json` | The content of the plugins_objects table. Find more info on the [Plugin system here](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins)|
| `language_strings.json` | The content of the language_strings table, which in turn is loaded from the plugins `config.json` definitions. |
| `table_custom_endpoint.json` | A custom endpoint generated by the SQL query specified by the `API_CUSTOM_SQL` setting. |
| `table_settings.json` | The content of the settings table. |
| `app_state.json` | Contains the current application state. |
How up to date these files are depends on your settings.
### JSON Data format
The endpoints starting with the `table_` prefix contain most, if not all, data contained in the corresponding database table. The common format for those is:
```json
{
  "data": [
    {
      "db_column_name": "data",
      "db_column_name2": "data2"
    },
    {
      "db_column_name": "data3",
      "db_column_name2": "data4"
    }
  ]
}
```
The `table_devices.json` endpoint follows the same format, with one object per device (database row).
## Base URL
Port: `20211` or as defined by the `$PORT` docker environment variable (same as the port for the web UI).
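The `table_` payloads described above can be consumed with standard shell tooling; a minimal sketch using an inline sample (field names are illustrative):

```shell
# A minimal payload in the common `table_` format
SAMPLE='{"data": [{"devName": "Router", "devLastIP": "192.168.1.1"}]}'

# Extract the device names (jq does this more robustly: jq -r '.data[].devName')
printf '%s\n' "$SAMPLE" | grep -o '"devName": *"[^"]*"'
# → "devName": "Router"
```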
NetAlertX includes an **MCP (Model Context Protocol) Server Bridge** that provides AI assistants access to NetAlertX functionality through standardized tools. MCP endpoints are available at `/mcp/sse/*` paths and mirror the functionality of standard REST endpoints:
The **Database Query API** provides direct, low-level access to the NetAlertX database. It allows **read, write, update, and delete** operations against tables, using **base64-encoded** SQL or structured parameters.
> [!WARNING]
> This API is primarily used internally to generate and render the application UI. These endpoints are low-level and powerful, and should be used with caution. Wherever possible, prefer the [standard API endpoints](API.md). Invalid or unsafe queries can corrupt data.
> If you need data in a specific format that is not already provided, please open an issue or pull request with a clear, broadly useful use case. This helps ensure new endpoints benefit the wider community rather than relying on raw database queries.
---
## Authentication
All `/dbquery/*` endpoints require an API token in the HTTP headers:
```http
Authorization: Bearer <API_TOKEN>
```
If the token is missing or invalid (HTTP 403):
```json
{
"success":false,
"message":"ERROR: Not authorized",
"error":"Forbidden"
}
```
---
## Endpoints
### 1. `POST /dbquery/read`
Execute a **read-only** SQL query (e.g., `SELECT`). The `rawSql` value is the base64-encoded SQL statement (the example payload decodes to `SELECT * FROM DEVICES`).
#### `curl` Example
```bash
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/dbquery/read" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -d '{
    "rawSql": "U0VMRUNUICogRlJPTSBERVZJQ0VT"
  }'
```
---
### 2. `POST /dbquery/update` (safer than `/dbquery/write`)
Update rows in a table by `columnName` + `id`. `/dbquery/update` is parameterized to reduce the risk of SQL injection, while `/dbquery/write` executes raw SQL directly.
#### Request Body
```json
{
"columnName":"devMac",
"id":["AA:BB:CC:DD:EE:FF"],
"dbtable":"Devices",
"columns":["devName","devOwner"],
"values":["Laptop","Alice"]
}
```
#### Response
```json
{"success":true,"updated_count":1}
```
#### `curl` Example
```bash
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/dbquery/update" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -d '{
    "columnName": "devMac",
    "id": ["AA:BB:CC:DD:EE:FF"],
    "dbtable": "Devices",
    "columns": ["devName", "devOwner"],
    "values": ["Laptop", "Alice"]
  }'
```
---
### 3. `POST /dbquery/write`
Execute a **write query** (`INSERT`, `UPDATE`, `DELETE`).
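No example ships in this section; the sketch below assumes `/dbquery/write` accepts the same base64-encoded `rawSql` field as `/dbquery/read` — verify against your version before use:

```shell
# The SQL statement to execute, base64-encoded as required by the dbquery endpoints
SQL="UPDATE Devices SET devOwner = 'Alice' WHERE devMac = 'AA:BB:CC:DD:EE:FF'"
RAW_SQL=$(printf '%s' "$SQL" | base64 | tr -d '\n')

curl -s --max-time 5 -X POST "http://localhost:20212/dbquery/write" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d "{\"rawSql\": \"$RAW_SQL\"}" || true
```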
Manage a **single device** by its MAC address. Operations include retrieval, updates, deletion, resetting properties, and copying data between devices. All endpoints require **authorization** via Bearer token.
---
## 1. Retrieve Device Details
* **GET** `/device/<mac>`
Fetch all details for a single device, including:
* Computed status (`devStatus`) → `On-line`, `Off-line`, or `Down`
* Session and event counts (`devSessions`, `devEvents`, `devDownAlerts`)
* Presence hours (`devPresenceHours`)
* Children devices (`devChildrenDynamic`) and NIC children (`devChildrenNicsDynamic`)
**Special case**: `mac=new` returns a template for a new device with default values.
**Response** (success):
```json
{
"devMac":"AA:BB:CC:DD:EE:FF",
"devName":"Net - Huawei",
"devOwner":"Admin",
"devType":"Router",
"devVendor":"Huawei",
"devStatus":"On-line",
"devSessions":12,
"devEvents":5,
"devDownAlerts":1,
"devPresenceHours":32,
"devChildrenDynamic":[...],
"devChildrenNicsDynamic":[...],
...
}
```
**Error Responses**:
* Device not found → HTTP 404
* Unauthorized → HTTP 403
**MCP Integration**: Available as `get_device_info` and `set_device_alias` tools. See [MCP Server Bridge API](API_MCP.md).
---
## 2. Update Device Fields
* **POST** `/device/<mac>`
Create or update a device record.
**Request Body**:
```json
{
"devName":"New Device",
"devOwner":"Admin",
"createNew":true
}
```
**Behavior**:
* If `createNew=true` → creates a new device
* Otherwise → updates existing device fields
**Response**:
```json
{
"success":true
}
```
**Error Responses**:
* Unauthorized → HTTP 403
---
## 3. Delete a Device
* **DELETE** `/device/<mac>/delete`
Deletes the device with the given MAC.
**Response**:
```json
{
"success":true
}
```
**Error Responses**:
* Unauthorized → HTTP 403
---
## 4. Delete All Events for a Device
* **DELETE** `/device/<mac>/events/delete`
Removes all events associated with a device.
**Response**:
```json
{
"success":true
}
```
---
## 5. Reset Device Properties
* **POST** `/device/<mac>/reset-props`
Resets the device's custom properties to default values.
**Request Body**: Optional JSON for additional parameters.
**Response**:
```json
{
"success":true
}
```
---
## 6. Copy Device Data
* **POST** `/device/copy`
Copy all data from one device to another. If a device exists with `macTo`, it is replaced.
**Request Body**:
```json
{
"macFrom":"AA:BB:CC:DD:EE:FF",
"macTo":"11:22:33:44:55:66"
}
```
**Response**:
```json
{
"success":true,
"message":"Device copied from AA:BB:CC:DD:EE:FF to 11:22:33:44:55:66"
}
```
**Error Responses**:
* Missing `macFrom` or `macTo` → HTTP 400
* Unauthorized → HTTP 403
---
## 7. Update a Single Column
* **POST** `/device/<mac>/update-column`
Update one specific column for a device.
**Request Body**:
```json
{
"columnName":"devName",
"columnValue":"Updated Device Name"
}
```
**Response** (success):
```json
{
"success":true
}
```
**Error Responses**:
* Device not found → HTTP 404
* Missing `columnName` or `columnValue` → HTTP 400
* Unauthorized → HTTP 403
---
## Example `curl` Requests
**Get Device Details**:
```bash
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF" \
  -H "Authorization: Bearer <API_TOKEN>"
```
**Update Device Fields**:
```bash
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"devName": "New Device", "devOwner": "Admin"}'
```
The Devices Collection API provides operations to **retrieve, manage, import/export, and filter devices** in bulk. All endpoints require **authorization** via Bearer token.
---
## Endpoints
### 1. Get All Devices
* **GET** `/devices`
Retrieves all devices from the database.
**Response** (success):
```json
{
"success":true,
"devices":[
{
"devName":"Net - Huawei",
"devMAC":"AA:BB:CC:DD:EE:FF",
"devIP":"192.168.1.1",
"devType":"Router",
"devFavorite":0,
"devStatus":"online"
},
...
]
}
```
**Error Responses**:
* Unauthorized → HTTP 403
---
### 2. Delete Devices by MAC
* **DELETE** `/devices`
Deletes devices by MAC address. Supports exact matches or wildcard `*`.
**Request Body**:
```json
{
"macs":["AA:BB:CC:DD:EE:FF","11:22:33:*"]
}
```
**Behavior**:
* If `macs` is omitted or `null` → deletes **all devices**.
* Wildcards `*` match multiple devices.
**Response**:
```json
{
"success":true,
"deleted_count":5
}
```
**Error Responses**:
* Unauthorized → HTTP 403
---
### 3. Delete Devices with Empty MACs
* **DELETE** `/devices/empty-macs`
Removes all devices where MAC address is null or empty.
**Response**:
```json
{
"success":true,
"deleted":3
}
```
---
### 4. Delete Unknown Devices
* **DELETE** `/devices/unknown`
Deletes devices with names marked as `(unknown)` or `(name not found)`.
**Response**:
```json
{
"success":true,
"deleted":2
}
```
---
### 5. Export Devices
* **GET** `/devices/export` or `/devices/export/<format>`
Exports all devices in **CSV** (default) or **JSON** format.
**Query Parameter / URL Parameter**:
* `format` (optional) → `csv` (default) or `json`
**CSV Response**:
* Returns as a downloadable CSV file: `Content-Disposition: attachment; filename=devices.csv`
The Device Field Lock/Unlock feature allows users to lock specific device fields to prevent plugin overwrites. This is part of the authoritative device field update system that ensures data integrity while maintaining flexibility for user customization.
## Concepts
### Tracked Fields
Only certain device fields support locking. These are the fields that can be modified by both plugins and users:
- `devName` - Device name/hostname
- `devVendor` - Device vendor/manufacturer
- `devFQDN` - Fully qualified domain name
- `devSSID` - Network SSID
- `devParentMAC` - Parent device MAC address
- `devParentPort` - Parent device port
- `devParentRelType` - Parent device relationship type
- `devVlan` - VLAN identifier
### Field Source Tracking
Every tracked field has an associated `*Source` field that indicates where the current value originated:
- `NEWDEV` - Created via the UI as a new device
- `USER` - Manually edited by a user
- `LOCKED` - Field is locked; prevents any plugin overwrites
- Plugin name (e.g., `UNIFIAPI`, `PIHOLE`) - Last updated by this plugin
### Locking Mechanism
When a field is **locked**, its source is set to `LOCKED`. This prevents plugin overwrites based on the authorization logic:
1. Plugin wants to update field
2. Authoritative handler checks field's `*Source` value
3. If `*Source` == `LOCKED`, plugin update is rejected
4. User can still manually unlock the field
When a field is **unlocked**, its source is set to `NEWDEV`, allowing plugins to resume updates.
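The decision logic above can be sketched as a tiny function (an illustrative re-implementation, not the actual NetAlertX code; per the authorization handler below, a `USER` source also rejects plugin writes):

```shell
# Returns success (0) if a plugin is allowed to overwrite a field with this source
plugin_may_update() {
  case "$1" in
    LOCKED|USER) return 1 ;;  # user-controlled values reject plugin writes
    *)           return 0 ;;  # NEWDEV or a previous plugin name: overwrite allowed
  esac
}

plugin_may_update "LOCKED" && echo "overwrite" || echo "rejected"
# → rejected
```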
## Endpoints
### Lock or Unlock a Field
```
POST /device/{mac}/field/lock
Authorization: Bearer {API_TOKEN}
Content-Type: application/json
{
"fieldName": "devName",
"lock": true
}
```
#### Parameters
- `mac` (path, required): Device MAC address (e.g., `AA:BB:CC:DD:EE:FF`)
- `fieldName` (body, required): Name of the field to lock/unlock. Must be one of the tracked fields listed above.
- `lock` (body, required): Boolean. `true` to lock, `false` to unlock.
#### Responses
**Success (200)**
```json
{
"success":true,
"message":"Field devName locked",
"fieldName":"devName",
"locked":true
}
```
**Bad Request (400)**
```json
{
"success":false,
"error":"fieldName is required"
}
```
```json
{
"success":false,
"error":"Field 'devInvalidField' cannot be locked"
}
```
**Unauthorized (403)**
```json
{
"success":false,
"error":"Unauthorized"
}
```
**Not Found (404)**
```json
{
"success":false,
"error":"Device not found"
}
```
## Examples
### Lock a Device Name
Prevent the device name from being overwritten by plugins:
```bash
curl -X POST https://your-netalertx.local/api/device/AA:BB:CC:DD:EE:FF/field/lock \
  -H "Authorization: Bearer your-api-token" \
  -H "Content-Type: application/json" \
-d '{
"fieldName": "devName",
"lock": true
}'
```
### Unlock a Field
Allow plugins to resume updating a field:
```bash
curl -X POST https://your-netalertx.local/api/device/AA:BB:CC:DD:EE:FF/field/lock \
  -H "Authorization: Bearer your-api-token" \
  -H "Content-Type: application/json" \
-d '{
"fieldName": "devName",
"lock": false
}'
```
## UI Integration
The Device Edit form displays lock/unlock buttons for all tracked fields:
1. **Lock Button** (🔒): Click to prevent plugin overwrites
2. **Unlock Button** (🔓): Click to allow plugin overwrites again
3. **Source Indicator**: Shows current field source (USER, LOCKED, NEWDEV, or plugin name)
### Authorization Handler
The authoritative field update logic prevents plugin overwrites:
1. Plugin provides new value for field via plugin config `SET_ALWAYS`/`SET_EMPTY`
2. Authoritative handler (in DeviceInstance) checks `{field}Source` value
3. If source is `LOCKED` or `USER`, plugin update is rejected
4. If source is `NEWDEV` or plugin name, plugin update is accepted
GraphQL queries are **read-optimized for speed**. Data may be slightly out of date until the file system cache refreshes. The GraphQL endpoints allow you to access the following objects:
* Devices
* Settings
* Language Strings (LangStrings)
## Endpoints
* **GET** `/graphql`
Returns a simple status message (useful for browser or debugging).
* **POST** `/graphql`
Execute GraphQL queries against the `devicesSchema`.
## Settings Query
The **Settings query** returns application settings. A truncated sample response (one of two returned settings shown):
```json
{
  "data": {
    "settings": {
      "settings": [
        {
          "setDescription": "Types of devices considered as network infrastructure.",
          "setType": "list",
          "setOptions": "[\"Router\",\"Switch\",\"AP\"]",
          "setGroup": "Network",
          "setValue": "[\"Router\",\"Switch\"]",
          "setEvents": null,
          "setOverriddenByEnv": true
        }
      ],
      "count": 2
    }
  }
}
```
---
## LangStrings Query
The **LangStrings query** provides access to localized strings. It supports filtering by `langCode` and `langStringKey`. If the requested string is missing or empty, you can optionally fall back to `en_us`. A truncated sample response (the string shown falls back to `en_us` because the translation is empty):
```json
{
  "data": {
    "langStrings": {
      "langStrings": [
        {
          "langStringText": "Other, non-device scanner plugins that are currently enabled."
        }
      ]
    }
  }
}
```
---
## Notes
* Device, settings, and LangStrings queries can be combined in **one request** since GraphQL supports batching.
* The `fallback_to_en` feature ensures the UI always has a value even if a translation is missing.
* Data is **cached in memory** per JSON file; changes to language or plugin files will only refresh after the cache detects a file modification.
* The `setOverriddenByEnv` flag helps identify setting values that are locked at container runtime.
* The schema is **read-only** — updates must be performed through other APIs or configuration management. See the other [API](API.md) endpoints for details.
Manage or purge application log files stored under `/app/log` and manage the execution queue. These endpoints are primarily used for maintenance tasks such as clearing accumulated logs or adding system actions without restarting the container.
Only specific, pre-approved log files can be purged for security and stability reasons.
---
## Delete (Purge) a Log File
* **DELETE** `/logs?file=<log_file>` → Purge the contents of an allowed log file.
**Query Parameter:**
* `file` → The name of the log file to purge (e.g., `app.log`, `stdout.log`)
**Allowed Files:**
```
app.log
IP_changes.log
stdout.log
stderr.log
app.php_errors.log
execution_queue.log
db_is_locked.log
```
**Authorization:**
Requires a valid API token in the `Authorization` header.
The **MCP (Model Context Protocol) Server Bridge** provides AI assistants with standardized access to NetAlertX functionality through tools and server-sent events. This enables AI systems to interact with your network monitoring data in real-time.
---
## Overview
The MCP Server Bridge exposes NetAlertX functionality as **MCP Tools** that AI assistants can call to:
- Search and retrieve device information
- Trigger network scans
- Get network topology and events
- Wake devices via Wake-on-LAN
- Access open port information
- Set device aliases
All MCP endpoints mirror the functionality of standard REST endpoints but are optimized for AI assistant integration.
### Data Flow
```mermaid
graph TD
    A[AI Client] -->|SSE Connection| B[SSE Endpoint<br/>/mcp/sse]
    B -->|JSON-RPC Messages| C[MCP Bridge<br/>api_server_start.py]
    C -->|Tool Calls| D[NetAlertX Tools<br/>Device/Network APIs]
    D -->|Response Data| C
    C -->|JSON Response| B
    B -->|Stream Events| A
```
### MCP Tool Integration
```mermaid
sequenceDiagram
participant AI as AI Assistant
participant MCP as MCP Server (:20212)
participant API as NetAlertX API (:20211)
participant DB as SQLite Database
AI->>MCP: 1. Connect via SSE
MCP-->>AI: 2. Session established
AI->>MCP: 3. tools/list request
MCP->>API: 4. GET /mcp/sse/openapi.json
API-->>MCP: 5. Available tools spec
MCP-->>AI: 6. Tool definitions
AI->>MCP: 7. tools/call: search_devices
MCP->>API: 8. POST /devices/search
API->>DB: 9. Query devices
DB-->>API: 10. Device data
API-->>MCP: 11. JSON response
MCP-->>AI: 12. Tool result
```
### Component Architecture
```mermaid
graph LR
subgraph "AI Client"
A[Claude Desktop]
B[Custom MCP Client]
end
subgraph "NetAlertX MCP Server (:20212)"
C[SSE Endpoint<br/>/mcp/sse]
D[Message Handler<br/>/mcp/messages]
E[OpenAPI Spec<br/>/mcp/sse/openapi.json]
end
subgraph "NetAlertX API Server (:20211)"
F[Device APIs<br/>/devices/*]
G[Network Tools<br/>/nettools/*]
H[Events API<br/>/events/*]
end
subgraph "Backend"
I[SQLite Database]
J[Network Scanners]
K[Plugin System]
end
A -.->|Bearer Auth| C
B -.->|Bearer Auth| C
C --> D
C --> E
D --> F
D --> G
D --> H
F --> I
G --> J
H --> I
```
---
## Authentication
MCP endpoints use the same **Bearer token authentication** as REST endpoints:
```http
Authorization: Bearer <API_TOKEN>
```
Unauthorized requests return HTTP 403:
```json
{
"success":false,
"message":"ERROR: Not authorized",
"error":"Forbidden"
}
```
---
## MCP Connection Endpoint
### Server-Sent Events (SSE)
* **GET/POST** `/mcp/sse`
Main MCP connection endpoint for AI clients. Establishes a persistent connection using Server-Sent Events for real-time communication between AI assistants and NetAlertX.
**Connection Example**:
```javascript
// Note: the browser-native EventSource API does not accept custom headers;
// an SSE client library or polyfill is required to send the bearer token this way.
const eventSource = new EventSource('/mcp/sse', {
  headers: {
    'Authorization': 'Bearer <API_TOKEN>'
  }
});
eventSource.onmessage = function (event) {
  const response = JSON.parse(event.data);
  console.log('MCP Response:', response);
};
```
---
## OpenAPI Specification
### Get MCP Tools Specification
* **GET** `/mcp/sse/openapi.json`
Returns the OpenAPI specification for all available MCP tools, describing the parameters and schemas for each tool.
**Response**:
```json
{
"openapi":"3.0.0",
"info":{
"title":"NetAlertX Tools",
"version":"1.1.0"
},
"servers":[{"url":"/"}],
"paths":{
"/devices/by-status":{
"post":{"operationId":"list_devices"}
},
"/device/{mac}":{
"post":{"operationId":"get_device_info"}
},
"/devices/search":{
"post":{"operationId":"search_devices"}
}
}
}
```
---
## Available MCP Tools
### Device Management Tools
| Tool | Endpoint | Description |
|------|----------|-------------|
| `list_devices` | `/devices/by-status` | List devices by online status |
| `get_device_info` | `/device/{mac}` | Get detailed device information |
| `search_devices` | `/devices/search` | Search devices by MAC, name, or IP |
| `get_latest_device` | `/devices/latest` | Get most recently connected device |
| `set_device_alias` | `/device/{mac}/set-alias` | Set device friendly name |
### Network Tools
| Tool | Endpoint | Description |
|------|----------|-------------|
| `trigger_scan` | `/nettools/trigger-scan` | Trigger network discovery scan to find new devices. |
| `run_nmap_scan` | `/nettools/nmap` | Perform NMAP scan on a target to identify open ports. |
| `get_open_ports` | `/device/open_ports` | Get stored NMAP open ports. Use `run_nmap_scan` first if empty. |
| `wol_wake_device` | `/nettools/wakeonlan` | Wake device using Wake-on-LAN |
| `get_network_topology` | `/devices/network/topology` | Get network topology map |
### Event & Monitoring Tools
| Tool | Endpoint | Description |
|------|----------|-------------|
| `get_recent_alerts` | `/events/recent` | Get events from last 24 hours |
| `get_last_events` | `/events/last` | Get 10 most recent events |
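An AI client invokes one of these tools with a standard MCP `tools/call` JSON-RPC message; a sketch (the `arguments` shape for `search_devices` is an assumption — consult the OpenAPI spec above for the actual schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_devices",
    "arguments": { "query": "192.168.1" }
  }
}
```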