Uses the new run_docker_tests.sh script which is self-contained and handles all dependencies and test execution within a Docker container. This ensures that the CI environment is consistent with the local devcontainer environment.
Fixes an issue where the job name 'test' was considered invalid. Renamed to 'docker-tests'.
Ensures that tests marked as 'feature_complete' are also excluded from the test run.
Introduces a comprehensive script to build, run, and test NetAlertX within a Dockerized devcontainer environment, replicating the setup defined in . This script ensures consistency for CI/CD pipelines and local development.
The script addresses several environmental challenges:
- Properly builds the Docker image.
- Starts the container with necessary capabilities and host-gateway.
- Installs Python test dependencies (, , ) into the virtual environment.
- Executes the script to initialize services.
- Implements a healthcheck loop to wait for services to become fully operational before running tests.
- Configures to use a writable cache directory () to avoid permission issues.
- Includes a workaround to insert a dummy 'internet' device into the database, resolving a flakiness in caused by its reliance on unpredictable database state without altering any project code.
This script ensures a green test suite, making it suitable for automated testing in environments like GitHub Actions.
## Problem
PR #1182 introduced SafeConditionBuilder to prevent SQL injection, but it only
supported single-clause conditions. This broke notification filters using multiple
AND/OR clauses, causing user filters like:
`AND devLastIP NOT LIKE '192.168.50.%' AND devLastIP NOT LIKE '192.168.60.%'...`
to be rejected with "Unsupported condition pattern" errors.
## Root Cause
The `_parse_condition()` method used regex patterns that only matched single
conditions. When multiple clauses were chained, the entire string failed to match
any pattern and was rejected for security.
## Solution
Enhanced SafeConditionBuilder with compound condition support:
1. **Added `_is_compound_condition()`** - Detects multiple logical operators
while respecting quoted strings
2. **Added `_parse_compound_condition()`** - Splits compound conditions into
individual clauses and parses each one
3. **Added `_split_by_logical_operators()`** - Intelligently splits on AND/OR
while preserving operators in quoted strings
4. **Refactored `_parse_condition()`** - Routes to compound or single parser
5. **Created `_parse_single_condition()`** - Handles individual clauses (from
original `_parse_condition` logic)
## Testing
- Added comprehensive test suite (19 tests, 100% passing)
- Tested user's exact failing filter (6 AND clauses with NOT LIKE)
- Verified backward compatibility with single conditions
- Validated security (SQL injection attempts still blocked)
- Tested edge cases (mixed AND/OR, whitespace, empty conditions)
## Impact
- ✅ Fixes reported issue #1210
- ✅ Maintains all security protections from PR #1182
- ✅ Backward compatible with existing single-clause filters
- ✅ No breaking changes to API
Fixes#1210🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
is the schedule input is incorrect, an error message is logged and the plugin will NOT run.
Creating a dummy schedule would throw the system out of balance as there's the danger of schedules running out of sync.
`setup.sh` and `start.sh` combined into a single script
netalertx now starts and runs via systemd unit, can be started, stopped and restarted
`systemctl start netalertx`
`systemctl stop netalertx`
`systemctl status netalertx`
etc
Logs to `journalctl` and output can be followed with `journalctl -f`
Amalgamated chmods
tuned chmods based on earlier feedback and discussion
install script accepts command line parameter:
- 'install' to continue and DELETE ALL!
- 'update' to just update from GIT (keeps your db and settings)
- 'start' to do nothing, leave install as-is (just run the start script, set up services etc)
Please have a look, comments welcome :-)
The stdout and stderr are useful logs when debugging and trying to figure out why plugin output is causing backend to stop and exception. This commit enables output redirection to `/app/stdout.log` and `/app/stderr.log` from the backend. This may need backporting to production as it appears the fields are unused in the backend.
Additionally, when searching logs in the UI, the old logs appear first and your search results will invariably find old information when searching with ctrl-f-"string"-enter. So upon backend start and to keep them relevant, the stdout, stderr, and app logs are cleared.
- Added build_condition method to SafeConditionBuilder for structured conditions
- Fixed test_multiple_conditions_valid to test single conditions (more secure)
- Fixed test_build_condition tests by implementing the missing method
- Updated documentation to be more concise and human-friendly
- All 19 security tests now passing
- All SQL injection vectors properly blocked
Test Results:
✅ 19/19 tests passing
✅ All SQL injection attempts blocked
✅ Parameter binding working correctly
✅ Whitelist validation effective
The implementation provides comprehensive protection while maintaining
usability and backward compatibility.
avoiding repeat code in notification_instance.
Still a refactor would be great as the plugins_events table is getting filled in plugin.py and thus should be cleared in there.
Added some of the hand picked suggestions, including some outside of the previous changes.
Some will improve documentation, some readability and some will affect performance.
Adds bare metal installer for ubuntu. Tested with version 24.04. You may want to or have to change the PHPVERSION variable in the start script for other versions
Adds templates for enhancements to differentiate enhancing existing features and adding whole new ones.
Refactor/Code quality is mostly for dev/contributor use for doc purposes.
Security report is essential and also directs them to reach out with sensitive details directly
Translation requests added to allow additional accessibility to be requested as-needed and to allow prioritization based on need.
Add Table of Contents
Add Quick Start guide for Docker and Home Assistant
Fix typo in line 67 (was 33) lits -> list
Add Security & Privacy section
Add FAQ
Add Known Issues
Adds a new GitHub issue template for reporting documentation-related suggestions, inconsistencies, or improvements.
This template helps contributors provide clear, categorized feedback on docs, making it easier to track and prioritize structural or content-related issues separately from codebase bugs or feature requests.
Includes fields for:
- Affected document/section
- Description of the issue
- Proposed solution
- Type of documentation issue
- Optional implementation offer
Helps improve overall clarity, uniformity, and contributor experience with documentation.
This patch improves the resilience of the guess_icon function by sanitizing mac, vendor, and name fields to avoid crashes caused by unexpected data types (e.g., numeric hostnames).
Specifically:
mac is now cast to a string before being uppercased, with a newly added fallback to "00:00:00:00:00:00" if empty or invalid.
vendor is sanitized to a string before lowercasing, still defaulting to "unknown".
name is cast to a string before lowercasing, still falling back to "(unknown)" when empty.
This change not only resolves the error caused by numeric-only hostnames (which triggered an AttributeError due to calling .lower() on an int), but also proactively prevents similar crashes from malformed or unexpected input in the future.
References: Fixes issue #1088 and also let's me sleep a little easier tonight.
1/ Fix {} arround user_notifications.json file => if there is just one file, this create a file named "{user_notifications.json} ;)
2/ Fix group for the above file
This wasn't working for EMQX due to callback trigger delays it never connected. Also added a reconnect feature and a client id so it looks better in the EMQX connection dashboard. No confirmed to be working with Mosquitto and EMQX
Removed DEFER from ui_components as the device details page would not populate any more and the browser console would throw errors re function not found
Chnaged client creation logic to V2 API as we are already using Paho2.0. Chnaged version selection from Paho version (which should not have been a user choice) to MQTT Protocol selection, which can be v3 or v5. Most modern MQQTT brokers like Mosquitta or EMQX support v5.
This devcontainer replicates the production container as closely as practical, with a few development-oriented differences.
Key behavior
- No init process: Services are managed by shell scripts using killall, setsid, and nohup. Startup and restarts are script-driven rather than supervised by an init system.
- Autogenerated Dockerfile: The effective devcontainer Dockerfile is generated on demand by `.devcontainer/scripts/generate-dockerfile.sh`. It combines the root `Dockerfile` (with certain COPY instructions removed) and an extra "devcontainer" stage from `.devcontainer/resources/devcontainer-Dockerfile`. When you change the resource Dockerfile, re-run the generator to refresh `.devcontainer/Dockerfile`.
- Where to put setup: Prefer baking setup into `.devcontainer/resources/devcontainer-Dockerfile`. Use `.devcontainer/scripts/setup.sh` only for steps that must happen at container start (e.g., cleaning up nginx/php ownership, creating directories, touching runtime files) or depend on runtime paths.
Debugging (F5)
The Frontend and backend run in debug mode always. You can attach your debugger at any time.
- Python Backend Debug: Attach - The backend runs with a debugger on port 5678. Set breakpoints in the code and press F5 to begin triggering them.
- PHP Frontend (XDebug) Xdebug listens on 9003. Start listening and use an Xdebug extension in your browser to debug PHP.
Common workflows (F1->Tasks: Run Task)
- Regenerate the devcontainer Dockerfile: Run the VS Code task "Generate Dockerfile" or execute `.devcontainer/scripts/generate-dockerfile.sh`. The result is `.devcontainer/Dockerfile`.
- Re-run startup provisioning: Use the task "Re-Run Startup Script" to execute `.devcontainer/scripts/setup.sh` in the container.
- Start services:
- Backend (GraphQL/Flask): `.devcontainer/scripts/restart-backend.sh` starts it under debugpy and logs to `/app/log/app.log`
- Frontend (nginx + PHP-FPM): Started via setup.sh; can be restarted by the task "Start Frontend (nginx and PHP-FPM)".
"Workspace Instructions":"printf '\n\n<> DevContainer Ready!\n\n📁 To access /tmp folders in the workspace:\n File → Open Workspace from File → NetAlertX.code-workspace\n\n📖 See .devcontainer/WORKSPACE.md for details\n\n'"
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
validations:
required:true
required:true
- type:checkboxes
attributes:
label:Am I willing to test this? 🧪
description:I rely on the community to test unreleased features. If you are requesting a feature, please be willing to test it within 48h of test request. Otherwise, the feature might be pulled from the code base.
options:
- label:I will do my best to test this feature on the `netlertx-dev` image when requested within 48h and report bugs to help deliver a great user experience for everyone and not to break existing installations.
required:true
- type:checkboxes
attributes:
label:Can I help implement this? 👩💻👨💻
description:The maintainer will provide guidance and help. The implementer will read the PR guidelines https://jokob-sk.github.io/NetAlertX/DEV_ENV_SETUP/
- Bare-metal (community only support - Check Discord)
validations:
required:true
- type:textarea
attributes:
label:pialert.log
description:|
Logs with debug enabled (https://github.com/jokob-sk/Pi.Alert/blob/main/docs/DEBUG_TIPS.md) ⚠
***Generally speaking, all bug reports should have logs provided.***
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
Additionally, any additional info? Screenshots? References? Anything that will give us more context about the issue you are encountering!
You can use `tail -100 /home/pi/pialert/front/log/pialert.log` in teh container if you have troubles getting to the log files.
validations:
required:false
- type:checkboxes
attributes:
label:Debug enabled
description:I confirm I enabled `debug`
label:Debug or Trace enabled
description:I confirm I set `LOG_LEVEL` to `debug` or `trace`
options:
- label:I have read and followed the steps in the wiki link above and provided the required debug logs and the log section covers the time when the issue occurs.
required:true
- type:textarea
attributes:
label:Relevant `app.log` section
value:|
```
PASTE LOG HERE. Using the triple backticks preserves format.
```
description:|
Logs with debug enabled (https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEBUG_TIPS.md) ⚠
***Generally speaking, all bug reports should have logs provided.***
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
Additionally, any additional info? Screenshots? References? Anything that will give us more context about the issue you are encountering!
You can use `tail -100 /app/log/app.log` in the container if you have trouble getting to the log files or send them to netalertx@gmail.com with the issue number.
validations:
required:false
- type:textarea
attributes:
label:Docker Logs
description:|
You can retrieve the logs from Portainer -> Containers -> your NetAlertX container -> Logs or by running `sudo docker logs netalertx`.
value:|
```
PASTE DOCKER LOG HERE. Using the triple backticks preserves format.
description:'When submitting an issue enable LOG_LEVEL="trace" and re-search first.'
labels:['Setup 📥']
body:
- type:checkboxes
attributes:
label:Did I research?
description:Please confirm you checked the usual places before opening a setup support request.
options:
- label:I have searched the docs https://jokob-sk.github.io/NetAlertX/
required:true
- label:I have searched the existing open and closed issues
required:true
- label:I confirm my SCAN_SUBNETS is configured and tested as per https://github.com/jokob-sk/NetAlertX/blob/main/docs/SUBNETS.md
required:true
- type:checkboxes
attributes:
label:The issue occurs in the following browsers. Select at least 2.
description:This step helps me understand if this is a cache or browser-specific issue.
options:
- label:"Firefox"
- label:"Chrome"
- label:"Other (unsupported) - PRs welcome"
- label:"N/A - This is an issue with the backend"
- type:textarea
attributes:
label:What I want to do
description:Describe what you want to achieve.
validations:
required:false
- type:textarea
attributes:
label:Relevant settings you changed
description:|
Paste a screenshot or setting values of the settings you changed.
validations:
required:false
- type:textarea
attributes:
label:docker-compose.yml
description:|
Paste your `docker-compose.yml`
render:python
validations:
required:false
- type:dropdown
id:installation_type
attributes:
label:What installation are you running?
options:
- Production (netalertx)
- Dev (netalertx-dev)
- Home Assistant (addon)
- Home Assistant fa (full-access addon)
- Bare-metal (community only support - Check Discord)
validations:
required:true
- type:textarea
attributes:
label:app.log
description:|
Logs with debug enabled (https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEBUG_TIPS.md) ⚠
***Generally speaking, all bug reports should have logs provided.***
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
Additionally, any additional info? Screenshots? References? Anything that will give us more context about the issue you are encountering!
You can use `tail -100 /app/log/app.log` in the container if you have trouble getting to the log files.
validations:
required:false
- type:checkboxes
attributes:
label:Debug enabled
description:I confirm I enabled `debug`
options:
- label:I have read and followed the steps in the wiki link above and provided the required debug logs and the log section covers the time when the issue occurs.
<!-- Describe the purpose of this PR in one or two sentences. Example: "This PR updates the contributor guidelines by merging frontend and backend sections." -->
---
## 📝 What’s Changed?
<!-- Briefly outline what parts of the documentation were added, changed, removed, or reorganized -->
- Combined frontend and backend development guidelines into a single file
- Updated `mkdocs.yml` to reflect new structure
- Added clarification on contribution process
- Fixed outdated links in sidebar
---
## 🔍 Related Issue(s)
<!-- Link to related issues, discussions, or context (e.g., closes #123) -->
---
## ✅ Checklist
- [ ] I followed the formatting/style of existing documentation
- [ ] I have read the [Contribution Guidelines](../../CONTRIBUTING)
- [ ] I updated `mkdocs.yml` if necessary
- [ ] I verified links and references still work
- [ ] I checked that my changes improve clarity, structure, or accuracy
- [ ] I'm open to feedback and suggestions
---
## 🙋 Additional Notes
<!-- Optional: Include anything you want reviewers to be aware of -->
This is NetAlertX — network monitoring & alerting. NetAlertX provides Network inventory, awareness, insight, categorization, intruder and presence detection. This is a heavily community-driven project, welcoming of all contributions.
You are expected to be concise, opinionated, and biased toward security and simplicity.
## Architecture (what runs where)
- Backend (Python): main loop + GraphQL/REST endpoints orchestrate scans, plugins, workflows, notifications, and JSON export.
- API JSON Cache for UI: generated under `api/*.json`
Backend loop phases (see `server/__main__.py` and `server/plugin.py`): `once`, `schedule`, `always_after_scan`, `before_name_updates`, `on_new_device`, `on_notification`, plus ad‑hoc `run` via execution queue. Plugins execute as scripts that write result logs for ingestion.
## Plugin patterns that matter
- Manifest lives at `front/plugins/<code_name>/config.json`; `code_name` == folder, `unique_prefix` drives settings and filenames (e.g., `ARPSCAN`).
- Control via settings: `<PREF>_RUN` (phase), `<PREF>_RUN_SCHD` (cron-like), `<PREF>_CMD` (script path), `<PREF>_RUN_TIMEOUT`, `<PREF>_WATCH` (diff columns).
- Data contract: scripts write `/tmp/log/plugins/last_result.<PREF>.log` (pipe‑delimited: 9 required cols + optional 4). Use `front/plugins/plugin_helper.py`’s `Plugin_Objects` to sanitize text and normalize MACs, then `write_result_file()`.
* publisher: Sends notifications to services. Runs `on_notification`. Data source: self.
* dev scanner: Creates devices and manages online/offline status. Runs on `schedule`. Data source: self / SQLite DB.
* name discovery: Discovers device names via various protocols. Runs `before_name_updates` or on `schedule`. Data source: self.
* importer: Imports devices from another service. Runs on `schedule`. Data source: self / SQLite DB.
* system: Provides core system functionality. Runs on `schedule` or is always on. Data source: self / Template.
* other: Miscellaneous plugins. Runs at various times. Data source: self / Template.
### Plugin logging & outputs
- Always check relevant logs first.
- Use logging as shown in other plugins.
- Collect results with `Plugin_Objects.add_object(...)` during processing and call `plugin_objects.write_result_file()` exactly once at the end of the script.
- Prefer to log a brief summary before writing (e.g., total objects added) to aid troubleshooting; keep logs concise at `info` level and use `verbose` or `debug` for extra context.
- Do not write ad‑hoc files for results; the only consumable output is `last_result.<PREF>.log` generated by `Plugin_Objects`.
## API/Endpoints quick map
- Flask app: `server/api_server/api_server_start.py` exposes routes like `/device/<mac>`, `/devices`, `/devices/export/{csv,json}`, `/devices/import`, `/devices/totals`, `/devices/by-status`, plus `nettools`, `events`, `sessions`, `dbquery`, `metrics`, `sync`.
- Authorization: all routes expect header `Authorization: Bearer <API_TOKEN>` via `get_setting_value('API_TOKEN')`.
## Conventions & helpers to reuse
- Settings: add/modify via `ccd()` in `server/initialise.py` or per‑plugin manifest. Never hardcode ports or secrets; use `get_setting_value()`.
- Logging: use `logger.mylog(level, [message])`; levels: none/minimal/verbose/debug/trace.
- Time/MAC/strings: `helper.py` (`timeNowDB`, `normalize_mac`, sanitizers). Validate MACs before DB writes.
- DB helpers: prefer `server/db/db_helper.py` functions (e.g., `get_table_json`, device condition helpers) over raw SQL in new paths.
## Dev workflow (devcontainer)
- **Devcontainer philosophy: brutal simplicity.** One user, everything writable, completely idempotent. No permission checks, no conditional logic, no sudo needed. If something doesn't work, tear down the wall and rebuild - don't patch. We unit test permissions in the hardened build.
- **Permissions:** Never `chmod` or `chown` during operations. Everything is already writable. If you need permissions, the devcontainer setup is broken - fix `.devcontainer/scripts/setup.sh` or `.devcontainer/resources/devcontainer-Dockerfile` instead.
- **Files & Paths:** Use environment variables (`NETALERTX_DB`, `NETALERTX_LOG`, etc.) everywhere. `/data` for persistent config/db, `/tmp` for runtime logs/api/nginx state. Never hardcode `/data/db` or relative paths.
- **Database reset:** Use the `[Dev Container] Wipe and Regenerate Database` task. Kills backend, deletes `/data/{db,config}/*`, runs first-time setup scripts. Clean slate, no questions.
- Services: use tasks to (re)start backend and nginx/PHP-FPM. Backend runs with debugpy on 5678; attach a Python debugger if needed.
- Run a plugin manually: `python3 front/plugins/<code_name>/script.py` (ensure `sys.path` includes `/app/front/plugins` and `/app/server` like the template).
- Testing: pytest available via Alpine packages. Tests live in `test/`; app code is under `server/`. PYTHONPATH is preconfigured to include workspace and `/opt/venv` site‑packages.
- **Subprocess calls:** ALWAYS set explicit timeouts. Default to 60s minimum unless plugin config specifies otherwise. Nested subprocess calls (e.g., plugins calling external tools) need their own timeout - outer plugin timeout won't save you.
## What “done right” looks like
- When adding a plugin, start from `front/plugins/__template`, implement with `plugin_helper`, define manifest settings, and wire phase via `<PREF>_RUN`. Verify logs in `/tmp/log/plugins/` and data in `api/*.json`.
- When introducing new config, define it once (core `ccd()` or plugin manifest) and read it via helpers everywhere.
- When exposing new server functionality, add endpoints in `server/api_server/*` and keep authorization consistent; update UI by reading/writing JSON cache rather than bypassing the pipeline.
- Offer a quick validation step (log line, API hit, or JSON export) for anything you add.
- Be blunt about risks and when you offer suggestions ensure they're also blunt,
- Ask for confirmation before making changes that run code or change multiple files.
- Make statements actionable and specific; propose exact edits.
- Request confirmation before applying changes that affect more than a single, clearly scoped line or file.
- Ask the user to debug something for an actionable value if you're unsure.
- Be sure to offer choices when appropriate.
- Always understand the intent of the user's request and undo/redo as needed.
- Above all, use the simplest possible code that meets the need so it can be easily audited and maintained.
- Always leave logging enabled. If there is a possiblity it will be difficult to debug with current logging, add more logging.
- Always run the testFailure tool before executing any tests to gather current failure information and avoid redundant runs.
- Always prioritize using the appropriate tools in the environment first. As an example if a test is failing use `testFailure` then `runTests`. Never `runTests` first.
- Docker tests take an extremely long time to run. Avoid changes to docker or tests until you've examined the exisiting testFailures and runTests results.
- Environment tools are designed specifically for your use in this project and running them in this order will give you the best results.
"detail":"Generates devcontainer configs from the template. This must be run after changes to devcontainer to combine/merge them into the final config used by VS Code. Note- this has no bearing on the production or test image.",
"detail":"DANGER! Prunes all unused Docker resources (images, containers, volumes, networks). Any stopped container will be wiped and data will be lost. Use with caution.",
"detail":"The startup script runs directly after the container is started. It reprovisions permissions, links folders, and performs other setup tasks. Run this if you have made changes to the setup script or need to reprovision the container.",
"command":"docker buildx build -t netalertx-test . && echo '🧪 Unit Test Docker image built: netalertx-test'",
"detail":"This must be run after changes to the container. Unit testing will not register changes until after this image is rebuilt. It takes about 30 seconds to build unless changes to the venv stage are made. venv takes 90s alone.",
"presentation":{
"echo":true,
"reveal":"always",
"panel":"shared",
"showReuseMessage":false,
"group":"Any"
},
"problemMatcher":[],
"group":{
"kind":"build",
"isDefault":false
},
"icon":{
"id":"beaker",
"color":"terminal.ansiBlue"
}
},
{
"label":"[Dev Container] Wipe and Regenerate Database",
"command":"docker compose up -d --build --force-recreate",
"detail":"Before launching, ensure VSCode Ports are closed and services are stopped. Tasks: Stop Frontend & Backend Services & Remote: Close Unused Forwarded Ports to ensure proper operation of the new container.",
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
- Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include:
- The use of sexualized language or imagery, and sexual attention or advances of
any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email address,
without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official email address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at <jokob@duck.com>.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Ethical Use Clause (Project-Specific)
While NetAlertX is a tool designed to empower users with greater insight into their own networks, we expect and encourage all users to use this software **ethically and legally**.
- Do not use this software to scan or monitor networks without **explicit authorization**.
- Respect privacy, consent, and data protection laws applicable in your jurisdiction.
- Any use of NetAlertX for malicious surveillance, stalking, or unauthorized access is explicitly discouraged and may be grounds for removal from the community and revocation of support.
We reserve the right to take appropriate action to uphold the ethical integrity of this project.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
## Attribution
This Code of Conduct is adapted from the
[Contributor Covenant](https://www.contributor-covenant.org/), version 2.1,
The issue tracker is the preferred channel for bug reports, features requests and submitting pull requests.
Before submitting a new issue please spend a couple of minutes on research:
* Check [🛑 Common issues](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/DEBUG_TIPS.md#common-issues)
* Check [💡 Closed issues](https://github.com/jokob-sk/Pi.Alert/issues?q=is%3Aissue+is%3Aclosed) if a similar issue was solved in the past.
## Pull-requests (PRs)
If you submit a PR please do check that your changes are backward compatible with existing installations. Existing features should be always preserved.
Get visibility of what's going on on your WIFI/LAN network. Schedule scans for devices, port changes and get alerts if unknown devices or changes are found. Write your own [Plugins](https://github.com/jokob-sk/Pi.Alert/tree/main/front/plugins#readme) with auto-generated UI and in-build notification system.
# NetAlertX - Network, presence scanner and alert framework
Get visibility of what's going on on your WIFI/LAN network and enable presence detection of important devices. Schedule scans for devices, port changes and get alerts if unknown devices or changes are found. Write your own [Plugin](https://github.com/jokob-sk/NetAlertX/tree/main/docs/PLUGINS.md#readme) with auto-generated UI and in-build notification system. Build out and easily maintain your network source of truth (NSoT) and device inventory.
> ⚠️ **Important:** The documentation has been recently updated and some instructions may have changed.
> If you are using the currently live production image, please follow the instructions on [Docker Hub](https://hub.docker.com/r/jokobsk/netalertx) for building and running the container.
> These docs reflect the latest development version and may differ from the production image.
Start NetAlertX in seconds with Docker:
```bash
docker run -d \
--network=host \
--restart unless-stopped \
-v /local_data_dir/config:/data/config \
-v /local_data_dir/db:/data/db \
-v /etc/localtime:/etc/localtime:ro \
--tmpfs /tmp:uid=20211,gid=20211,mode=1700\
-e PORT=20211\
-e APP_CONF_OVERRIDE='{"GRAPHQL_PORT":"20214"}'\
ghcr.io/jokob-sk/netalertx:latest
```
To deploy a containerized instance directly from the source repository, execute the following BASH sequence:
# To customize: edit docker-compose.yaml and run that last command again
```
Need help configuring it? Check the [usage guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/README.md) or [full documentation](https://jokob-sk.github.io/NetAlertX/).
For Home Assistant users: [Click here to add NetAlertX](https://my.home-assistant.io/redirect/supervisor_add_addon_repository/?repository_url=https%3A%2F%2Fgithub.com%2Falexbelgium%2Fhassio-addons)
For other install methods, check the [installation docs](#-documentation)
Head to [https://netalertx.com/](https://netalertx.com/) for even more gifs and screenshots 📷.
</details>
<details>
<summary>❓ Why use PiAlert?</summary>
## 📦 Features
<hr>
### Scanners
Most of us don't know what's going on on our home network, but we want our family and data to be safe. _Command-line tools_ are great, but the output can be _hard to understand_ and action if you are not a network specialist.
The app scans your network for **New devices**, **New connections** (re-connections), **Disconnections**, **"Always Connected" devices down**, Devices **IP changes** and **Internet IP address changes**. Discovery & scan methods include: **arp-scan**, **Pi-hole - DB import**, **Pi-hole - DHCP leases import**, **Generic DHCP leases import**, **UNIFI controller import**, **SNMP-enabled router import**. Check the [Plugins](https://github.com/jokob-sk/NetAlertX/tree/main/docs/PLUGINS.md#readme) docs for a full list of avaliable plugins.
PiAlert gives you peace of mind. _Visualize and immediately report 📬_ what is going on in your network - this is the first step to enhance your _network security 🔐_.
### Notification gateways
PiAlert combines several network and other scanning tools 🔍 with notifications 📧 into one user-friendly package 📦.
Send notifications to more than 80+ services, including Telegram via [Apprise](https://hub.docker.com/r/caronc/apprise), or use native [Pushsafer](https://www.pushsafer.com/), [Pushover](https://www.pushover.net/), or [NTFY](https://ntfy.sh/) publishers.
Set up a _kill switch ☠_ for your network via a smart plug with the available [Home Assistant](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/HOME_ASSISTANT.md) integration. Implement custom automations with the [CSV device Exports 📤](https://github.com/jokob-sk/Pi.Alert/tree/main/front/plugins/csv_backup), [Webhooks](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/WEBHOOK_N8N.md), or [API endpoints](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/API.md) features.
### Integrations and Plugins
Extend the app if you want to create your own scanner [Plugin](https://github.com/jokob-sk/Pi.Alert/tree/main/front/plugins#readme) and handle the results and notifications in PiAlert.
Feed your data and device changes into [Home Assistant](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HOME_ASSISTANT.md), read [API endpoints](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md), or use [Webhooks](https://github.com/jokob-sk/NetAlertX/blob/main/docs/WEBHOOK_N8N.md) to setup custom automation flows. You can also
build your own scanners with the [Plugin system](https://github.com/jokob-sk/NetAlertX/tree/main/docs/PLUGINS.md#readme) in as little as [15 minutes](https://www.youtube.com/watch?v=cdbxlwiWhv8).
Looking forward to your contributions if you decide to share your work with the community ❤.
### Workflows
</details>
## Scan Methods, Notifications, Integration, Extension system
| Features | Details |
|-------------|-------------|
| 🔍 | The app scans your network for, **New devices**, **New connections** (re-connections), **Disconnections**, **"Always Connected" devices down**, Devices **IP changes** and **Internet IP address changes**. Discovery & scan methods include: **arp-scan**. **Pi-hole - DB import**, **Pi-hole - DHCP leases import**, **Generic DHCP leases import**. **UNIFI controller import**, **SNMP-enabled router import**. Check the [Plugins](https://github.com/jokob-sk/Pi.Alert/tree/main/front/plugins#readme) docs for more info on individual scans. |
|📧 | Send notifications to more than 80+ services, including Telegram via [Apprise](https://hub.docker.com/r/caronc/apprise), or use [Pushsafer](https://www.pushsafer.com/), [Pushover](https://www.pushover.net/), or [NTFY](https://ntfy.sh/). |
|🧩 | Feed your data and device changes into [Home Assistant](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/HOME_ASSISTANT.md), read [API endpoints](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/API.md), or use [Webhooks](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/WEBHOOK_N8N.md) to setup custom automation flows. |
|➕ | Build your own scanners with the [Plugin system](https://github.com/jokob-sk/Pi.Alert/tree/main/front/plugins#readme) |
The [workflows module](https://github.com/jokob-sk/NetAlertX/blob/main/docs/WORKFLOWS.md) allows to automate repetitive tasks, making network management more efficient. Whether you need to assign newly discovered devices to a specific Network Node, auto-group devices from a given vendor, unarchive a device if detected online, or automatically delete devices, this module provides the flexibility to tailor the automations to your needs.
- I don't get burned out and the app survives longer🔥🤯
- Regular updates to keep your data and family safe 🔄
- Better and more functionality➕
- Quicker and better support with issues 🆘
...or explore all the [documentation here](https://jokob-sk.github.io/NetAlertX/).
| [](https://github.com/sponsors/jokob-sk) | [](https://www.buymeacoffee.com/jokobsk) | [](https://www.patreon.com/user?u=84385063) |
| --- | --- | --- |
## 🔐 Security & Privacy
NetAlertX scans your local network and can store metadata about connected devices. By default, all data is stored **locally**. No information is sent to external services unless you explicitly configure notifications or integrations.
To further secure your installation:
- Run it behind a reverse proxy with authentication
- Use firewalls to restrict access to the web UI
- Regularly update to the latest version for security patches
See [Security Best Practices](https://github.com/jokob-sk/NetAlertX/security) for more details.
## ❓ FAQ
**Q: Why don’t I see any devices?**
A: Ensure the container has proper network access (e.g., use `--network host` on Linux). Also check that your scan method is properly configured in the UI.
**Q: Does this work on Wi-Fi-only devices like Raspberry Pi?**
A: Yes, but some scanners (e.g. ARP) work best on Ethernet. For Wi-Fi, try SNMP, DHCP, or Pi-hole import.
**Q: Will this send any data to the internet?**
A: No. All scans and data remain local, unless you set up cloud-based notifications.
**Q: Can I use this without Docker?**
A: Yes! You can install it bare-metal. See the [bare metal installation guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HW_INSTALL.md).
**Q: Where is the data stored?**
A: In the `/data/config` and `/data/db` folders. Back up these folders regularly.
## 🐞 Known Issues
- Some scanners (e.g. ARP) may not detect devices on different subnets. See the [Remote networks guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/REMOTE_NETWORKS.md) for workarounds.
- Wi-Fi-only networks may require alternate scanners for accurate detection.
- Notification throttling may be needed for large networks to prevent spam.
- On some systems, elevated permissions (like `CAP_NET_RAW`) may be needed for low-level scanning.
Check the [GitHub Issues](https://github.com/jokob-sk/NetAlertX/issues) for the latest bug reports and solutions and consult [the official documentation](https://jokob-sk.github.io/NetAlertX/).
Thank you to everyone who appreciates this tool and donates.
<details>
<summary>Click for more ways to donate</summary>
<hr>
| [](https://github.com/sponsors/jokob-sk) | [](https://www.buymeacoffee.com/jokobsk) | [](https://www.patreon.com/user?u=84385063) |
📧 Email me at [jokob@duck.com](mailto:jokob@duck.com?subject=PiAlert) if you want to get in touch or if I should add other sponsorship platforms.
📧 Email me at [jokob@duck.com](mailto:jokob@duck.com?subject=NetAlertX) if you want to get in touch or if I should add other sponsorship platforms.
</details>
### 🏗 Contributors
This project would be nothing without the amazing work of the community, with special thanks to:
### ⭐ Sponsors
> [pucherot/Pi.Alert](https://github.com/pucherot/Pi.Alert) (the original creator of PiAlert), [leiweibau](https://github.com/leiweibau/Pi.Alert): Dark mode (and much more), [Macleykun](https://github.com/Macleykun) (Help with Dockerfile clean-up), [vladaurosh](https://github.com/vladaurosh) for Alpine re-base help, [Final-Hawk](https://github.com/Final-Hawk) (Help with NTFY, styling and other fixes), [TeroRERO](https://github.com/terorero) (Spanish translations), [Data-Monkey](https://github.com/Data-Monkey), (Split-up of the python.py file and more), [cvc90](https://github.com/cvc90) (Spanish translation and various UI work) to name a few. Check out all the [amazing contributors](https://github.com/jokob-sk/NetAlertX/graphs/contributors).
Thank you to all the wonderful people who are sponsoring this project (=preventing my burnout🔥🤯):
<!-- SPONSORS-LIST DO NOT MODIFY BELOW -->
| All Sponsors |
|---|
<!-- SPONSORS-LIST DO NOT MODIFY ABOVE -->
### 🙏Contributors
This project would be nothing without the amazing work of the community, with special thanks to:
> [pucherot/Pi.Alert](https://github.com/pucherot/Pi.Alert) (the original creator of PiAlert), [leiweibau](https://github.com/leiweibau/Pi.Alert): Dark mode (and much more), [Macleykun](https://github.com/Macleykun) (Help with Dockerfile clean-up) [Final-Hawk](https://github.com/Final-Hawk) (Help with NTFY, styling and other fixes), [TeroRERO](https://github.com/terorero) (Spanish translations), [Data-Monkey](https://github.com/Data-Monkey), (Split-up of the python.py file and more), [cvc90](https://github.com/cvc90) (Spanish translation and various UI work) to name a few...
Here is everyone that helped and contributed to this project:
Proudly using [Weblate](https://hosted.weblate.org/projects/pialert/).
Proudly using [Weblate](https://hosted.weblate.org/projects/pialert/). Help out and suggest languages in the [online portal of Weblate](https://hosted.weblate.org/projects/pialert/core/).
Help out and suggest languages in the [online portal of Weblate](https://hosted.weblate.org/projects/pialert/core/).
### License
> GPL 3.0 | [Read more here](LICENSE.txt) | Source of the [animated GIF (Loading Animation)](https://commons.wikimedia.org/wiki/File:Loading_Animation.gif) | Source of the [selfhosted Fonts](https://github.com/adobe-fonts/source-sans)
#18 17.77 Building wheels for collected packages: python-nmap, yattag, aiofreepybox
#18 17.77 Building wheel for python-nmap (pyproject.toml): started
#18 17.95 Building wheel for python-nmap (pyproject.toml): finished with status 'done'
#18 17.96 Created wheel for python-nmap: filename=python_nmap-0.7.1-py2.py3-none-any.whl size=20679 sha256=ecd9b14109651cfaa5bf035f90076b9442985cc254fa5f8a49868fc896e86edb
#18 17.96 Stored in directory: /root/.cache/pip/wheels/06/fc/d4/0957e1d9942e696188208772ea0abf909fe6eb3d9dff6e5a9e
#18 17.96 Building wheel for yattag (pyproject.toml): started
#18 18.14 Building wheel for yattag (pyproject.toml): finished with status 'done'
#18 18.14 Created wheel for yattag: filename=yattag-1.16.1-py3-none-any.whl size=15930 sha256=2135fc2034a3847c81eb6a0d7b85608e8272339fa5c1961f87b02dfe6d74d0ad
#18 18.14 Stored in directory: /root/.cache/pip/wheels/d2/2f/52/049ff4f7c8c9c932b2ece7ec800d7facf2a141ac5ab0ce7e51
#18 18.15 Building wheel for aiofreepybox (pyproject.toml): started
#18 18.36 Building wheel for aiofreepybox (pyproject.toml): finished with status 'done'
#18 18.36 Created wheel for aiofreepybox: filename=aiofreepybox-6.0.0-py3-none-any.whl size=60051 sha256=dbdee5350b10b6550ede50bc779381b7f39f1e5d5da889f2ee98cb5a869d3425
#18 18.36 Stored in directory: /tmp/pip-ephem-wheel-cache-93bgc4e2/wheels/3c/d3/ae/fb97a84a29a5fbe8517de58d67e66586505440af35981e0dd3
#18 18.36 Successfully built python-nmap yattag aiofreepybox
- The initial scan can take up to 15min (with 50 devices and MQTT). Subsequent ones 3 and 5 minutes so wait that long for all of the scans to run.
### Docker environment variables
| Variable | Description | Default |
| :------------- |:-------------| -----:|
| `PORT` |Port of the web interface | `20211` |
| `LISTEN_ADDR` |Set the specific IP Address for the listener address for the nginx webserver (web interface). This could be useful when using multiple subnets to hide the web interface from all untrusted networks. | `0.0.0.0` |
|`TZ` |Time zone to display stats correctly. Find your time zone [here](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) | `Europe/Berlin` |
|`HOST_USER_GID` |User ID (UID) to map the user in the container to a server user with sufficient read&write permissions on the mapped files | `1000` |
|`HOST_USER_ID` |User Group ID (GID) to map the user group in the container to a server user group with sufficient read&write permissions on the mapped files | `1000` |
|`ALWAYS_FRESH_INSTALL` | Setting `ALWAYS_FRESH_INSTALL=true` will delete the content of the `/db`&`/config` folders. For testing purposes. Can be coupled with [watchtower](https://github.com/containrrr/watchtower) to have an always freshly installed `pi.alert`/`_dev` image. | `N/A` |
### Docker paths
> [!NOTE]
> See also [Backup strategies](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/BACKUPS.md).
| ✅ | `:/home/pi/pialert/config` | Folder which will contain the `pialert.conf`&`devices.csv` ([read about devices.csv](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/DEVICES_BULK_EDITING.md)) files (see below for details) |
| ✅ | `:/home/pi/pialert/db` | Folder which will contain the `pialert.db` file |
| | `:/home/pi/pialert/front/log` | Logs folder useful for debugging if you have issues setting up the container |
| | `:/etc/pihole/pihole-FTL.db` | PiHole's `pihole-FTL.db` database file. Required if you want to use PiHole DB mapping. |
| | `:/etc/pihole/dhcp.leases` | PiHole's `dhcp.leases` file. Required if you want to use PiHole `dhcp.leases` file. This has to be matched with a corresponding `DHCPLSS_paths_to_check` setting entry (the path in the container must contain `pihole`)|
| | `:/home/pi/pialert/front/api` | A simple [API endpoint](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/API.md) containing static (but regularly updated) json and other files. |
| | `:/home/pi/pialert/front/plugins/<plugin>/ignore_plugin` | Map a file `ignore_plugin` to ignore a plugin. Plugins can be soft-disabled via settings. More in the [Plugin docs](https://github.com/jokob-sk/Pi.Alert/blob/main/front/plugins/README.md). |
| | `:/etc/resolv.conf` | Use a custom `resolv.conf` file for [better name resolution](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/REVERSE_DNS.md). |
### Modify the config (`pialert.conf`) only if UI is not available
- The preferred way is to manage the configuration via the Settings section in the UI.
- You can modify [pialert.conf](https://github.com/jokob-sk/Pi.Alert/tree/main/config) directly, if needed.
- If unavailable, the app generates a default `pialert.conf` and `pialert.db` file on the first run.
#### Important settings
These are the most important settings to get at least some output in your Devices screen. Usually, only one approach is used, but you should be able to combine these approaches.
##### For arp-scan: ARPSCAN_RUN, SCAN_SUBNETS
- ❗ To use the arp-scan method, you need to set the `SCAN_SUBNETS` variable. See the documentation on how [to setup SUBNETS, VLANs & limitations](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/SUBNETS.md)
##### For pihole: PIHOLE_RUN, DHCPLSS_RUN
There are 2 approaches how to get PiHole devices imported. Via the PiHole import (PIHOLE) plugin or DHCP leases (DHCPLSS) plugin.
**PiHole (Device sync)**
*`PIHOLE_RUN`: You need to map `:/etc/pihole/pihole-FTL.db` in the `docker-compose.yml` file if you enable this setting.
**DHCP Leases (Device import)**
*`DHCPLSS_RUN`: You need to map `:/etc/pihole/dhcp.leases` in the `docker-compose.yml` file if you enable this setting.
* The above setting has to be matched with a corresponding `DHCPLSS_paths_to_check` setting entry (the path in the container must contain `pihole` as PiHole uses a different format of the `dhcp.leases` file).
> [!NOTE]
> It's recommended to use the same schedule interval for all plugins responsible for discovering new devices.
#### 🧭 Community guides
Use the official installation guides at first and use community content as suplementary material. Open an issue if you'd like to add your link to the list 🙏
- 📄 [How to Install Pi.Alert on Your Synology NAS - Marius hosting (English)](https://mariushosting.com/how-to-install-pi-alert-on-your-synology-nas/) (Updated frequently)
- 📄 [Using the PiAlert Network Security Scanner on a Raspberry Pi - PiMyLifeUp (English)](https://pimylifeup.com/raspberry-pi-pialert/)
- ▶ [How to Setup Pi.Alert on Your Synology NAS - Digital Aloha (English)](https://www.youtube.com/watch?v=M4YhpuRFaUg)
- 📄 [시놀/헤놀에서 네트워크 스캐너 Pi.Alert Docker로 설치 및 사용하기 (Korean)](https://blog.dalso.org/article/%EC%8B%9C%EB%86%80-%ED%97%A4%EB%86%80%EC%97%90%EC%84%9C-%EB%84%A4%ED%8A%B8%EC%9B%8C%ED%81%AC-%EC%8A%A4%EC%BA%90%EB%84%88-pi-alert-docker%EB%A1%9C-%EC%84%A4%EC%B9%98-%EB%B0%8F-%EC%82%AC%EC%9A%A9) (July 2023)
- ▶ [Pi.Alert auf Synology & Docker by - Jürgen Barth (German)](https://www.youtube.com/watch?v=-ouvA2UNu-A) (March 2023)
- ▶ [Top Docker Container for Home Server Security - VirtualizationHowto (English)](https://www.youtube.com/watch?v=tY-w-enLF6Q) (March 2023)
- ▶ [Pi.Alert or WatchYourLAN can alert you to unknown devices appearing on your WiFi or LAN network - Danie van der Merwe (English)](https://www.youtube.com/watch?v=v6an9QG2xF0) (November 2022)
> Ordered by last update time.
### **Common issues**
💡 Before creating a new issue, please check if a similar issue was [already resolved](https://github.com/jokob-sk/Pi.Alert/issues?q=is%3Aissue+is%3Aclosed).
⚠ Check also common issues and [debugging tips](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/DEBUG_TIPS.md).
> [!NOTE]
> You can bulk-update devices via the [CSV import method](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/DEVICES_BULK_EDITING.md).
## 📄 docker-compose.yml Examples
### Example 1
```yaml
version:"3"
services:
pialert:
container_name:pialert
# use the below line if you want to test the latest dev image
# (optional) useful for debugging if you have issues setting up the container
- ${LOGS_LOCATION}:/home/pi/pialert/front/log
environment:
- TZ=${TZ}
- HOST_USER_ID=${HOST_USER_ID}
- HOST_USER_GID=${HOST_USER_GID}
- PORT=${PORT}
```
`.env` file
```yaml
#GLOBAL PATH VARIABLES
APP_DATA_LOCATION=/path/to/docker_appdata
APP_CONFIG_LOCATION=/path/to/docker_config
LOGS_LOCATION=/path/to/docker_logs
#ENVIRONMENT VARIABLES
TZ=Europe/Paris
HOST_USER_ID=1000
HOST_USER_GID=1000
PORT=20211
#DEVELOPMENT VARIABLES
DEV_LOCATION=/path/to/local/source/code
```
To run the container execute: `sudo docker-compose --env-file /path/to/.env up`
### Example 4
Courtesy of [pbek](https://github.com/pbek). The volume `pialert_db` is used by the db directory. The two config files are mounted directly from a local folder to their places in the config folder. You can backup the `docker-compose.yaml` folder and the docker volumes folder.
```yaml
pialert:
# use the below line if you want to test the latest dev image
Big thanks to <ahref="https://github.com/Macleykun">@Macleykun</a>& for help and tips&tricks for Dockerfile(s) and <ahref="https://github.com/vladaurosh">@vladaurosh</a> for Alpine re-base help.
## ❤ Support me
Get:
- Regular updates to keep your data and family safe 🔄
- Better and more functionality➕
- I don't get burned out and the app survives longer🔥🤯
- Quicker and better support with issues 🆘
- Less grumpy me 😄
| [](https://github.com/sponsors/jokob-sk) | [](https://www.buymeacoffee.com/jokobsk) | [](https://www.patreon.com/user?u=84385063) |
- El escaneo inicial puede tardar hasta 15 minutos (con 50 dispositivos y MQTT). Los siguientes pueden durar entre 3 y 5 minutos, así que espere a que se ejecuten todos los escaneos.
### Variables de entorno Docker
| Variable | Descripción | Predeterminado |
| :------------- |:-------------| -----:|
| `PORT` |Puerto de la interfaz web | `20211` |
|`TZ` |Zona horaria para mostrar correctamente las estadísticas. Encuentre su zona horaria [aquí](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) | `Europe/Berlin` |
|`HOST_USER_GID` |ID de usuario (UID) para asignar el usuario del contenedor a un usuario del servidor con suficientes permisos de lectura y escritura en los archivos asignados | `1000` |
|`HOST_USER_ID` |ID de grupo de usuarios (GID) para asignar el grupo de usuarios del contenedor a un grupo de usuarios del servidor con suficientes permisos de lectura y escritura en los archivos asignados | `1000` |
| **Obligatorio** | `:/home/pi/pialert/config` | Carpeta que contendrá el archivo `pialert.conf` (para más detalles, véase más abajo) |
| **Obligatorio** | `:/home/pi/pialert/db` | Carpeta que contendrá el archivo `pialert.db` |
|Opcional| `:/home/pi/pialert/front/log` | Carpeta de registros útil para depurar si tiene problemas al configurar el contenedor |
|Opcional| `:/etc/pihole/pihole-FTL.db` | Archivo de base de datos `pihole-FTL.db` de PiHole. Necesario si desea utilizar PiHole |
|Opcional| `:/etc/pihole/dhcp.leases` | Archivo `dhcp.leases` de PiHole. Obligatorio si desea utilizar el archivo `dhcp.leases` de PiHole. Tiene que coincidir con la correspondiente entrada de configuración `DHCPLSS_paths_to_check`. (La ruta en el contenedor debe contener `pihole`)|
|Opcional| `:/home/pi/pialert/front/api` | Una simple [API endpoint](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/API.md) que contiene archivos json estáticos (pero actualizados regularmente) y otros archivos. |
### Configurar (`pialert.conf`)
- Si no está disponible, la aplicación genera un archivo `pialert.conf` y `pialert.db` por defecto en la primera ejecución.
- La forma preferida es gestionar la configuración a través de la sección "Configuración" de la interfaz de usuario.
- Puede modificar [pialert.conf](https://github.com/jokob-sk/Pi.Alert/tree/main/config) directamente, si es necesario.
#### Ajustes importantes
Estos son los ajustes más importantes para obtener al menos alguna salida en la pantalla de tus Dispositivos. Por lo general, sólo se utiliza un enfoque, pero usted debe ser capaz de combinar estos enfoques.
##### Para arp-scan: ARPSCAN_RUN, SCAN_SUBNETS
- ❗ Para usar el método arp-scan, necesitas configurar la variable `SCAN_SUBNETS`. Consulte la documentación sobre cómo [configurar SUBNETS, VLANs y limitaciones](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/SUBNETS.md)
##### Para pihole: PIHOLE_RUN, DHCPLSS_RUN
Hay dos maneras de importar dispositivos PiHole. A través del plugin de importación PiHole (PIHOLE) o del plugin DHCP leases (DHCPLSS).
**PiHole(Sincronización de dispositivos)**
* `PIHOLE_RUN`:Necesitas mapear `:/etc/pihole/pihole-FTL.db` en el fichero `docker-compose.yml` si activas esta opción.
**DHCPLeases (Importación de dispositivos)**
* `DHCPLSS_RUN`:Es necesario mapear `:/etc/pihole/dhcp.leases` en el fichero `docker-compose.yml` si se activa esta opción.
*La configuración anterior tiene que coincidir con una entrada de configuración correspondiente `DHCPLSS_paths_to_check` (la ruta en el contenedor debe contener `pihole` ya que PiHole utiliza un formato diferente del archivo `dhcp.leases`).
> Se recomienda utilizar el mismo intervalo de programación para todos los plugins responsables de descubrir nuevos dispositivos.
### **Problemas comunes**
💡 Antes de crear una nueva incidencia, comprueba si ya se ha resuelto una [incidencia similar](https://github.com/jokob-sk/Pi.Alert/issues?q=is%3Aissue+is%3Aclosed).
⚠ Compruebe también los problemas comunes y los [consejos de depuración](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/DEBUG_TIPS.md).
## 📄 Ejemplos
### Ejemplo 1
```yaml
version:"3"
services:
pialert:
container_name:pialert
# Utilice la siguiente línea si desea probar la última imagen de desarrollo
# (optional) useful for debugging if you have issues setting up the container
- ${LOGS_LOCATION}:/home/pi/pialert/front/log
environment:
- TZ=${TZ}
- HOST_USER_ID=${HOST_USER_ID}
- HOST_USER_GID=${HOST_USER_GID}
- PORT=${PORT}
```
`.env` file
```yaml
#VARIABLES DE RUTA GLOBAL
APP_DATA_LOCATION=/path/to/docker_appdata
APP_CONFIG_LOCATION=/path/to/docker_config
LOGS_LOCATION=/path/to/docker_logs
#VARIABLES DE ENTORNO
TZ=Europe/Paris
HOST_USER_ID=1000
HOST_USER_GID=1000
PORT=20211
#VARIABLES DE DESARROLLO
DEV_LOCATION=/path/to/local/source/code
```
Para ejecutar el contenedor ejecute: `sudo docker-compose --env-file /path/to/.env up`
### Example 4
Por cortesía de [pbek](https://github.com/pbek). El volumen `pialert_db` es utilizado por el directorio db. Los dos archivos de configuración se montan directamente desde una carpeta local a sus lugares en la carpeta config. Puedes hacer una copia de seguridad de la carpeta `docker-compose.yaml` y de la carpeta docker volumes.
```yaml
pialert:
# Utilice la siguiente línea si desea probar la última imagen de desarrollo
<ahref="https://github.com/sponsors/jokob-sk"target="_blank"><imgsrc="https://i.imgur.com/X6p5ACK.png"alt="Sponsor Me on GitHub"style="height: 30px !important;width: 117px !important;"width="150px"></a>
<ahref="https://www.buymeacoffee.com/jokobsk"target="_blank"><imgsrc="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png"alt="Buy Me A Coffee"style="height: 30px !important;width: 117px !important;"width="117px"height="30px"></a>
<ahref="https://www.patreon.com/user?u=84385063"target="_blank"><imgsrc="https://upload.wikimedia.org/wikipedia/commons/thumb/8/82/Patreon_logo_with_wordmark.svg/512px-Patreon_logo_with_wordmark.svg.png"alt="Support me on patreon"style="height: 30px !important;width: 117px !important;"width="117px"></a>
PiAlert comes with a simple API. These API endpoints are static files, that are periodically updated based on your settings.
This API provides programmatic access to **devices, events, sessions, metrics, network tools, and sync** in NetAlertX. It is implemented as a **REST and GraphQL server**. All requests require authentication via **API Token** (`API_TOKEN` setting) unless explicitly noted. For example, to authorize a GraphQL request, you need to use a `Authorization: Bearer API_TOKEN` header as per example below:
### When are the endpoints updated
The endpoints are updated when objects in the API endpoints are changed.
### Location of the endpoints
In the container, these files are located under the `/home/pi/pialert/front/api/` folder and thus on the `<pialert_url>/api/<File name>` url.
### Available endpoints
You can access the following files:
| File name | Description |
|----------------------|----------------------|
| `notification_text.txt` | The plain text version of the last notification. |
| `notification_text.html` | The full HTML of the last email notification. |
| `notification_json_final.json` | The json version of the last notification (e.g. used for webhooks - [sample JSON](https://github.com/jokob-sk/Pi.Alert/blob/main/back/webhook_json_sample.json)). |
| `table_devices.json` | The current (at the time of the last update as mentioned above on this page) state of all of the available Devices detected by the app. |
| `table_pholus_scan.json` | The latest state of the [pholus](https://github.com/jokob-sk/Pi.Alert/tree/main/pholus) (A multicast DNS and DNS Service Discovery Security Assessment Tool) scan results. |
| `table_plugins_events.json` | The list of the unprocessed (pending) notification events (plugins_events DB table). |
| `table_plugins_history.json` | The list of notification events history. |
| `table_plugins_objects.json` | The content of the plugins_objects table. Find more info on the [Plugin system here](https://github.com/jokob-sk/Pi.Alert/tree/main/front/plugins)|
| `language_strings.json` | The content of the language_strings table, which in turn is loaded from the plugins `config.json` definitions. |
| `table_custom_endpoint.json` | A custom endpoint generated by the SQL query specified by the `API_CUSTOM_SQL` setting. |
| `table_settings.json` | The content of the settings table. |
| `app_state.json` | Contains the current application state. |
Current/latest state of the aforementioned files depends on your settings.
### JSON Data format
The endpoints starting with the `table_` prefix contain most, if not all, data contained in the corresponding database table. The common format for those is:
```JSON
{
"data":[
{
"db_column_name":"data",
"db_column_name2":"data2"
},
{
"db_column_name":"data3",
"db_column_name2":"data4"
}
]
}
```
Example JSON of the `table_devices.json` endpoint with two Devices (database rows):
The **Database Query API** provides direct, low-level access to the NetAlertX database. It allows **read, write, update, and delete** operations against tables, using **base64-encoded** SQL or structured parameters.
> [!Warning]
> This API is primarily used internally to generate and render the application UI. These endpoints are low-level and powerful, and should be used with caution. Wherever possible, prefer the [standard API endpoints](API.md). Invalid or unsafe queries can corrupt data.
> If you need data in a specific format that is not already provided, please open an issue or pull request with a clear, broadly useful use case. This helps ensure new endpoints benefit the wider community rather than relying on raw database queries.
---
## Authentication
All `/dbquery/*` endpoints require an API token in the HTTP headers:
```http
Authorization: Bearer <API_TOKEN>
```
If the token is missing or invalid:
```json
{"error":"Forbidden"}
```
---
## Endpoints
### 1. `POST /dbquery/read`
Execute a **read-only** SQL query (e.g., `SELECT`).
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/dbquery/read"\
-H "Authorization: Bearer <API_TOKEN>"\
-H "Accept: application/json"\
-H "Content-Type: application/json"\
-d '{
"rawSql": "U0VMRUNUICogRlJPTSBERVZJQ0VT"
}'
```
---
### 2. `POST /dbquery/update` (safer than `/dbquery/write`)
Update rows in a table by `columnName` + `id`. `/dbquery/update` is parameterized to reduce the risk of SQL injection, while `/dbquery/write` executes raw SQL directly.
#### Request Body
```json
{
"columnName":"devMac",
"id":["AA:BB:CC:DD:EE:FF"],
"dbtable":"Devices",
"columns":["devName","devOwner"],
"values":["Laptop","Alice"]
}
```
#### Response
```json
{"success":true,"updated_count":1}
```
#### `curl` Example
```bash
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/dbquery/update"\
-H "Authorization: Bearer <API_TOKEN>"\
-H "Accept: application/json"\
-H "Content-Type: application/json"\
-d '{
"columnName": "devMac",
"id": ["AA:BB:CC:DD:EE:FF"],
"dbtable": "Devices",
"columns": ["devName", "devOwner"],
"values": ["Laptop", "Alice"]
}'
```
---
### 3. `POST /dbquery/write`
Execute a **write query** (`INSERT`, `UPDATE`, `DELETE`).
Manage a **single device** by its MAC address. Operations include retrieval, updates, deletion, resetting properties, and copying data between devices. All endpoints require **authorization** via Bearer token.
---
## 1. Retrieve Device Details
* **GET** `/device/<mac>`
Fetch all details for a single device, including:
* Computed status (`devStatus`) → `On-line`, `Off-line`, or `Down`
* Session and event counts (`devSessions`, `devEvents`, `devDownAlerts`)
* Presence hours (`devPresenceHours`)
* Children devices (`devChildrenDynamic`) and NIC children (`devChildrenNicsDynamic`)
**Special case**: `mac=new` returns a template for a new device with default values.
**Response** (success):
```json
{
"devMac":"AA:BB:CC:DD:EE:FF",
"devName":"Net - Huawei",
"devOwner":"Admin",
"devType":"Router",
"devVendor":"Huawei",
"devStatus":"On-line",
"devSessions":12,
"devEvents":5,
"devDownAlerts":1,
"devPresenceHours":32,
"devChildrenDynamic":[...],
"devChildrenNicsDynamic":[...],
...
}
```
**Error Responses**:
* Device not found → HTTP 404
* Unauthorized → HTTP 403
---
## 2. Update Device Fields
* **POST** `/device/<mac>`
Create or update a device record.
**Request Body**:
```json
{
"devName":"New Device",
"devOwner":"Admin",
"createNew":true
}
```
**Behavior**:
* If `createNew=true` → creates a new device
* Otherwise → updates existing device fields
**Response**:
```json
{
"success":true
}
```
**Error Responses**:
* Unauthorized → HTTP 403
---
## 3. Delete a Device
* **DELETE** `/device/<mac>/delete`
Deletes the device with the given MAC.
**Response**:
```json
{
"success":true
}
```
**Error Responses**:
* Unauthorized → HTTP 403
---
## 4. Delete All Events for a Device
* **DELETE** `/device/<mac>/events/delete`
Removes all events associated with a device.
**Response**:
```json
{
"success":true
}
```
---
## 5. Reset Device Properties
* **POST** `/device/<mac>/reset-props`
Resets the device's custom properties to default values.
**Request Body**: Optional JSON for additional parameters.
**Response**:
```json
{
"success":true
}
```
---
## 6. Copy Device Data
* **POST** `/device/copy`
Copy all data from one device to another. If a device exists with `macTo`, it is replaced.
**Request Body**:
```json
{
"macFrom":"AA:BB:CC:DD:EE:FF",
"macTo":"11:22:33:44:55:66"
}
```
**Response**:
```json
{
"success":true,
"message":"Device copied from AA:BB:CC:DD:EE:FF to 11:22:33:44:55:66"
}
```
**Error Responses**:
* Missing `macFrom` or `macTo` → HTTP 400
* Unauthorized → HTTP 403
---
## 7. Update a Single Column
* **POST** `/device/<mac>/update-column`
Update one specific column for a device.
**Request Body**:
```json
{
"columnName":"devName",
"columnValue":"Updated Device Name"
}
```
**Response** (success):
```json
{
"success":true
}
```
**Error Responses**:
* Device not found → HTTP 404
* Missing `columnName` or `columnValue` → HTTP 400
* Unauthorized → HTTP 403
---
## Example `curl` Requests
**Get Device Details**:
```bash
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF"\
-H "Authorization: Bearer <API_TOKEN>"
```
**Update Device Fields**:
```bash
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF"\
The Devices Collection API provides operations to **retrieve, manage, import/export, and filter devices** in bulk. All endpoints require **authorization** via Bearer token.
---
## Endpoints
### 1. Get All Devices
* **GET** `/devices`
Retrieves all devices from the database.
**Response** (success):
```json
{
"success":true,
"devices":[
{
"devName":"Net - Huawei",
"devMAC":"AA:BB:CC:DD:EE:FF",
"devIP":"192.168.1.1",
"devType":"Router",
"devFavorite":0,
"devStatus":"online"
},
...
]
}
```
**Error Responses**:
* Unauthorized → HTTP 403
---
### 2. Delete Devices by MAC
* **DELETE** `/devices`
Deletes devices by MAC address. Supports exact matches or wildcard `*`.
**Request Body**:
```json
{
"macs":["AA:BB:CC:DD:EE:FF","11:22:33:*"]
}
```
**Behavior**:
* If `macs` is omitted or `null` → deletes **all devices**.
* Wildcards `*` match multiple devices.
**Response**:
```json
{
"success":true,
"deleted_count":5
}
```
**Error Responses**:
* Unauthorized → HTTP 403
---
### 3. Delete Devices with Empty MACs
* **DELETE** `/devices/empty-macs`
Removes all devices where MAC address is null or empty.
**Response**:
```json
{
"success":true,
"deleted":3
}
```
---
### 4. Delete Unknown Devices
* **DELETE** `/devices/unknown`
Deletes devices with names marked as `(unknown)` or `(name not found)`.
**Response**:
```json
{
"success":true,
"deleted":2
}
```
---
### 5. Export Devices
* **GET** `/devices/export` or `/devices/export/<format>`
Exports all devices in **CSV** (default) or **JSON** format.
**Query Parameter / URL Parameter**:
*`format` (optional) → `csv` (default) or `json`
**CSV Response**:
* Returns as a downloadable CSV file: `Content-Disposition: attachment; filename=devices.csv`
GraphQL queries are **read-optimized for speed**. Data may be slightly out of date until the file system cache refreshes. The GraphQL endpoints allow you to access the following objects:
* Devices
* Settings
* Language Strings (LangStrings)
## Endpoints
* **GET** `/graphql`
Returns a simple status message (useful for browser or debugging).
* **POST** `/graphql`
Execute GraphQL queries against the `devicesSchema`.
"setDescription":"Types of devices considered as network infrastructure.",
"setType":"list",
"setOptions":"[\"Router\",\"Switch\",\"AP\"]",
"setGroup":"Network",
"setValue":"[\"Router\",\"Switch\"]",
"setEvents":null,
"setOverriddenByEnv":true
}
],
"count":2
}
}
}
```
---
## LangStrings Query
The **LangStrings query** provides access to localized strings. Supports filtering by `langCode` and `langStringKey`. If the requested string is missing or empty, you can optionally fallback to `en_us`.
"langStringText":"Other, non-device scanner plugins that are currently enabled."// falls back to en_us if empty
}
]
}
}
}
```
---
## Notes
* Device, settings, and LangStrings queries can be combined in **one request** since GraphQL supports batching.
* The `fallback_to_en` feature ensures UI always has a value even if a translation is missing.
* Data is **cached in memory** per JSON file; changes to language or plugin files will only refresh after the cache detects a file modification.
* The `setOverriddenByEnv` flag helps identify setting values that are locked at container runtime.
* The schema is **read-only** — updates must be performed through other APIs or configuration management. See the other [API](API.md) endpoints for details.
Manage or purge application log files stored under `/app/log` and manage the execution queue. These endpoints are primarily used for maintenance tasks such as clearing accumulated logs or adding system actions without restarting the container.
Only specific, pre-approved log files can be purged for security and stability reasons.
---
## Delete (Purge) a Log File
* **DELETE** `/logs?file=<log_file>` → Purge the contents of an allowed log file.
**Query Parameter:**
*`file` → The name of the log file to purge (e.g., `app.log`, `stdout.log`)
**Allowed Files:**
```
app.log
app_front.log
IP_changes.log
stdout.log
stderr.log
app.php_errors.log
execution_queue.log
db_is_locked.log
```
**Authorization:**
Requires a valid API token in the `Authorization` header.
The Net Tools API provides **network diagnostic utilities**, including Wake-on-LAN, traceroute, speed testing, DNS resolution, nmap scanning, and internet connection information.
All endpoints require **authorization** via Bearer token.
---
## Endpoints
### 1. Wake-on-LAN
* **POST** `/nettools/wakeonlan`
Sends a Wake-on-LAN packet to wake a device.
**Request Body** (JSON):
```json
{
"devMac":"AA:BB:CC:DD:EE:FF"
}
```
**Response** (success):
```json
{
"success":true,
"message":"WOL packet sent",
"output":"Sent magic packet to AA:BB:CC:DD:EE:FF"
}
```
**Error Responses**:
* Invalid MAC address → HTTP 400
* Command failure → HTTP 500
---
### 2. Traceroute
* **POST** `/nettools/traceroute`
Performs a traceroute to a specified IP address.
**Request Body**:
```json
{
"devLastIP":"192.168.1.1"
}
```
**Response** (success):
```json
{
"success":true,
"output":"traceroute output as string"
}
```
**Error Responses**:
* Invalid IP → HTTP 400
* Traceroute command failure → HTTP 500
---
### 3. Speedtest
* **GET** `/nettools/speedtest`
Runs an internet speed test using `speedtest-cli`.
**Response** (success):
```json
{
"success":true,
"output":[
"Ping: 15 ms",
"Download: 120.5 Mbit/s",
"Upload: 22.4 Mbit/s"
]
}
```
**Error Responses**:
* Command failure → HTTP 500
---
### 4. DNS Lookup (nslookup)
* **POST** `/nettools/nslookup`
Resolves an IP address or hostname using `nslookup`.
**Request Body**:
```json
{
"devLastIP":"8.8.8.8"
}
```
**Response** (success):
```json
{
"success":true,
"output":[
"Server: 8.8.8.8",
"Address: 8.8.8.8#53",
"Name: google-public-dns-a.google.com"
]
}
```
**Error Responses**:
* Missing or invalid `devLastIP` → HTTP 400
* Command failure → HTTP 500
---
### 5. Nmap Scan
* **POST** `/nettools/nmap`
Runs an nmap scan on a target IP address or range.
**Request Body**:
```json
{
"scan":"192.168.1.0/24",
"mode":"fast"
}
```
**Supported Modes**:
| Mode | nmap Arguments |
| --------------- | -------------- |
| `fast` | `-F` |
| `normal` | default |
| `detail` | `-A` |
| `skipdiscovery` | `-Pn` |
**Response** (success):
```json
{
"success":true,
"mode":"fast",
"ip":"192.168.1.0/24",
"output":[
"Starting Nmap 7.91",
"Host 192.168.1.1 is up",
"... scan results ..."
]
}
```
**Error Responses**:
* Invalid IP → HTTP 400
* Invalid mode → HTTP 400
* Command failure → HTTP 500
---
### 6. Internet Connection Info
* **GET** `/nettools/internetinfo`
Fetches public internet connection information using `ipinfo.io`.
**Response** (success):
```json
{
"success":true,
"output":"IP: 203.0.113.5 City: Sydney Country: AU Org: Example ISP"
}
```
**Error Responses**:
* Failed request or empty response → HTTP 500
---
## Example `curl` Requests
**Wake-on-LAN**:
```sh
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/nettools/wakeonlan"\
-H "Authorization: Bearer <API_TOKEN>"\
-H "Content-Type: application/json"\
--data '{"devMac":"AA:BB:CC:DD:EE:FF"}'
```
**Traceroute**:
```sh
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/nettools/traceroute"\
> Some of these endpoints will be deprecated soon. Please refere to the new [API](API.md) endpoints docs for details on the new API layer.
NetAlertX comes with a couple of API endpoints. All requests need to be authorized (executed in a logged in browser session) or you have to pass the value of the `API_TOKEN` settings as authorization bearer, for example:
- The `query` parameter contains the GraphQL query as a string.
- The `variables` parameter contains the input variables for the query.
2.**Query Variables**:
-`page`: Specifies the page number of results to fetch.
-`limit`: Specifies the number of results per page.
-`sort`: Specifies the sorting options, with `field` being the field to sort by and `order` being the sort order (`asc` for ascending or `desc` for descending).
-`search`: A search term to filter the devices.
-`status`: The status filter to apply (valid values are `my_devices` (determined by the `UI_MY_DEVICES` setting), `connected`, `favorites`, `new`, `down`, `archived`, `offline`).
3.**`curl` Command**:
- The `-X POST` option specifies that we are making a POST request.
- The `-H "Content-Type: application/json"` option sets the content type of the request to JSON.
- The `-d` option provides the request payload, which includes the GraphQL query and variables.
### Sample Response
The response will be in JSON format, similar to the following:
```json
{
"data":{
"devices":{
"devices":[
{
"rowid":1,
"devMac":"00:11:22:33:44:55",
"devName":"Device 1",
"devOwner":"Owner 1",
"devType":"Type 1",
"devVendor":"Vendor 1",
"devLastConnection":"2025-01-01T00:00:00Z",
"devStatus":"connected"
},
{
"rowid":2,
"devMac":"66:77:88:99:AA:BB",
"devName":"Device 2",
"devOwner":"Owner 2",
"devType":"Type 2",
"devVendor":"Vendor 2",
"devLastConnection":"2025-01-02T00:00:00Z",
"devStatus":"connected"
}
],
"count":2
}
}
}
```
## API Endpoint: JSON files
This API endpoint retrieves static files, that are periodically updated.
- Port: `20211` or as defined by the $PORT docker environment variable (same as the port for the web ui)
### When are the endpoints updated
The endpoints are updated when objects in the API endpoints are changed.
### Location of the endpoints
In the container, these files are located under the API directory (default: `/tmp/api/`, configurable via `NETALERTX_API` environment variable). You can access them via the `/php/server/query_json.php?file=user_notifications.json` endpoint.
### Available endpoints
You can access the following files:
| File name | Description |
|----------------------|----------------------|
| `notification_json_final.json` | The json version of the last notification (e.g. used for webhooks - [sample JSON](https://github.com/jokob-sk/NetAlertX/blob/main/front/report_templates/webhook_json_sample.json)). |
| `table_devices.json` | All of the available Devices detected by the app. |
| `table_plugins_events.json` | The list of the unprocessed (pending) notification events (plugins_events DB table). |
| `table_plugins_history.json` | The list of notification events history. |
| `table_plugins_objects.json` | The content of the plugins_objects table. Find more info on the [Plugin system here](https://github.com/jokob-sk/NetAlertX/tree/main/docs/PLUGINS.md)|
| `language_strings.json` | The content of the language_strings table, which in turn is loaded from the plugins `config.json` definitions. |
| `table_custom_endpoint.json` | A custom endpoint generated by the SQL query specified by the `API_CUSTOM_SQL` setting. |
| `table_settings.json` | The content of the settings table. |
| `app_state.json` | Contains the current application state. |
### JSON Data format
The endpoints starting with the `table_` prefix contain most, if not all, data contained in the corresponding database table. The common format for those is:
```JSON
{
"data":[
{
"db_column_name":"data",
"db_column_name2":"data2"
},
{
"db_column_name":"data3",
"db_column_name2":"data4"
}
]
}
```
Example JSON of the `table_devices.json` endpoint with two Devices (database rows):
* **Port**: as configured in the `GRAPHQL_PORT` setting (`20212` by default)
---
### Example Output of the `/metrics` Endpoint
Below is a representative snippet of the metrics you may find when querying the `/metrics` endpoint for `netalertx`. It includes both aggregate counters and `device_status` labels per device.
Manage the **online history records** of devices. Currently, the API supports deletion of all history entries. All endpoints require **authorization**.
---
## 1. Delete Online History
* **DELETE** `/history`
Remove **all records** from the online history table (`Online_History`). This operation **cannot be undone**.
Retrieve application settings stored in the configuration system. This endpoint is useful for quickly fetching individual settings such as `API_TOKEN` or `TIMEZONE`.
For bulk or structured access (all settings, schema details, or filtering), use the [GraphQL API Endpoint](API_GRAPHQL.md).
---
### Get a Setting
* **GET** `/settings/<key>` → Retrieve the value of a specific setting
**Path Parameter:**
*`key` → The setting key to retrieve (e.g., `API_TOKEN`, `TIMEZONE`)
**Authorization:**
Requires a valid API token in the `Authorization` header.
* This endpoint is optimized for **direct retrieval of a single setting**.
* For **complex retrieval scenarios** (listing all settings, retrieving schema metadata like `setName`, `setDescription`, `setType`, or checking if a setting is overridden by environment variables), use the **GraphQL Settings Query**:
The `/sync` endpoint is used by the **SYNC plugin** to synchronize data between multiple NetAlertX instances (e.g., from a node to a hub). It supports both **GET** and **POST** requests.
#### 9.1 GET `/sync`
Fetches data from a node to the hub. The data is returned as a **base64-encoded JSON file**.
*`data_base64` contains the full JSON data encoded in Base64.
*`node_name` corresponds to the `SYNC_node_name` setting on the node.
* Errors (e.g., missing file) return HTTP 500 with an error message.
---
#### 9.2 POST `/sync`
The **POST** endpoint is used by nodes to **send data to the hub**. The hub expects the data as **form-encoded fields** (application/x-www-form-urlencoded or multipart/form-data). The hub then stores the data in the plugin log folder for processing.
| `data` | string | The payload from the plugin or devices. Typically **plain text**, **JSON**, or **encrypted Base64** data. In your Python script, `encrypt_data()` is applied before sending. |
| `node_name` | string | The name of the node sending the data. Matches the node’s `SYNC_node_name` setting. Used to generate the filename on the hub. |
| `plugin` | string | The name of the plugin sending the data. Determines the filename prefix (`last_result.<plugin>...`). |
| `file_path` | string (optional) | Path of the local file being sent. Used only for logging/debugging purposes on the hub; **not required for processing**. |
---
### How the Hub Processes the POST Data
1.**Receives the data** and validates the API token.
> To backup 99% of your configuration backup at least the `/config` folder. Please read the whole page (or at least "Scenario 2: Corrupted database") for details.
> To backup 99% of your configuration, backup at least the `/data/config` folder.
> Database definitions can change between releases, so the safest method is to restore backups using the **same app version** they were taken from, then upgrade incrementally.
There are 3 artifacts that can be used to backup the application:
| `/db/app.db` | The application database | Might be in an uncommitted state or corrupted |
| `/config/app.conf` | Configuration file | Can be overridden using the [`APP_CONF_OVERRIDE`](https://github.com/jokob-sk/NetAlertX/tree/main/dockerfiles#docker-environment-variables) variable |
| `/config/devices.csv` | CSV file containing device data | Does not include historical data |
Understanding where your data is stored helps you plan your backup strategy.
### Core Configuration
The core application configuration is in the `pialert.conf` file (See [Settings System](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/SETTINGS_SYSTEM.md) for details), such as:
Stored in `/data/config/app.conf`.
This includes settings for:
- Notification settings
- Scanner settings
- Scheduled maintenance settings
- UI configuration (80%)
* Notifications
* Scanning
* Scheduled maintenance
* UI preferences
### Core Device Data
(See [Settings System](./SETTINGS_SYSTEM.md) for details.)
The core device data is backed up to the `devices_<timestamp>.csv` file via the [CSV Backup `CSVBCKP` Plugin](https://github.com/jokob-sk/Pi.Alert/tree/main/front/plugins/csv_backup). This file contains data, such as:
### Device Data
- Device names
- Device Icons
- Device Network configuration
- Device categorization
Stored in `/data/config/devices_<timestamp>.csv` or `/data/config/devices.csv`, created by the [CSV Backup `CSVBCKP` Plugin](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/csv_backup).
Contains:
### Historical data
* Device names, icons, and categories
* Network configuration
* Custom properties
Historical data is stored in the `pialert.db` database (See [Database overview](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/DATABASE.md) for details). This data includes:
### Historical Data
- Plugin objects
- Plugin historical entries
- History of Events, Notifications, Workflow Events
- Presence History
Stored in `/data/db/app.db` (see [Database Overview](./DATABASE.md)).
Contains:
## 🧭 Backup strategies
* Plugin data and historical entries
* Event and notification history
* Device presence history
The safest approach to backups is to backup all of the above, by taking regular file system backups (I use [Kopia](https://github.com/kopia/kopia)).
---
Arguably, the most time is spent setting up the device list, so if only one file is kept I'd recommend to have a latest backup of the `devices_<timestamp>.csv` file, followed by the `pialert.conf` file.
## Backup Strategies
### Scenario 1: Full backup
The safest approach is to back up **both** the `/db` and `/config` folders regularly. Tools like [Kopia](https://github.com/kopia/kopia) make this simple and efficient.
End-result: Full restore
If you can only keep a few files, prioritize:
#### Source artifacts:
1. The latest `devices_<timestamp>.csv` or `devices.csv`
2.`app.conf`
3.`workflows.json`
-`/db/pialert.db` (uncorrupted)
-`/config/pialert.conf`
You can also download the `app.conf` and `devices.csv` files from the **Maintenance** section:
#### Recovery:

To restore the application map the above files as described in the [Setup documentation](https://github.com/jokob-sk/Pi.Alert/blob/main/dockerfiles/README.md#docker-paths).
---
## Scenario 1: Full Backup and Restore
### Scenario 2: Corrupted database
**Goal:** Full recovery of your configuration and data.
End-result: Partial restore (historical data & configurations from the Maintenance section will be missing)
### 💾 What to Back Up
#### Source artifacts:
*`/data/db/app.db` (uncorrupted)
*`/data/config/app.conf`
*`/data/config/workflows.json`
-`/config/pialert.conf`
-`/config/devices_<timestamp>.csv` or `/config/devices.csv`
### 📥 How to Restore
#### Recovery:
Map these files into your container as described in the [Setup documentation](./DOCKER_INSTALLATION.md).
Even with a corrupted database you can recover what I would argue is 99% of the configuration (except of a couple of settings under Maintenance).
---
- map the `/config/pialert.conf` file as described in the [Setup documentation](https://github.com/jokob-sk/Pi.Alert/blob/main/dockerfiles/README.md#docker-paths).
- rename the `devices_<timestamp>.csv` to `devices.csv` and place it in the `/config` folder
- Restore the `devices.csv` backup via the [Maintenance section](https://github.com/jokob-sk/Pi.Alert/blob/main/docs/DEVICES_BULK_EDITING.md)
## Scenario 2: Corrupted Database
**Goal:** Recover configuration and device data when the database is lost or corrupted.
### 💾 What to Back Up
*`/data/config/app.conf`
*`/data/config/workflows.json`
*`/data/config/devices_<timestamp>.csv` (rename to `devices.csv` during restore)
### 📥 How to Restore
1. Copy `app.conf` and `workflows.json` into `/data/config/`
2. Rename and place `devices_<timestamp>.csv` → `/data/config/devices.csv`
3. Restore via the **Maintenance** section under *Devices → Bulk Editing*
This recovers nearly all configuration, workflows, and device metadata.
---
## Docker-Based Backup and Restore
For users running NetAlertX via Docker, you can back up or restore directly from your host system — a convenient and scriptable option.
### Full Backup (File-Level)
1.**Stop the container:**
```bash
docker stop netalertx
```
2. **Create a compressed archive** of your configuration and database volumes:
```bash
docker run --rm -v local_path/config:/config -v local_path/db:/db alpine tar -cz /config /db > netalertx-backup.tar.gz
```
3. **Restart the container:**
```bash
docker start netalertx
```
### Restore from Backup
1. **Stop the container:**
```bash
docker stop netalertx
```
2. **Restore from your backup file:**
```bash
docker run --rm -i -v local_path/config:/config -v local_path/db:/db alpine tar -C / -xz < netalertx-backup.tar.gz
```
3. **Restart the container:**
```bash
docker start netalertx
```
> This approach uses a temporary, minimal `alpine` container to access Docker-managed volumes. The `tar` command creates or extracts an archive directly from your host’s filesystem, making it fast, clean, and reliable for both automation and manual recovery.
---
## Summary
* Back up `/data/config` for configuration and devices; `/data/db` for history
* Keep regular backups, especially before upgrades
* For Docker setups, use the lightweight `alpine`-based backup method for consistency and portability
NetAlertX provides different installation methods for different needs. This guide helps you choose the right path for security, experimentation, or development.
## 1. Hardened Appliance (Default Production)
> [!NOTE]
> Use this image if: You want to use NetAlertX securely.
### Who is this for?
All users who want a stable, secure, "set-it-and-forget-it" appliance.
### Methodology
- Multi-stage Alpine build
- Aggressively "amputated"
- Locked down for max security
### Source
`Dockerfile (hardened target)`
## 2. "Tinkerer's" Image (Insecure VM-Style)
> [!NOTE]
> Use this image if: You want to experiment with NetAlertX.
### Who is this for?
Power users, developers, and "tinkerers" wanting a familiar "VM-like" experience.
> Use this image if: You want to develop NetAlertX itself.
### Who is this for?
Project contributors who are actively writing and debugging code for NetAlertX.
### Methodology
- Builds `FROM runner` stage
- Loaded by VS Code
- Full debug tools: `xdebug`, `pytest`
### Source
`Dockerfile (devcontainer target)`
# Visualizing the Trade-Offs
This chart compares the three builds across key attributes. A higher score means "more of" that attribute. Notice the clear trade-offs between security and development features.
The final images originate from two different files and build paths. The main `Dockerfile` uses stages to create *both* the hardened and development container images.
> Before troubleshooting, ensure you have set the correct [Debugging and LOG_LEVEL](./DEBUG_TIPS.md).
---
## Docker Container Doesn't Start
Initial setup issues are often caused by **missing permissions** or **incorrectly mapped volumes**. Always double-check your `docker run` or `docker-compose.yml` against the [official setup guide](./DOCKER_INSTALLATION.md) before proceeding.
### Permissions
Make sure your [file permissions](./FILE_PERMISSIONS.md) are correctly set:
* If you encounter AJAX errors, cannot write to the database, or see an empty screen, check that permissions are correct and review the logs under `/tmp/log`.
* To fix permission issues with the database, update the owner and group of `app.db` as described in the [File Permissions guide](./FILE_PERMISSIONS.md).
### Container Restarts / Crashes
* Check the logs for details. Often, required settings are missing.
* For more detailed troubleshooting, see [Debug and Troubleshooting Tips](./DEBUG_TIPS.md).
* To observe errors directly, run the container in the foreground instead of `-d`:
```bash
docker run --rm -it <your_image>
```
---
## Docker Container Starts, But the Application Misbehaves
If the container starts but the app shows unexpected behavior, the cause is often **data corruption**, **incorrect configuration**, or **unexpected input data**.
### Continuous "Loading..." Screen
A misconfigured application may display a persistent `Loading...` dialog. This is usually caused by the backend failing to start.
**Steps to troubleshoot:**
1. Check **Maintenance → Logs** for exceptions.
2. If no exception is visible, check the Portainer logs.
3. Start the container in the foreground to observe exceptions.
4. Enable `trace` or `debug` logging for detailed output (see [Debug Tips](./DEBUG_TIPS.md)).
5. Verify that `GRAPHQL_PORT` is correctly configured.
6. Check browser logs (press `F12`):
* **Console tab** → refresh the page
* **Network tab** → refresh the page
If you are unsure how to resolve errors, provide screenshots or log excerpts in your issue report or Discord discussion.
---
### Common Configuration Issues
#### Incorrect `SCAN_SUBNETS`
If `SCAN_SUBNETS` is misconfigured, you may see only a few devices in your device list after a scan. See the [Subnets Documentation](./SUBNETS.md) for proper configuration.
#### Duplicate Devices and Notifications
* Devices are identified by their **MAC address**.
* If a device's MAC changes, it will be treated as a new device, triggering notifications.
* Prevent this by adjusting your device configuration for Android, iOS, or Windows. See the [Random MACs Guide](./RANDOM_MAC.md).
#### Unable to Resolve Host
* Ensure `SCAN_SUBNETS` uses the correct mask and `--interface`.
* Refer to the [Subnets Documentation](./SUBNETS.md) for detailed guidance.
#### Invalid JSON Errors
* Follow the steps in [Invalid JSON Errors Debug Help](./DEBUG_INVALID_JSON.md).
#### Sudo Execution Fails (e.g., on arpscan on Raspberry Pi 4)
Use the official installation guides at first and use community content as supplementary material. Open an issue or PR if you'd like to add your link to the list 🙏 (Ordered by last update time)
- ▶ [Discover & Monitor Your Network with This Self-Hosted Open Source Tool - Lawrence Systems](https://www.youtube.com/watch?v=R3b5cxLZMpo) (June 2025)
- 📄 [How to Install NetAlertX on Your Synology NAS - Marius hosting](https://mariushosting.com/how-to-install-pi-alert-on-your-synology-nas/) (Updated frequently)
- 📄 [Using the PiAlert Network Security Scanner on a Raspberry Pi - PiMyLifeUp](https://pimylifeup.com/raspberry-pi-pialert/)
- ▶ [How to Setup Pi.Alert on Your Synology NAS - Digital Aloha](https://www.youtube.com/watch?v=M4YhpuRFaUg)
- 📄 [시놀/헤놀에서 네트워크 스캐너 Pi.Alert Docker로 설치 및 사용하기](https://blog.dalso.org/article/%EC%8B%9C%EB%86%80-%ED%97%A4%EB%86%80%EC%97%90%EC%84%9C-%EB%84%A4%ED%8A%B8%EC%9B%8C%ED%81%AC-%EC%8A%A4%EC%BA%90%EB%84%88-pi-alert-docker%EB%A1%9C-%EC%84%A4%EC%B9%98-%EB%B0%8F-%EC%82%AC%EC%9A%A9) (July 2023)
- ▶ [Pi.Alert auf Synology & Docker by - Jürgen Barth](https://www.youtube.com/watch?v=-ouvA2UNu-A) (March 2023)
- ▶ [Top Docker Container for Home Server Security - VirtualizationHowto](https://www.youtube.com/watch?v=tY-w-enLF6Q) (March 2023)
- ▶ [Pi.Alert or WatchYourLAN can alert you to unknown devices appearing on your WiFi or LAN network - Danie van der Merwe](https://www.youtube.com/watch?v=v6an9QG2xF0) (November 2022)
This functionality allows you to define **custom properties** for devices, which can store and display additional information on the device listing page. By marking properties as "Show", you can enhance the user interface with quick actions, notes, or external links.
### Key Features:
- **Customizable Properties**: Define specific properties for each device.
- **Visibility Control**: Choose which properties are displayed on the device listing page.
- **Interactive Elements**: Include actions like links, modals, and device management directly in the interface.
---
## Defining Custom Properties
Custom properties are structured as a list of objects, where each property includes the following fields:
Visible properties (`CUSTPROP_show: true`) are displayed as interactive icons in the device listing. Each icon can perform one of the following actions based on the `CUSTPROP_type`:
1.**Modals (e.g., Show Notes)**:
- Displays detailed information in a popup modal.
- Example: Firmware version details.
2.**Links**:
- Redirect to an external or internal URL.
- Example: Open a device's documentation or external site.
3.**Device Actions**:
- Manage devices with actions like delete.
- Example: Quickly remove a device from the network.
4.**Plugins**:
- Future placeholder for running custom plugin scripts.
- **Note**: Not implemented yet.
---
## Example Use Cases
1.**Device Documentation Link**:
- Add a custom property with `CUSTPROP_type` set to `link` or `link_new_tab` to allow quick navigation to the external documentation of the device.
2.**Firmware Details**:
- Use `CUSTPROP_type: show_notes` to display firmware versions or upgrade instructions in a modal.
3.**Device Removal**:
- Enable device removal functionality using `CUSTPROP_type: delete_dev`.
---
## Notes
- **Plugin Functionality**: The `run_plugin` action type is currently not implemented and will show an alert if used.
- **Custom Icons (Experimental 🧪)**: Use Base64-encoded HTML to provide custom icons for each property. You can add your icons in Setttings via the `CUSTPROP_icon` settings
- **Visibility Control**: Only properties with `CUSTPROP_show: true` will appear on the listing page.
This feature provides a flexible way to enhance device management and display with interactive elements tailored to your needs.
# A high-level description of the datbase structure
# A high-level description of the database structure
⚠ Disclaimer: As I'm not the original author, some of the information might be inaccurate. Feel free to submit a PR to correct anything within this page or documentation in general.
An overview of the most important database tables as well as an detailed overview of the Devices table. The MAC address is used as a foreign key in most cases.
The MAC address is used as a foreign key in most cases.
| `devMac` | MAC address of the device. | `00:1A:2B:3C:4D:5E` |
| `devName` | Name of the device. | `iPhone 12` |
| `devOwner` | Owner of the device. | `John Doe` |
| `devType` | Type of the device (e.g., phone, laptop, etc.). If set to a network type (e.g., switch), it will become selectable as a Network Parent Node. | `Laptop` |
| `devVendor` | Vendor/manufacturer of the device. | `Apple` |
| `devFavorite` | Whether the device is marked as a favorite. | `1` |
| `devGroup` | Group the device belongs to. | `Home Devices` |
| `devComments` | User comments or notes about the device. | `Used for work purposes` |
| `devFirstConnection` | Timestamp of the device's first connection. | `2025-03-22 12:07:26+11:00` |
| `devLastConnection` | Timestamp of the device's last connection. | `2025-03-22 12:07:26+11:00` |
| `devLastIP` | Last known IP address of the device. | `192.168.1.5` |
| `devStaticIP` | Whether the device has a static IP address. | `0` |
| `devScan` | Whether the device should be scanned. | `1` |
| `devLogEvents` | Whether events related to the device should be logged. | `0` |
| `devAlertEvents` | Whether alerts should be generated for events. | `1` |
| `devAlertDown` | Whether an alert should be sent when the device goes down. | `0` |
| `devSkipRepeated` | Whether to skip repeated alerts for this device. | `1` |
| `devLastNotification` | Timestamp of the last notification sent for this device. | `2025-03-22 12:07:26+11:00` |
| `devPresentLastScan` | Whether the device was present during the last scan. | `1` |
| `devIsNew` | Whether the device is marked as new. | `0` |
| `devLocation` | Physical or logical location of the device. | `Living Room` |
| `devIsArchived` | Whether the device is archived. | `0` |
| `devParentMAC` | MAC address of the parent device (if applicable) to build the [Network Tree](./NETWORK_TREE.md). | `00:1A:2B:3C:4D:5F` |
| `devParentPort` | Port of the parent device to which this device is connected. | `Port 3` |
| `devIcon` | [Icon](./ICONS.md) representing the device. The value is a base64-encoded SVG or Font Awesome HTML tag. | `PHN2ZyB...` |
| `devGUID` | Unique identifier for the device. | `a2f4b5d6-7a8c-9d10-11e1-f12345678901` |
| `devSite` | Site or location where the device is registered. | `Office` |
| `devSSID` | SSID of the Wi-Fi network the device is connected to. | `HomeNetwork` |
| `devSyncHubNode` | The NetAlertX node ID used for synchronization between NetAlertX instances. | `node_1` |
| `devSourcePlugin` | Source plugin that discovered the device. | `ARPSCAN` |
| `devCustomProps` | [Custom properties](./CUSTOM_PROPERTIES.md) related to the device. The value is a base64-encoded JSON object. | `PHN2ZyB...` |
| `devParentRelType` | The type of relationship between the current device and it's parent node. By default, selecting `nic` will hide it from lists. | `nic` |
| `devReqNicsOnline` | If all NICs are required to be online to mark teh current device online. | `0` |
To understand how values of these fields influuence application behavior, such as Notifications or Network topology, see also:
- [Device Management](./DEVICE_MANAGEMENT.md)
- [Network Tree Topology Setup](./NETWORK_TREE.md)
| Events | Used to collect connection/disconnection events. | ![Screen4][screen4] |
| Online_History | Used to display the `Device presence` chart | ![Screen6][screen6] |
| Parameters | Used to pass values between the frontend and backend. | ![Screen7][screen7] |
| Pholus_Scan | Scan results of the Pholus python network penetration script. | ![Screen8][screen8] |
| Plugins_Events | For capturing events exposed by a plugin via the `last_result.log` file. If unique then saved into the `Plugins_Objects` table. Entries are deleted once processed and stored in the `Plugins_History` and/or `Plugins_Objects` tables. | ![Screen10][screen10] |
| Plugins_History | History of all entries from the `Plugins_Events` table | ![Screen11][screen11] |
| Plugins_Language_Strings | Language strings colelcted from the plugin `config.json` files used for string resolution in the frontend. | ![Screen12][screen12] |
| Plugins_Language_Strings | Language strings collected from the plugin `config.json` files used for string resolution in the frontend. | ![Screen12][screen12] |
| Sessions | Used to display sessions in the charts | ![Screen15][screen15] |
| Settings | Database representation of the sum of all settings from `pialert.conf` and plugins coming from `config.json` files. | ![Screen16][screen16] |
| Settings | Database representation of the sum of all settings from `app.conf` and plugins coming from `config.json` files. | ![Screen16][screen16] |
Some files were not shown because too many files have changed in this diff
Show More
Reference in New Issue
Block a user
Blocking a user prevents them from interacting with repositories, such as opening or commenting on pull requests or issues. Learn more about blocking a user.