Compare commits


403 Commits

Author SHA1 Message Date
jokob-sk
ba3481759b DOCS: Migration callouts
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-29 16:50:06 +11:00
jokob-sk
7125cea29b DOCS: DB + config -> /data
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-29 16:19:13 +11:00
jokob-sk
8586c5a307 FE: delay UI_DEFAULT_PAGE_SIZE setting check until after cache rebuild #1181
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-29 15:45:28 +11:00
jokob-sk
0d81315809 PLG: PIHOLEAPI FAKE MAC #1282
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-29 14:18:54 +11:00
jokob-sk
8f193f1e2c Merge branch 'main' of https://github.com/jokob-sk/NetAlertX 2025-11-29 13:52:04 +11:00
jokob-sk
b1eef8aa09 PLG: PIHOLEAPI FAKE MAC #1282
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-29 13:51:16 +11:00
Massimo Pissarello
2da17f272c Translated using Weblate (Italian)
Currently translated at 100.0% (763 of 763 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/it/
2025-11-28 09:00:12 +01:00
jokob-sk
7bcb4586b2 FE: regex validation for cron run schedules
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-27 12:21:12 +11:00
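
The commit above adds front-end regex validation for cron run schedules, but the actual pattern is not part of this log. As a rough illustration only, a five-field cron string can be sanity-checked with a pattern along these lines (the pattern and function name are assumptions, not NetAlertX code):

```python
import re

# Hypothetical sketch: validate a standard five-field cron schedule.
# Each field may be "*", a number, a range, a step, or a comma-separated list.
CRON_FIELD = r"(\*|\d+)(-\d+)?(/\d+)?"
CRON_PART = rf"{CRON_FIELD}(,{CRON_FIELD})*"
CRON_RE = re.compile(rf"^\s*{CRON_PART}(\s+{CRON_PART}){{4}}\s*$")

def is_valid_cron(schedule: str) -> bool:
    """Return True if the string looks like a five-field cron expression."""
    return bool(CRON_RE.fullmatch(schedule))

print(is_valid_cron("*/5 * * * *"))   # True
print(is_valid_cron("0 3 * * 1-5"))   # True
print(is_valid_cron("every day"))     # False
```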
jokob-sk
d3326b3362 FE: weblate
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-27 12:12:21 +11:00
jokob-sk
b9d3f430fe FE: regex validation for cron run schedules
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-27 12:10:33 +11:00
Carlos M. Silva
067336dcc1 Translated using Weblate (Portuguese (Portugal))
Currently translated at 68.2% (520 of 762 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/pt_PT/
2025-11-26 20:15:22 +01:00
jokob-sk
8acb0a876a DOCS: cleanup
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-26 10:20:19 +11:00
jokob-sk
d1be41eca4 DOCS: cleanup
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-26 10:02:15 +11:00
jokob-sk
00e953a7ce DOCS: cleanup
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-26 09:52:12 +11:00
jokob-sk
b9ef9ad041 DOCS: tmpfs cleanup
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-26 09:25:37 +11:00
jokob-sk
e90fbf17d3 DOCS: Network parent
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-25 08:16:39 +11:00
jokob-sk
139447b253 BE: mylog() better code readability
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-25 07:54:17 +11:00
Jokob @NetAlertX
fa9fc2c8e3 Merge pull request #1304 from adamoutler/hadolint-fixes
Fix Hadolint Linting Issues Across Dockerfiles
2025-11-24 17:28:50 +11:00
Adam Outler
30071c6848 Merge branch 'main' into hadolint-fixes 2025-11-23 19:25:45 -05:00
Adam Outler
b0bd3c8191 fix hadolint errors 2025-11-24 00:20:42 +00:00
Jokob @NetAlertX
c753da9e15 Merge pull request #1303 from adamoutler/shell-check-fixes
ShellCheck Lint: Fix All Reported Issues in Service Scripts
2025-11-24 10:24:54 +11:00
Adam Outler
4770ee5942 undo previous change for unwritable 2025-11-23 23:19:12 +00:00
Adam Outler
5cd53bc8f9 Storage permission fix 2025-11-23 22:58:45 +00:00
Adam Outler
5e47ccc9ef Shell Check fixes 2025-11-23 22:13:01 +00:00
Jokob @NetAlertX
f5d7c0f9a0 Merge pull request #1302 from adamoutler/supercronic
Replace crond with Supercronic, improve cron logging & backend restart behavior
2025-11-24 07:29:50 +11:00
Adam Outler
35b7e80be4 Remove additional "tests" from instructions. 2025-11-23 16:43:28 +00:00
Adam Outler
07eeac0a0b remove redefined variable 2025-11-23 16:38:03 +00:00
Adam Outler
240d86bf1e docker tests 2025-11-23 16:31:04 +00:00
Adam Outler
274fd50a92 Adjust healthchecks and fix docker test scripts 2025-11-23 15:56:42 +00:00
Adam Outler
bbf49c3686 Don't kill container on backend restart commanded 2025-11-23 01:27:51 +00:00
Adam Outler
e3458630ba Convert from crond to supercronic 2025-11-23 01:14:21 +00:00
Jokob @NetAlertX
2f6f1e49e9 Merge pull request #1300 from jokob-sk/linting-fixes
BE: linting fixes
2025-11-22 21:55:54 +11:00
Jokob @NetAlertX
4f5a40ffce lint and test fixes 2025-11-22 10:52:12 +00:00
jokob-sk
f5aea55b29 BE: linting fixes 5
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-22 21:30:12 +11:00
jokob-sk
e3e7e2f52e BE: linting fixes 4
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-22 21:20:46 +11:00
jokob-sk
872ac1ce0f BE: linting fixes 3
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-22 21:06:03 +11:00
jokob-sk
ebeb7a07af BE: linting fixes 2
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-22 20:43:36 +11:00
jokob-sk
5c14b34a8b BE: linting fixes
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-22 13:14:06 +11:00
jokob-sk
f0abd500d9 BE: test fixes
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-21 05:54:19 +11:00
jokob-sk
8503cb86f1 BE: test fixes
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-21 05:43:30 +11:00
jokob-sk
5f0b670a82 LNG: weblate add Japanese
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-21 05:28:43 +11:00
jokob-sk
9df814e351 BE: github action better code check
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-20 17:47:25 +11:00
jokob-sk
88509ce8c2 PLG: NMAPDEV better FAKE_MAC description
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-20 17:47:00 +11:00
Jokob @NetAlertX
995c371f48 Merge pull request #1299 from adamoutler/main
feat: docker-based testing
2025-11-20 17:40:10 +11:00
Adam Outler
aee5e04b9f fix(ci): Correct quoting in code_checks workflow (again) 2025-11-20 05:01:08 +01:00
Adam Outler
e0c96052bb fix(ci): Correct quoting in code_checks workflow 2025-11-20 04:37:35 +01:00
Adam Outler
fd5235dd0a CI Checks
Uses the new run_docker_tests.sh script which is self-contained and handles all dependencies and test execution within a Docker container. This ensures that the CI environment is consistent with the local devcontainer environment.

Fixes an issue where the job name 'test' was considered invalid. Renamed to 'docker-tests'.
Ensures that tests marked as 'feature_complete' are also excluded from the test run.
2025-11-20 04:34:59 +01:00
Adam Outler
f3de66a287 feat: Add run_docker_tests.sh for CI/CD and local testing
Introduces a comprehensive script to build, run, and test NetAlertX within a Dockerized devcontainer environment, replicating the setup defined in . This script ensures consistency for CI/CD pipelines and local development.

The script addresses several environmental challenges:
- Properly builds the  Docker image.
- Starts the container with necessary capabilities and host-gateway.
- Installs Python test dependencies (, , ) into the virtual environment.
- Executes the  script to initialize services.
- Implements a healthcheck loop to wait for services to become fully operational before running tests.
- Configures  to use a writable cache directory () to avoid permission issues.
- Includes a workaround to insert a dummy 'internet' device into the database, resolving a flakiness in  caused by its reliance on unpredictable database state without altering any project code.

This script ensures a green test suite, making it suitable for automated testing in environments like GitHub Actions.
2025-11-20 04:19:30 +01:00
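
The description above mentions a healthcheck loop that waits for services to become fully operational before tests run. A minimal sketch of such a polling loop, assuming a hypothetical local endpoint rather than whatever the real script targets:

```python
import time
import urllib.request

def wait_until_healthy(url: str, timeout: float = 120.0, interval: float = 2.0) -> bool:
    """Poll a health endpoint until it answers with HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # service not up yet; try again after a short pause
        time.sleep(interval)
    return False

# Hypothetical endpoint; the real script waits on whatever the container exposes.
if not wait_until_healthy("http://localhost:20211/"):
    raise SystemExit("services did not become healthy in time")
```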
Jokob @NetAlertX
9a4fb35ea5 Refine labels and descriptions in issue template
Updated labels and descriptions for issue template fields to improve clarity and formatting.
2025-11-18 13:59:34 +11:00
Jokob @NetAlertX
a1ad904042 Enhance issue template with Docker logs instructions
Added instructions for pasting Docker logs in the issue template.
2025-11-18 13:54:59 +11:00
Jokob @NetAlertX
81ff1da756 Merge pull request #1289 from adamoutler/more-code-checks
Improve CI code checks (URL path, Python syntax, linting, tests)
2025-11-18 13:43:07 +11:00
Jokob @NetAlertX
85c9b0b99b Merge pull request #1296 from adamoutler/patch-6
Update Docker Compose documentation for volume usage
2025-11-18 13:42:27 +11:00
Adam Outler
4ccac66a73 Update Docker Compose documentation for volume usage
Clarified the preferred volume layout for NetAlertX and explained the bind mount alternative.
2025-11-17 18:31:37 -05:00
Jokob @NetAlertX
c7b9fdaff2 Merge pull request #1291 from adamoutler/test-fixes
Test fixes
2025-11-18 09:47:21 +11:00
Jokob @NetAlertX
c7dcc20a1d Merge pull request #1295 from adamoutler/main
Add VERSION file creation
2025-11-18 09:46:39 +11:00
Adam Outler
bb365a5e81 UID 20212 for read only before definition. 2025-11-17 20:57:18 +00:00
Adam Outler
e2633d0251 Update from docker v3 to v6 2025-11-17 20:54:18 +00:00
Adam Outler
09c40e76b2 No git in Dockerfile generation. 2025-11-17 20:47:11 +00:00
Adam Outler
abc3e71440 Remove redundant chown; read only version. 2025-11-17 20:45:52 +00:00
Adam Outler
d13596c35c Coderabbit suggestion 2025-11-17 20:27:27 +00:00
Adam Outler
7d5dcf061c Add VERSION file creation 2025-11-17 15:18:41 -05:00
Adam Outler
6206e483a9 Remove files that shouldn't be in PR: db.php, cron files 2025-11-17 02:57:42 +00:00
Adam Outler
f1ecc61de3 Tests Passing 2025-11-17 02:45:42 +00:00
Jokob @NetAlertX
92a6a3a916 Merge pull request #1290 from adamoutler/get-version-out-of-commits
Add .VERSION to gitignore
2025-11-17 13:35:24 +11:00
Adam Outler
8a89f3b340 Remove VERSION file from repo and generate dynamic 2025-11-17 02:18:00 +00:00
Adam Outler
a93e87493f Update Python setup action to version 5 2025-11-16 20:33:53 -05:00
Adam Outler
c7032bceba Upgrade Python setup action and dependencies
Updated Python setup action to version 5 and specified Python version 3.11. Also modified dependencies installation to include pyyaml.
2025-11-16 20:32:08 -05:00
Adam Outler
0cd7528284 Fix cron restart 2025-11-17 00:20:08 +00:00
Adam Outler
2309b8eb3f Add linting and testing steps to workflow 2025-11-16 18:58:20 -05:00
jokob-sk
dbd1bdabc2 PLG: NMAP make param handling more robust #1288
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-16 10:16:23 +11:00
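
The commit above makes NMAP parameter handling more robust, but the diff is not reproduced in this log. One common approach is to tokenize user-supplied scanner arguments with shlex and drop anything outside an allow-list; the sketch below is illustrative only, with a hypothetical allow-list and function name:

```python
import shlex

# Illustrative only: the actual NMAP plugin logic is not reproduced here.
ALLOWED_FLAGS = {"-sS", "-sU", "-p", "-T4", "--top-ports"}  # hypothetical allow-list

def sanitize_nmap_args(raw: str) -> list[str]:
    """Split a user-supplied argument string safely and drop unknown flags."""
    args = []
    for token in shlex.split(raw or ""):
        if token.startswith("-") and token not in ALLOWED_FLAGS:
            continue  # silently drop flags that are not explicitly allowed
        args.append(token)
    return args

print(sanitize_nmap_args("-sS --top-ports 100 --script=vuln"))
# ['-sS', '--top-ports', '100']
```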
jokob-sk
093d595fc5 DOCS: path cleanup, TZ removal
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-16 09:26:18 +11:00
jokob-sk
c38758d61a PLG: PIHOLEAPI skipping invalid macs #1282
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-15 13:48:18 +11:00
jokob-sk
6034b12af6 FE: better isBase64 check
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-15 13:36:50 +11:00
jokob-sk
972654dc78 PLG: PIHOLEAPI #1282
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-15 13:36:22 +11:00
jokob-sk
ec417b0dac BE: REMOVAL dev workflow
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-14 22:33:42 +11:00
jokob-sk
2e9352dc12 BE: dev workflow
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-14 22:29:32 +11:00
Jokob @NetAlertX
566b263d0a Run Unit tests in GitHub workflows 2025-11-14 11:22:58 +00:00
Jokob @NetAlertX
61b42b4fea BE: Fixed or removed failing tests - can be re-added later 2025-11-14 11:18:56 +00:00
Jokob @NetAlertX
a45de018fb BE: Test fixes 2025-11-14 10:46:35 +00:00
Jokob @NetAlertX
bfe6987867 BE: before_name_updates change #1251 2025-11-14 10:07:47 +00:00
jokob-sk
b6567ab5fc BE: NEWDEV setting to disable IP match for names
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-13 20:22:34 +11:00
jokob-sk
f71c2fbe94 Merge branch 'main' of https://github.com/jokob-sk/NetAlertX 2025-11-13 18:29:22 +11:00
Jokob @NetAlertX
aeb03f50ba Merge pull request #1287 from adamoutler/main
Add missing .VERSION file
2025-11-13 13:26:49 +11:00
Adam Outler
734db423ee Add missing .VERSION file 2025-11-13 00:35:06 +00:00
Jokob @NetAlertX
4f47dbfe14 Merge pull request #1286 from adamoutler/port-fixes
Fix: Fix for ports
2025-11-13 08:23:46 +11:00
Adam Outler
d23bf45310 Merge branch 'jokob-sk:main' into port-fixes 2025-11-12 15:02:36 -05:00
Adam Outler
9c366881f1 Fix for ports 2025-11-12 12:02:31 +00:00
jokob-sk
9dd482618b DOCS: MTSCAN - mikrotik missing from docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-12 21:07:51 +11:00
HAMAD ABDULLA
84cc01566d Translated using Weblate (Arabic)
Currently translated at 88.0% (671 of 762 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/ar/
2025-11-11 20:51:21 +00:00
jokob-sk
ac7b912b45 BE: link to server in reports #1267, new /tmp/api path for SYNC plugin
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-11 23:33:57 +11:00
jokob-sk
62852f1b2f BE: link to server in reports #1267
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-11 23:18:20 +11:00
jokob-sk
b659a0f06d BE: link to server in reports #1267
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-11 23:09:28 +11:00
jokob-sk
fb3620a378 BE: Better upgrade message formatting
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-11 22:31:58 +11:00
jokob-sk
9d56e13818 FE: handling devName as number in network map #1281
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-11 08:16:36 +11:00
jokob-sk
43c5a11271 BE: dev workflow
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-11 07:53:19 +11:00
Jokob @NetAlertX
ac957ce599 Merge pull request #1271 from jokob-sk/next_release
Next release
2025-11-11 07:43:09 +11:00
jokob-sk
3567906fcd DOCS: migration docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-10 15:43:03 +11:00
jokob-sk
be6801d98f DOCS: migration docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-10 15:41:28 +11:00
jokob-sk
bb9b242d0a BE: fixing imports
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-10 13:20:11 +11:00
jokob-sk
5f27d3b9aa BE: fixing imports
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-10 12:47:21 +11:00
jokob-sk
93af0e9d19 BE: fixing imports
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-10 12:45:06 +11:00
Jokob @NetAlertX
398e2a896f Merge pull request #1280 from jokob-sk/pr-1279
Pr 1279
2025-11-10 10:15:46 +11:00
jokob-sk
a98bac331d MERGE: resolve conflicts
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-10 10:11:34 +11:00
jokob-sk
9f6086e5cf BE: better error message
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-10 09:27:13 +11:00
Adam Outler
c5a1f19567 Attempt to kick off coderabbit
Removed unnecessary blank lines in the nginx configuration template.
2025-11-09 16:56:47 -05:00
jokob-sk
6d70a8a71d BE: /logs endpoint, comments resolution, github template
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-10 07:58:21 +11:00
Adam Outler
4161261c43 Remove unused files 2025-11-09 17:38:31 +00:00
Adam Outler
179821a527 fix workspace 2025-11-09 17:34:31 +00:00
Adam Outler
2028b1a6e3 Merge remote-tracking branch 'origin/main' into data_and_tmp_standardization 2025-11-09 17:14:11 +00:00
Adam Outler
5b871865db /data and /tmp standardization 2025-11-09 17:03:25 +00:00
Ettore Atalan
76bcec335d Translated using Weblate (German)
Currently translated at 81.4% (621 of 762 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/de/
2025-11-09 15:51:16 +00:00
jokob-sk
8483a741b4 BE: LangStrings /graphql + /logs endpoint, utils chores
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-09 18:50:16 +11:00
jokob-sk
68c8e16828 PLG: cleanup
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-08 22:08:20 +11:00
jokob-sk
76150b2ca7 BE: github actions + dev version
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-08 22:02:55 +11:00
jokob-sk
5cf8a25bae BE: timestamp work name changes #1251
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-08 22:01:04 +11:00
Jokob @NetAlertX
593aa16f17 Merge pull request #1278 from alexhk/patch-1
Fix typo in Baseline Docker Compose - DOCKER_COMPOSE.md
2025-11-08 21:03:17 +11:00
alexhk
af9793c2ed Update DOCKER_COMPOSE.md
Assuming this was a typo
2025-11-08 09:12:21 +01:00
jokob-sk
552d2a8286 DOCS: plugin docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-08 14:16:17 +11:00
jokob-sk
7822b11d51 BE: plugins changed data detection
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-08 14:15:45 +11:00
jokob-sk
cbe5a4a732 BE: version added to app_state
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-06 22:08:19 +11:00
jokob-sk
58de31d0ea BE: prod workflow + docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-06 21:35:05 +11:00
jokob-sk
5c06dc68c6 DOCS: link fix
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-06 21:20:28 +11:00
jokob-sk
44d65cca96 BE: version file
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-06 21:12:13 +11:00
Pavel Borecki
71e0d13bef Translated using Weblate (Czech)
Currently translated at 8.2% (63 of 762 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/cs/
2025-11-06 10:51:14 +01:00
jokob-sk
30269a6a73 DOCS: link fix
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-06 20:47:54 +11:00
jokob-sk
6374219e05 BE: github actions + dev version
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-06 20:47:28 +11:00
jokob-sk
6e745fc6d1 DOCS: fix
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-06 08:14:13 +11:00
jokob-sk
85aa04c490 TEST: fix
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-06 08:14:00 +11:00
jokob-sk
1fd8d97d56 BE: chore datetime_utils
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-05 16:42:42 +11:00
jokob-sk
286d5555d2 BE: chore datetime_utils
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-05 16:14:03 +11:00
jokob-sk
57096a9258 FE: handling non-existent logs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-05 16:13:28 +11:00
jokob-sk
c08eb1dbba BE: chore datetime_utils
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-05 16:08:04 +11:00
jokob-sk
746f1a8922 DOCS: description fix and --exclude-broadcast documentation
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-05 15:26:57 +11:00
jokob-sk
0845b7f445 BE: name resolution did not apply regex cleanup
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-05 15:25:53 +11:00
Blueberry
a6fffe06b7 Translated using Weblate (Russian)
Currently translated at 100.0% (762 of 762 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/ru/
2025-11-04 21:51:12 +00:00
jokob-sk
ea8cea16c5 TEST: cleanup
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-04 20:01:27 +11:00
jokob-sk
5452b7287b BE/PLG: TZ timestamp work #1251
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-04 19:52:19 +11:00
jokob-sk
80d7ef7f24 BE/PLG: TZ timestamp work #1251
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-04 19:46:50 +11:00
jokob-sk
dc4da5b4c9 BE/PLG: TZ timestamp work #1251
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-04 19:44:30 +11:00
jokob-sk
59477e7b38 BE/PLG: TZ timestamp work #1251
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-04 19:24:13 +11:00
Jokob @NetAlertX
6dd7251c84 BE/PLG: TZ timestamp work #1251
2025-11-04 07:06:19 +00:00
jokob-sk
c52e44f90c BE/PLG: TZ timestamp work #1251
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-04 08:10:50 +11:00
Jokob @NetAlertX
db18ca76b4 Merge pull request #1272 from jokob-sk/main
sync
2025-11-03 10:22:35 +11:00
jokob-sk
288427c939 BE/PLG: TZ timestamp work #1251
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-03 10:19:39 +11:00
jokob-sk
90a07c61eb Merge branch 'main' of https://github.com/jokob-sk/NetAlertX
2025-11-03 08:14:26 +11:00
jokob-sk
13341e35c9 PLG: ARPSCAN prevent duplicates across subnets
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-03 08:14:15 +11:00
jokob-sk
4c92a941a8 BE: TZ timestamp work #1251
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-03 08:12:00 +11:00
Jokob @NetAlertX
4cec88aaad Merge pull request #1269 from jokob-sk/main
sync
2025-11-02 22:21:19 +11:00
Jokob @NetAlertX
031d810566 Merge branch next_release into main
2025-11-02 22:20:13 +11:00
jokob-sk
b806f84946 BE: invalid return #1251
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-02 22:16:28 +11:00
jokob-sk
7c90c2e93c BE: spinner + timestamp work #1251
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-02 22:12:30 +11:00
Jokob @NetAlertX
cb69990734 Merge pull request #1268 from adamoutler/synology-fix
Fix permissions on Synology
2025-11-02 21:48:27 +11:00
Adam Outler
7037cf1bc6 fix inherited permissions on Synology 2025-11-02 10:26:21 +00:00
jokob-sk
a27ee5c2f2 BE: changes #1251
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-02 13:55:51 +11:00
jokob-sk
c3c570ef5f BE: added stateUpdated #1251
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-02 13:51:17 +11:00
Jokob @NetAlertX
71646e1645 Merge pull request #1263 from adamoutler/FEAT--Make-Errors-More-Helpful
Feat: make errors more helpful
2025-11-02 13:49:39 +11:00
jokob-sk
2215272e78 BE: short-circuit of name resolution #1251
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-02 11:57:08 +11:00
Adam Outler
dde542c484 make /services/scripts executable by default 2025-11-02 00:12:50 +00:00
Adam Outler
23a0fac973 Address Coderabbit issue 2025-11-01 23:54:54 +00:00
jokob-sk
2fdeccebe1 PLG: NMAPDEV stripping --vlan #1264
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-02 09:07:59 +11:00
Adam Outler
db5381db14 Update test/docker_tests/test_docker_compose_scenarios.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-11-01 15:12:54 -04:00
Adam Outler
f1fbc47508 coderabbit required fix 2025-11-01 19:04:31 +00:00
Adam Outler
2a9d352322 Update test/docker_tests/configurations/test_all_docker_composes.sh
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-11-01 14:57:57 -04:00
Adam Outler
51aa3d4a2e coderabbit 2025-11-01 18:53:07 +00:00
Adam Outler
70373b1fbd Address coderabbit-discovered issues 2025-11-01 18:18:32 +00:00
jokob-sk
e7ed9e0896 BE: logging fix and comments why eve_PendingAlertEmail not cleared
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-01 17:58:22 +11:00
Adam Outler
79887f0bd7 Merge branch 'jokob-sk:main' into FEAT--Make-Errors-More-Helpful 2025-10-31 23:59:45 -04:00
Adam Outler
a6bc96d2dd Corrections on testing and behaviors 2025-11-01 03:57:52 +00:00
Adam Outler
8edef9e852 All errors have documentation links 2025-10-31 22:24:31 +00:00
Adam Outler
1e63cec37c Revise tests. Use docker-compose.yml where possible 2025-10-31 22:24:08 +00:00
jokob-sk
ff96d38339 DOCS: old Docker installation guide
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-31 22:10:02 +11:00
jokob-sk
537be0f848 BE: typos
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-31 22:01:16 +11:00
Hosted Weblate
b89917ca3e Merge branch 'origin/main' into Weblate. 2025-10-31 11:55:36 +01:00
jokob-sk
daea3a2cd7 DOCS: WARNING use dockerhub docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-31 21:55:15 +11:00
jokob-sk
b86f636b12 Revert "DOCS: clearer local_path instructions"
This reverts commit dfc64fd85f.
2025-10-31 21:46:59 +11:00
jokob-sk
0b08995223 Revert "DOCS: install refactor work"
This reverts commit fe69972caa.
2025-10-31 21:46:25 +11:00
Hosted Weblate
f42186b616 Merge branch 'origin/main' into Weblate. 2025-10-31 11:10:55 +01:00
jeet moh
bc9fb6bcde Translated using Weblate (Persian (fa_FA))
Currently translated at 0.1% (1 of 762 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/fa_FA/
2025-10-31 11:06:33 +01:00
jokob-sk
88f889f03e Merge branch 'next_release' of https://github.com/jokob-sk/NetAlertX into next_release 2025-10-31 20:56:36 +11:00
jokob-sk
533c99eb61 LNG: Swedish (sv_sv) 2025-10-31 20:55:59 +11:00
jokob-sk
afa257f245 Merge branch 'next_release' of https://github.com/jokob-sk/NetAlertX into next_release 2025-10-31 20:45:31 +11:00
jokob-sk
78ab0fbd2d PLG: SNMPDSC typo 2025-10-31 20:45:09 +11:00
jokob-sk
64e4586be6 PLG: Encode SMTP_PASS using base64 #1253 2025-10-31 20:26:54 +11:00
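The commit above encodes SMTP_PASS using base64 (#1253). A minimal sketch of how a setting value might be base64-encoded for storage and decoded again before use; the helper names are hypothetical, not the plugin's actual functions:

```python
import base64

def encode_setting(value: str) -> str:
    """Encode a plain-text setting (e.g. an SMTP password) as base64 for storage."""
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

def decode_setting(stored: str) -> str:
    """Decode a base64-stored setting back to plain text before use."""
    return base64.b64decode(stored.encode("ascii")).decode("utf-8")

stored = encode_setting("s3cr3t!")
print(stored)                  # czNjcjN0IQ==
print(decode_setting(stored))  # s3cr3t!
```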
jokob-sk
2f7d9a02ae PLG: snmpwalk -OXsq clarification #1231
2025-10-31 15:02:51 +11:00
Adam Outler
d29700acf8 New mount test structure. 2025-10-31 00:07:34 +00:00
jokob-sk
75072dad5f GIT: build dev container from next_release branch
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-31 08:16:54 +11:00
Jokob @NetAlertX
19b1fc960c Merge pull request #1260 from jokob-sk/main
BE: Devices Tiles SQL syntax error #1238
2025-10-31 08:15:12 +11:00
jokob-sk
63d6410bb4 BE: handle missing buildtimestamp.txt
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-31 08:12:38 +11:00
Adam Outler
b89a44d0ec Improve startup checks 2025-10-30 21:05:24 +00:00
Jokob @NetAlertX
929eb1626b BE: Devices Tiles SQL syntax error #1238
2025-10-30 20:48:38 +00:00
Adam Outler
8cb1836777 Move all check- scripts to /entrypoint.d/ for better organization 2025-10-30 20:18:08 +00:00
jokob-sk
512dedff4e FE: increase filter debounce to 750ms #1254
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-31 06:39:55 +11:00
jokob-sk
2a2782b4c7 Merge branch 'next_release' of https://github.com/jokob-sk/NetAlertX into next_release 2025-10-30 14:52:34 +11:00
Jokob @NetAlertX
b726518f87 Merge pull request #1258 from jokob-sk/main
BE: fix GRAPHQL_PORT
2025-10-30 14:52:19 +11:00
jokob-sk
274becab97 BE: fix GRAPHQL_PORT
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-30 14:51:24 +11:00
jokob-sk
869f28b036 DOCS: typos
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-30 14:50:13 +11:00
jokob-sk
f81a1b93f9 DOCS: Docker guides
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-30 14:31:22 +11:00
Jokob @NetAlertX
58fe531393 Merge pull request #1257 from jokob-sk/main
BE: Remove GraphQL check from healthcheck
2025-10-30 13:56:17 +11:00
jokob-sk
8da136f192 BE: Remove GraphQL check from healthcheck
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-30 13:55:05 +11:00
jokob-sk
50f9277e5e DOCS: Docker guides (GRAPHQL_PORT fix)
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-30 13:30:23 +11:00
Jokob @NetAlertX
7ca9d2a6c5 Merge pull request #1256 from adamoutler/next_release
update docker compose
2025-10-30 13:16:05 +11:00
jokob-sk
b76272bbdc Merge branch 'next_release' of https://github.com/jokob-sk/NetAlertX into next_release 2025-10-30 13:14:12 +11:00
jokob-sk
fba5359839 DOCS: Docker guides
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-30 13:14:06 +11:00
Adam Outler
55171e06b6 update compose 2025-10-29 23:29:32 +00:00
Jokob @NetAlertX
22aa995fc5 Merge pull request #1255 from Tweebloesem/patch-2
Fix typo in PiHole integration guide
2025-10-30 08:33:06 +11:00
Tweebloesem
af80cff8e0 Fix typo in PiHole integration guide 2025-10-29 22:18:42 +01:00
jokob-sk
647defb4cc Merge branch 'next_release' of https://github.com/jokob-sk/NetAlertX into next_release 2025-10-29 20:33:42 +11:00
jokob-sk
2148a7ffc5 DOCS: Docker guides
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-29 20:33:32 +11:00
Jokob @NetAlertX
ea5e2361da Merge pull request #1249 from jokob-sk/main
Sync
2025-10-29 19:26:36 +11:00
Jokob @NetAlertX
0079ece1e2 Merge pull request #1248 from adamoutler/Easy-Permissions
Easy permissions
2025-10-29 19:25:32 +11:00
jokob-sk
61de63771b DOCS: Docker guides
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-29 15:51:31 +11:00
jokob-sk
57f3d6f7ab DOCS: Security features - fix hierarchy
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-29 13:26:10 +11:00
jokob-sk
2e76ff1df7 DOCS: Migration and Security features navigation link 2025-10-29 13:21:12 +11:00
Adam Outler
8d4c7ea074 less invasive permission changes 2025-10-29 00:32:08 +00:00
Adam Outler
b4027b6eee docker-compose needed for fast container rebuilds 2025-10-29 00:08:32 +00:00
Adam Outler
b36b3be176 Fix permissions messages and test params 2025-10-29 00:08:09 +00:00
Adam Outler
7ddb7d293e new method of fixing permissions 2025-10-28 23:58:02 +00:00
Jokob @NetAlertX
40341a856f Merge pull request #1247 from adamoutler/next_release
Security features overview
2025-10-29 07:37:55 +11:00
jokob-sk
304d4d0837 Merge branch 'next_release' of https://github.com/jokob-sk/NetAlertX into next_release 2025-10-29 07:33:59 +11:00
jokob-sk
a353acff2d DOCS: builds
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-29 07:32:56 +11:00
Adam Outler
6afa52e604 Security features overview 2025-10-28 00:15:12 +00:00
Jokob @NetAlertX
5962312afd Merge pull request #1235 from adamoutler/hardening-fixes
Hardening fixes
2025-10-28 08:31:30 +11:00
Adam Outler
3ba410053e Update install/production-filesystem/entrypoint.sh
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-10-27 16:51:17 -04:00
Adam Outler
a6ac492d76 Add APP_CONF_OVERRIDE support 2025-10-27 20:19:17 +00:00
Jokob @NetAlertX
4d148f35ce DOCS: wording 2025-10-27 03:33:50 +00:00
jokob-sk
9b0f45b88b DOCS: migration prep
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-27 14:21:17 +11:00
jokob-sk
84183f09ad LANG: ru_ru updates
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-27 12:58:48 +11:00
Jokob @NetAlertX
5dba0f1ca1 Merge pull request #1244 from jokob-sk/main
sync
2025-10-27 08:14:16 +11:00
Adam Outler
095372a22b Rename GRAPHQL_PORT to APP_CONF_OVERRIDE 2025-10-26 16:49:28 -04:00
Adam Outler
d8c2dc0563 Apply coderabbit's latest hare-brained idea 2025-10-26 19:58:57 +00:00
Adam Outler
cfffaf4503 Strengthen tests 2025-10-26 19:40:17 +00:00
Adam Outler
01b64cce66 Changes requested by coderabbit. 2025-10-26 19:34:28 +00:00
Adam Outler
63c4b0d7c2 Update .devcontainer/devcontainer.json
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-10-26 14:15:12 -04:00
Adam Outler
5ec35aa50e Build the netalertx-test image on start so tests don't fail 2025-10-26 18:12:02 +00:00
Adam Outler
ededd39d5b Coderabbit fixes 2025-10-26 17:53:46 +00:00
Adam Outler
15bc1635c2 Update install/production-filesystem/services/scripts/check-root.sh
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-10-26 12:45:42 -04:00
Adam Outler
74a67e3b38 Added clarifying examples to dockerfile 2025-10-26 16:10:17 +00:00
Adam Outler
52b747be0b Remove warnings in devcontainer 2025-10-26 15:54:01 +00:00
Adam Outler
d2c28f6a28 Changes for tests identified by CodeRabbit 2025-10-26 15:30:03 +00:00
Almaz
816b9076ae Translated using Weblate (Russian)
Currently translated at 100.0% (762 of 762 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/ru/
2025-10-26 08:02:42 +00:00
Adam Outler
fb02774814 Fix errors for tests 2025-10-26 00:14:03 +00:00
jokob-sk
26632277d4 PLUG: SNMPDSC timeout multiplier #1231
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-26 11:07:34 +11:00
jokob-sk
dfc64fd85f DOCS: clearer local_path instructions
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-26 10:59:42 +11:00
jokob-sk
b44369a493 PLUG: 0 in device tiles #1238
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-26 10:59:05 +11:00
jokob-sk
8ada2c36f9 BE: 0 in device tiles #1238
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-26 10:58:34 +11:00
Adam Outler
c4a041e6e1 Coderabbit changes 2025-10-25 17:58:21 +00:00
jokob-sk
170aeb041f PLUG: SNMPDSC timeout not respected #1231
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-25 13:48:56 +11:00
jokob-sk
fe69972caa DOCS: install refactor work
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-25 09:28:03 +11:00
Adam Outler
32f9111f66 Restore test_safe_builder_unit.py to upstream version (remove local changes) 2025-10-24 20:32:50 +00:00
Jokob @NetAlertX
bb35417213 Merge pull request #1237 from JVKeller/patch-3
Change branch back to main.
2025-10-25 07:07:12 +11:00
Jokob @NetAlertX
fe69bc4afd Merge pull request #1236 from AlmazzikDev/patch-1
Rename CONTRIBUTING to CONTRIBUTING.md
2025-10-25 07:06:41 +11:00
rell3k
05890b3ddf Change branch back to main.
Forgot to change git clone branch back to main.
2025-10-24 09:24:01 -04:00
Almaz
c27886521a Rename CONTRIBUTING to CONTRIBUTING.md 2025-10-24 15:35:18 +03:00
Adam Outler
7f74c2d6f3 docker compose changes 2025-10-23 21:37:11 -04:00
Adam Outler
5a63b7243b Merge main into hardening-fixes 2025-10-23 21:19:30 -04:00
Adam Outler
0897c05200 Tidy up output 2025-10-23 21:16:15 -04:00
Adam Outler
7a3bf6716c Remove code coverage from repository 2025-10-23 20:46:39 -04:00
Adam Outler
edd5bd27b0 Devcontainer setup 2025-10-23 23:33:04 +00:00
Adam Outler
3b7830b922 Add unit tests and updated messages 2025-10-23 21:15:15 +00:00
Adam Outler
356cacab2b Don't increment sqlite sequence 2025-10-23 21:15:02 +00:00
Adam Outler
d12ffb31ec Update readme with simple build instructions 2025-10-23 21:04:15 +00:00
Adam Outler
f70d3f3b76 Limiter fix for older kernels 2025-10-23 20:36:04 +00:00
Adam Outler
27899469af use the system speedtest instead of the outdated, removed script 2025-10-23 08:36:42 +00:00
Adam Outler
59c7d7b415 Add test dependencies 2025-10-23 00:27:16 +00:00
Adam Outler
0851680ef6 Add additional startup checks 2025-10-22 23:51:36 +00:00
Adam Outler
1af19fe9fd Only nginx/python errors in docker logs. no stdout from backend. 2025-10-22 23:51:15 +00:00
Adam Outler
ce8bb53bc8 Refine devcontainer setup and docker tests 2025-10-22 19:48:58 -04:00
Adam Outler
5636a159b8 Add check permissions script 2025-10-22 00:02:03 +00:00
jokob-sk
6a20128960 BE: install refactor work
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-22 07:48:50 +11:00
Adam Outler
05f083730b Fix missing storage check 2025-10-21 19:18:59 +00:00
Adam Outler
3441f77a78 Fix always fresh install env 2025-10-21 19:10:48 +00:00
Adam Outler
d6bcb27c42 Missing devcontainer build timestamp 2025-10-21 19:05:47 +00:00
Jokob @NetAlertX
5d7af88130 Merge pull request #1230 from adamoutler/hardening
Feat: Enterprise-Grade Security Hardening and Build Overhaul
2025-10-21 12:35:08 +11:00
Adam Outler
6f2e556112 Remove duplicate file replacement logic in update_vendors.sh
Dang it coderabbit. We expect more of your diffs.
2025-10-19 12:18:16 -04:00
Adam Outler
ea4c70ee7f Update install/production-filesystem/services/scripts/check-first-run-config.sh
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-10-19 12:15:55 -04:00
Adam Outler
5ed46da1dc Set caps on actual python3.12 2025-10-19 15:55:28 +00:00
Adam Outler
628f35c15d Remove unused pythonpathpath variable 2025-10-19 15:41:57 +00:00
Adam Outler
066fecfd88 add caps to python instead of scapy. 2025-10-19 15:39:54 +00:00
Adam Outler
660f0c2c48 Update install/production-filesystem/services/scripts/update_vendors.sh
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-10-19 11:37:04 -04:00
Adam Outler
999feb27f9 Update install/production-filesystem/services/scripts/update_vendors.sh
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-10-19 11:36:09 -04:00
Adam Outler
86bf0a3672 Update install/production-filesystem/services/scripts/check-first-run-config.sh
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-10-19 11:35:27 -04:00
Adam Outler
8eab7eeae9 Update .devcontainer/scripts/setup.sh
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-10-19 11:33:07 -04:00
Adam Outler
84f1283cd0 Add novel coderabbit no-write database creation 2025-10-19 15:27:55 +00:00
Adam Outler
dcf250d36f Coderabbit nitpicks. 2025-10-19 15:12:27 +00:00
Adam Outler
131c0c0f4b Fix fish terminal. Smarter code completion and other niceties. 2025-10-19 14:28:09 +00:00
Adam Outler
a58b3e35b9 Coderabbit suggestions 2025-10-19 14:18:07 +00:00
Adam Outler
14be7a2bcc Missing Slash 2025-10-19 02:45:19 +00:00
Adam Outler
9b3ddda381 Fix persistent environment issues 2025-10-19 02:35:57 +00:00
Adam Outler
1f46f204bc Generate devcontainer configs 2025-10-19 01:06:42 +00:00
Adam Outler
80c1459442 Final touches on devcontainer 2025-10-19 00:39:26 +00:00
Adam Outler
62536e4bfb Coderabbit suggestions 2025-10-18 14:07:27 -04:00
Adam Outler
028335c1a9 Coderabbit suggestions 2025-10-18 13:45:48 -04:00
Adam Outler
7483e46dce Merge remote-tracking branch 'origin/main' into hardening 2025-10-18 13:23:57 -04:00
Adam Outler
c1b573f1db Add some todos 2025-10-18 13:16:35 -04:00
Adam Outler
d11c9d7c4a Improve warnings. 2025-10-17 16:36:48 -04:00
jokob-sk
b916542584 BE: DB generating script
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-17 21:33:43 +11:00
jokob-sk
6da3cfdcb9 FE: docs mikrotik
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-17 21:33:22 +11:00
jokob-sk
d38e77f801 docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-17 21:32:53 +11:00
jokob-sk
18eaee4906 FE: lang
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-17 21:32:22 +11:00
Safeguard
59e7463832 Translated using Weblate (Russian)
Currently translated at 100.0% (762 of 762 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/ru/
2025-10-17 09:07:42 +00:00
Adam Outler
dc444117b6 Improve mount permissions 2025-10-16 21:49:54 -04:00
Adam Outler
a3dae0817a Fix debian docker start 2025-10-16 19:51:57 -04:00
Adam Outler
e733f8a089 Relay failed status to docker. 2025-10-16 16:17:37 -04:00
Jokob @NetAlertX
ad0ddda943 Merge pull request #1229 from adamoutler/patch-5
Add script to regenerate the database from schema
2025-10-16 12:50:08 +11:00
Adam Outler
28e0e4aab4 Fix database regeneration script to use correct file 2025-10-15 20:53:03 -04:00
Adam Outler
324cde9c4a Add script to regenerate the database from schema
This script recreates the database from schema code and imports the schema into the new database file.
2025-10-15 20:50:42 -04:00
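
The script described above recreates the database from schema code and imports the schema into a new database file. A minimal sketch of the same idea using sqlite3; the file names are hypothetical and the real script's behavior may differ:

```python
import sqlite3
from pathlib import Path

# Hypothetical paths; the real script's locations and names may differ.
SCHEMA_SQL = Path("schema.sql")   # CREATE TABLE statements exported from the schema
NEW_DB = Path("app.db")

def regenerate_database(schema_path: Path, db_path: Path) -> None:
    """Create a fresh SQLite database file and import the schema into it."""
    if db_path.exists():
        db_path.unlink()  # start from an empty file
    conn = sqlite3.connect(db_path)
    try:
        conn.executescript(schema_path.read_text(encoding="utf-8"))
        conn.commit()
    finally:
        conn.close()

regenerate_database(SCHEMA_SQL, NEW_DB)
```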
Adam Outler
f57ec74cc1 Minor alterations to devcontainer. 2025-10-16 00:09:07 +00:00
Adam Outler
de92c9563e break apart services, fix startup 2025-10-15 18:18:30 -04:00
anton garcias
3686a4a07e Translated using Weblate (Catalan)
Currently translated at 100.0% (762 of 762 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/ca/
2025-10-13 21:07:26 +00:00
Ettore Atalan
44ba9455b6 Translated using Weblate (German)
Currently translated at 81.3% (620 of 762 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/de/
2025-10-13 21:07:25 +00:00
Adam Outler
5109a0881d Additional hardening 2025-10-12 21:00:27 -04:00
Adam Outler
1be91559d2 Set container parameters 2025-10-12 15:05:20 -04:00
R
3bf6ce698a Translated using Weblate (Chinese (Simplified Han script))
Currently translated at 100.0% (762 of 762 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/zh_Hans/
2025-10-12 15:52:14 +02:00
Massimo Pissarello
1532256bac Translated using Weblate (Italian)
Currently translated at 100.0% (762 of 762 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/it/
2025-10-11 13:25:29 +02:00
Максим Горпиніч
a8b62dee03 Translated using Weblate (Ukrainian)
Currently translated at 100.0% (762 of 762 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/uk/
2025-10-10 12:04:36 +02:00
Sylvain Pichon
fe434b41ae Translated using Weblate (French)
Currently translated at 100.0% (762 of 762 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/fr/
2025-10-10 12:04:35 +02:00
jokob-sk
e4d3a50391 FE: API in-app messaging endpoint
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-10 17:01:14 +11:00
jokob-sk
b59bca2967 BE: API in-app messaging endpoint
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-10 17:00:53 +11:00
jokob-sk
8ae0367e8e FE: Cleanup
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-10 15:45:14 +11:00
jokob-sk
0cb038d1c1 BE: UNIFIAPI handle missing id #1224
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-10 14:37:26 +11:00
jokob-sk
fe018fb3c3 FE: prevent error on no devices selected #1219
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-10 14:28:08 +11:00
jokob-sk
161723ae35 merge_translations fix
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-10 14:27:21 +11:00
jokob-sk
6b3f02fcc6 weblate
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-10 14:26:52 +11:00
jokob-sk
ffc45c5a8d BE: AVAHISCAN -> zeroconf --mockdata
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-10 14:00:14 +11:00
jokob-sk
902e5360e5 Merge branch 'main' of https://github.com/jokob-sk/NetAlertX 2025-10-10 13:48:50 +11:00
jokob-sk
0093441457 BE: AVAHISCAN -> zeroconf
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-10 13:48:39 +11:00
Jokob @NetAlertX
45fa9a4ca8 Merge pull request #1223 from JVKeller/patch-2
Update README.md
2025-10-10 11:46:59 +11:00
Adam Outler
be73e3a7f5 debian dockerfile completed properly. 2025-10-09 20:30:25 -04:00
Adam Outler
016a6adf42 Dockerfile.debian building and running 2025-10-08 19:55:16 -04:00
rell3k
5533beb76d Update README.md
Remove content from copy block
2025-10-07 15:01:32 -04:00
Adam Outler
558ab44d3f Minimize differences between devcontainer and production 2025-10-06 23:31:20 +00:00
Jokob @NetAlertX
33093dba65 Merge pull request #1222 from JVKeller/patch-1
Update HW_INSTALL.md
2025-10-07 08:36:05 +11:00
jokob-sk
81ac72bbd6 FE: UI_DEFAULT_PAGE_SIZE #1181
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-07 07:17:00 +11:00
rell3k
b5062f6838 Update HW_INSTALL.md
Adding new script.
2025-10-06 08:16:41 -04:00
jokob-sk
417081242f FE: UI_DEFAULT_PAGE_SIZE #1181
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-06 11:44:34 +11:00
jokob-sk
314b7e0974 weblate - Farsi - fa_fa + cleanup
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-06 11:21:33 +11:00
jokob-sk
41e9276ebb BE: multiedit 431 Request Header Fields Too Large #1219
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-06 09:38:31 +11:00
jokob-sk
333d23d704 FE: device name in tab title #1162
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-06 09:23:23 +11:00
jokob-sk
6e24d9b5f7 Better multiEdit logs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-06 08:59:48 +11:00
jokob-sk
d73a3ebe66 ARPSCAN docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-05 15:42:26 +11:00
jokob-sk
491c202eba ARPSCAN DURATION #1172
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-05 15:38:17 +11:00
jokob-sk
611911b5dd ICMP docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-05 15:37:34 +11:00
jokob-sk
e242de0ddf ARPSCAN DURATION #1172
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-05 15:37:17 +11:00
jokob-sk
086cd30355 Prevent Internet root node flipping w/ SYNC plugin enabled #1207
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-05 14:36:03 +11:00
jokob-sk
9b76f3c273 LOG_LEVEL not respected #1217
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-05 14:21:29 +11:00
jokob-sk
d05ddafdd3 logger not respecting new lines #1217
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-05 14:02:00 +11:00
jokob-sk
bdaa53cc53 Merge branch 'main' of https://github.com/jokob-sk/NetAlertX
2025-10-05 08:09:03 +11:00
jokob-sk
b2428803a5 LOG_LEVEL not respected #1217
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-05 08:08:44 +11:00
Adam Outler
290b6c6f3b Remove nohup.out 2025-10-04 18:51:10 +00:00
Jokob @NetAlertX
fc72abca85 Merge pull request #1213 from gonzague/patch-1
Fix install script references in HW_INSTALL.md
2025-10-04 11:38:05 +10:00
Jokob @NetAlertX
2b52d5aec4 Merge pull request #1216 from adamoutler/patch-4
Update timestamp format to use UTC timezone
2025-10-04 11:35:55 +10:00
Adam Outler
ada92715a8 all debugging online. 2025-10-03 22:12:42 +00:00
Adam Outler
ab3f9046d2 Update timestamp format to use UTC timezone
Remove deprecated API utilization.
2025-10-03 17:27:27 -04:00
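The commit above switches the timestamp format to UTC and removes a deprecated API call. Assuming the deprecated call was along the lines of datetime.utcnow() (an assumption, not confirmed by this log), a timezone-aware replacement looks like this sketch:

```python
from datetime import datetime, timezone

def utc_timestamp() -> str:
    """Return the current time as an explicit UTC timestamp string."""
    # Uses an aware datetime instead of the deprecated naive utcnow() pattern.
    return datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S %z")

print(utc_timestamp())  # e.g. 2025-10-03 21:27:27 +0000
```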
Gonzague Dambricourt
521bf54123 Update HW_INSTALL.md
Fixing references to the Ubuntu install script
2025-10-03 10:40:03 +02:00
Adam Outler
1e04e9f571 Remove .git-placeholder, add dockerignore 2025-10-03 00:33:20 +00:00
Adam Outler
c81a054d89 Coderabbit 2025-10-03 00:08:26 +00:00
Jokob @NetAlertX
42eae405ae Merge pull request #1212 from JVKeller/baremetal-installer
Baremetal installer
2025-10-03 07:51:23 +10:00
Adam Outler
33aa8492bb Debugging operational in vscode 2025-10-02 21:19:29 +00:00
Jeff Keller
d7e6ff2688 Fix log permissions 2025-10-02 19:41:06 +00:00
Jeff Keller
b34269d043 Misc tweaks 2025-10-02 19:04:46 +00:00
Jeff Keller
683f4e6c2d Move clone before setting up python env 2025-10-02 18:53:37 +00:00
Jeff Keller
35cd8003b8 Fix logs 2025-10-02 18:38:00 +00:00
Jeff Keller
98d69e1ce8 Restart nginx 2025-10-02 18:17:43 +00:00
Jeff Keller
70d63febda Tweak log file paths 2025-10-02 18:14:51 +00:00
Jeff Keller
dd113f7940 testing 2025-10-02 16:45:59 +00:00
Jeff Keller
0aceb097ba Testing 2025-10-02 16:41:30 +00:00
Jeff Keller
7790530d08 Revert source repo 2025-10-02 16:05:31 +00:00
Jeff Keller
79cec583d9 NGINX configuration 2025-10-02 16:03:23 +00:00
rell3k
dd91d1e7da Merge branch 'jokob-sk:main' into baremetal-installer 2025-10-02 12:01:39 -04:00
Jeff Keller
aad5bec7e2 Single Debian/Ubuntu Installer 2025-10-02 16:00:19 +00:00
Jokob @NetAlertX
a9841157a7 Merge pull request #1211 from PreistlyPython/fix/issue-1210-compound-conditions
fix: Support compound conditions in SafeConditionBuilder (Issue #1210)
2025-10-02 16:11:30 +10:00
priestlypython
1c2721549b fix: Support compound conditions in SafeConditionBuilder (Issue #1210)
## Problem
PR #1182 introduced SafeConditionBuilder to prevent SQL injection, but it only
supported single-clause conditions. This broke notification filters using multiple
AND/OR clauses, causing user filters like:
`AND devLastIP NOT LIKE '192.168.50.%' AND devLastIP NOT LIKE '192.168.60.%'...`
to be rejected with "Unsupported condition pattern" errors.

## Root Cause
The `_parse_condition()` method used regex patterns that only matched single
conditions. When multiple clauses were chained, the entire string failed to match
any pattern and was rejected for security.

## Solution
Enhanced SafeConditionBuilder with compound condition support:

1. **Added `_is_compound_condition()`** - Detects multiple logical operators
   while respecting quoted strings

2. **Added `_parse_compound_condition()`** - Splits compound conditions into
   individual clauses and parses each one

3. **Added `_split_by_logical_operators()`** - Intelligently splits on AND/OR
   while preserving operators in quoted strings

4. **Refactored `_parse_condition()`** - Routes to compound or single parser

5. **Created `_parse_single_condition()`** - Handles individual clauses (from
   original `_parse_condition` logic)

## Testing
- Added comprehensive test suite (19 tests, 100% passing)
- Tested user's exact failing filter (6 AND clauses with NOT LIKE)
- Verified backward compatibility with single conditions
- Validated security (SQL injection attempts still blocked)
- Tested edge cases (mixed AND/OR, whitespace, empty conditions)

## Impact
- Fixes reported issue #1210
- Maintains all security protections from PR #1182
- Backward compatible with existing single-clause filters
- No breaking changes to API

Fixes #1210

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-01 18:31:49 -07:00
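To make the splitting idea above concrete, here is a minimal, hypothetical sketch of quote-aware AND/OR splitting. It is not the actual SafeConditionBuilder code from this PR; the function name and return shape are assumptions.

```python
# Hypothetical sketch only: splits a compound SQL condition on AND/OR that appear
# outside single-quoted literals. Not the project's SafeConditionBuilder code.
def split_by_logical_operators(condition: str):
    clauses, operators, buf = [], [], []
    in_quote = False
    upper = condition.upper()
    i = 0
    while i < len(condition):
        ch = condition[i]
        if ch == "'":
            in_quote = not in_quote  # toggle quote state on every single quote
            buf.append(ch)
            i += 1
            continue
        if not in_quote:
            matched = False
            for op in (" AND ", " OR "):
                if upper.startswith(op, i):  # logical operator outside quotes
                    clauses.append("".join(buf).strip())
                    operators.append(op.strip())
                    buf, matched = [], True
                    i += len(op)
                    break
            if matched:
                continue
        buf.append(ch)
        i += 1
    clauses.append("".join(buf).strip())
    return clauses, operators


# The AND inside the quoted literal is preserved; only the outer AND splits.
clauses, ops = split_by_logical_operators(
    "devLastIP NOT LIKE '192.168.50.%' AND devName LIKE '%foo AND bar%'"
)
assert clauses == ["devLastIP NOT LIKE '192.168.50.%'", "devName LIKE '%foo AND bar%'"]
assert ops == ["AND"]
```

Scanning character by character with a quote flag keeps operators inside string literals intact without regex lookbehind tricks.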
jokob-sk
4534ab053d TIMEZONE not respected in System Info -> System #1055
Some checks failed
Code checks / check-url-paths (push) Has been cancelled
docker / docker_dev (push) Has been cancelled
Deploy MkDocs / deploy (push) Has been cancelled
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-10-02 06:45:37 +10:00
Jeff Keller
cdee9b3b0d Permissions 2025-10-01 20:33:12 +00:00
Jeff Keller
55cfced3f6 Comment out line 2025-10-01 19:41:51 +00:00
Jeff Keller
af6394a334 Tweak permissions
Tighten security
2025-10-01 19:34:47 +00:00
Jeff Keller
d9ecffdd22 Cleanup 2025-10-01 19:09:49 +00:00
Jeff Keller
5f0a482556 bug fix 2025-10-01 18:58:05 +00:00
Jeff Keller
09c345796f fix typo 2025-10-01 18:33:44 +00:00
Jeff Keller
e7d067dd38 tweaks 2025-10-01 18:15:28 +00:00
Jeff Keller
223aa29d4d tweaks 2025-10-01 17:40:12 +00:00
rell3k
21e770a4bd Create netalertx.conf 2025-10-01 11:25:15 -04:00
Jeff Keller
c086ac3cf8 Merge Deb/Ubuntu 2025-10-01 15:22:21 +00:00
Adam Outler
0cd1dc8987 Scanning Operational with monitoring 2025-09-30 22:01:03 -04:00
Jeff Keller
f900f3f0d5 Resolve merge: keep proxmox installer and add README for Proxmox installer 2025-09-30 13:38:31 +00:00
Adam Outler
044035ef62 Devcontainer overlay 2025-09-30 01:55:26 +00:00
Adam Outler
dc4848acd0 Information on default config and entrypoints for debug 2025-09-28 21:59:06 -04:00
Adam Outler
c6efe5ac06 All services moved to deployed filesystem 2025-09-28 17:10:15 -04:00
Adam Outler
d182a552b8 Move filesystem to more generic name & add perms 2025-09-27 21:58:00 -04:00
Adam Outler
b47df7b33f capcheck 2025-09-27 19:48:36 -04:00
Adam Outler
46097bb6e8 solid hardened config 2025-09-27 19:15:07 -04:00
Adam Outler
c5d7480e6c Merge branch 'jokob-sk:main' into hardening 2025-09-27 09:00:46 -04:00
Adam Outler
2def3f1dac Validated launch on runner & hardend 2025-09-26 21:01:58 -04:00
Adam Outler
2419a268b2 updated devcontainer dockerfile 2025-09-26 17:52:17 +00:00
Adam Outler
bad67b2e69 fix dockerfile error 2025-09-26 17:52:11 +00:00
Adam Outler
178fb54bb4 Python up and debuggable 2025-09-26 17:32:58 +00:00
Adam Outler
b0a6f889aa Update gitignore 2025-09-26 17:14:20 +00:00
Adam Outler
798d2462d6 expand initial filesystem 2025-09-26 11:56:27 +00:00
Adam Outler
c228d45cea Devcontainer operational, services all down 2025-09-25 23:03:55 +00:00
Adam Outler
dfcc375fba Non-root launch 2025-09-25 14:10:06 -04:00
Adam Outler
8ed21a8c07 monolithic alpine container 2025-09-25 07:43:42 -04:00
Adam Outler
2e694a752d using 4 startup scripts instead of RC6 2025-09-24 19:46:11 -04:00
Adam Outler
29aa884836 architectural change 1 2025-09-24 16:29:15 -04:00
431 changed files with 140957 additions and 12286 deletions

BIN
.coverage

Binary file not shown.

View File

@@ -1,112 +1,280 @@
# DO NOT MODIFY THIS FILE DIRECTLY. IT IS AUTO-GENERATED BY .devcontainer/scripts/generate-dockerfile.sh
# DO NOT MODIFY THIS FILE DIRECTLY. IT IS AUTO-GENERATED BY .devcontainer/scripts/generate-configs.sh
# ---/Dockerfile---
# The NetAlertX Dockerfile has 3 stages:
#
# Stage 1. Builder - NetAlertX Requires special tools and packages to build our virtual environment, but
# which are not needed in future stages. We build the builder and extract the venv for runner to use as
# a base.
#
# Stage 2. Runner builds the bare minimum requirements to create an operational NetAlertX. The primary
# reason for breaking at this stage is that it leaves the system in a proper state for devcontainer operation.
# This image also provides a break-out point for users who wish to execute the anti-pattern of using a
# docker container as a VM for experimentation and various development patterns.
#
# Stage 3. Hardened removes root, sudoers, folders, permissions, and locks the system down into a read-only
# compatible image. While NetAlertX does require some read-write operations, this image can guarantee the
# code pushed out by the project is the only code which will run on the system after each container restart.
# It reduces the chance of system hijacking and operates with all modern security protocols in place as is
# expected from a security appliance.
#
# This file can be built with `docker-compose -f docker-compose.yml up --build --force-recreate`
FROM alpine:3.22 AS builder
ARG INSTALL_DIR=/app
ENV PYTHONUNBUFFERED=1
ENV PATH="/opt/venv/bin:$PATH"
# Install build dependencies
COPY requirements.txt /tmp/requirements.txt
RUN apk add --no-cache bash shadow python3 python3-dev gcc musl-dev libffi-dev openssl-dev git \
&& python -m venv /opt/venv
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
# Create virtual environment owned by root, but readable by everyone else. This makes it easy to copy
# into hardened stage without worrying about permissions and keeps image size small. Keeping the commands
# together makes for a slightly smaller image size.
RUN pip install --no-cache-dir -r /tmp/requirements.txt && \
chmod -R u-rwx,g-rwx /opt
RUN pip install openwrt-luci-rpc asusrouter asyncio aiohttp graphene flask flask-cors unifi-sm-api tplink-omada-client wakeonlan pycryptodome requests paho-mqtt scapy cron-converter pytz json2table dhcp-leases pyunifi speedtest-cli chardet python-nmap dnspython librouteros yattag git+https://github.com/foreign-sub/aiofreepybox.git
# Append Iliadbox certificate to aiofreepybox
# second stage
# second stage is the main runtime stage with just the minimum required to run the application
# The runner is used for both devcontainer, and as a base for the hardened stage.
FROM alpine:3.22 AS runner
ARG INSTALL_DIR=/app
COPY --from=builder /opt/venv /opt/venv
COPY --from=builder /usr/sbin/usermod /usr/sbin/groupmod /usr/sbin/
# NetAlertX app directories
ENV NETALERTX_APP=${INSTALL_DIR}
ENV NETALERTX_DATA=/data
ENV NETALERTX_CONFIG=${NETALERTX_DATA}/config
ENV NETALERTX_FRONT=${NETALERTX_APP}/front
ENV NETALERTX_PLUGINS=${NETALERTX_FRONT}/plugins
ENV NETALERTX_SERVER=${NETALERTX_APP}/server
ENV NETALERTX_API=/tmp/api
ENV NETALERTX_DB=${NETALERTX_DATA}/db
ENV NETALERTX_DB_FILE=${NETALERTX_DB}/app.db
ENV NETALERTX_BACK=${NETALERTX_APP}/back
ENV NETALERTX_LOG=/tmp/log
ENV NETALERTX_PLUGINS_LOG=${NETALERTX_LOG}/plugins
ENV NETALERTX_CONFIG_FILE=${NETALERTX_CONFIG}/app.conf
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
# NetAlertX log files
ENV LOG_IP_CHANGES=${NETALERTX_LOG}/IP_changes.log
ENV LOG_APP=${NETALERTX_LOG}/app.log
ENV LOG_APP_FRONT=${NETALERTX_LOG}/app_front.log
ENV LOG_REPORT_OUTPUT_TXT=${NETALERTX_LOG}/report_output.txt
ENV LOG_DB_IS_LOCKED=${NETALERTX_LOG}/db_is_locked.log
ENV LOG_REPORT_OUTPUT_HTML=${NETALERTX_LOG}/report_output.html
ENV LOG_STDERR=${NETALERTX_LOG}/stderr.log
ENV LOG_APP_PHP_ERRORS=${NETALERTX_LOG}/app.php_errors.log
ENV LOG_EXECUTION_QUEUE=${NETALERTX_LOG}/execution_queue.log
ENV LOG_REPORT_OUTPUT_JSON=${NETALERTX_LOG}/report_output.json
ENV LOG_STDOUT=${NETALERTX_LOG}/stdout.log
ENV LOG_CRON=${NETALERTX_LOG}/cron.log
ENV LOG_NGINX_ERROR=${NETALERTX_LOG}/nginx-error.log
# default port and listen address
ENV PORT=20211 LISTEN_ADDR=0.0.0.0
# System Services configuration files
ENV ENTRYPOINT_CHECKS=/entrypoint.d
ENV SYSTEM_SERVICES=/services
ENV SYSTEM_SERVICES_SCRIPTS=${SYSTEM_SERVICES}/scripts
ENV SYSTEM_SERVICES_CONFIG=${SYSTEM_SERVICES}/config
ENV SYSTEM_NGINX_CONFIG=${SYSTEM_SERVICES_CONFIG}/nginx
ENV SYSTEM_NGINX_CONFIG_TEMPLATE=${SYSTEM_NGINX_CONFIG}/netalertx.conf.template
ENV SYSTEM_SERVICES_CONFIG_CRON=${SYSTEM_SERVICES_CONFIG}/cron
ENV SYSTEM_SERVICES_ACTIVE_CONFIG=/tmp/nginx/active-config
ENV SYSTEM_SERVICES_ACTIVE_CONFIG_FILE=${SYSTEM_SERVICES_ACTIVE_CONFIG}/nginx.conf
ENV SYSTEM_SERVICES_PHP_FOLDER=${SYSTEM_SERVICES_CONFIG}/php
ENV SYSTEM_SERVICES_PHP_FPM_D=${SYSTEM_SERVICES_PHP_FOLDER}/php-fpm.d
ENV SYSTEM_SERVICES_RUN=/tmp/run
ENV SYSTEM_SERVICES_RUN_TMP=${SYSTEM_SERVICES_RUN}/tmp
ENV SYSTEM_SERVICES_RUN_LOG=${SYSTEM_SERVICES_RUN}/logs
ENV PHP_FPM_CONFIG_FILE=${SYSTEM_SERVICES_PHP_FOLDER}/php-fpm.conf
ENV READ_ONLY_FOLDERS="${NETALERTX_BACK} ${NETALERTX_FRONT} ${NETALERTX_SERVER} ${SYSTEM_SERVICES} \
${SYSTEM_SERVICES_CONFIG} ${ENTRYPOINT_CHECKS}"
ENV READ_WRITE_FOLDERS="${NETALERTX_DATA} ${NETALERTX_CONFIG} ${NETALERTX_DB} ${NETALERTX_API} \
${NETALERTX_LOG} ${NETALERTX_PLUGINS_LOG} ${SYSTEM_SERVICES_RUN} \
${SYSTEM_SERVICES_RUN_TMP} ${SYSTEM_SERVICES_RUN_LOG} \
${SYSTEM_SERVICES_ACTIVE_CONFIG}"
# needed for s6-overlay
ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
#Python environment
ENV PYTHONUNBUFFERED=1
ENV VIRTUAL_ENV=/opt/venv
ENV VIRTUAL_ENV_BIN=/opt/venv/bin
ENV PYTHONPATH=${NETALERTX_APP}:${NETALERTX_SERVER}:${NETALERTX_PLUGINS}:${VIRTUAL_ENV}/lib/python3.12/site-packages
ENV PATH="${SYSTEM_SERVICES}:${VIRTUAL_ENV_BIN}:$PATH"
# ❗ IMPORTANT - if you modify this file modify the /install/install_dependecies.sh file as well ❗
RUN apk update --no-cache \
&& apk add --no-cache bash libbsd zip lsblk gettext-envsubst sudo mtr tzdata s6-overlay \
&& apk add --no-cache curl arp-scan iproute2 iproute2-ss nmap nmap-scripts traceroute nbtscan avahi avahi-tools openrc dbus net-tools net-snmp-tools bind-tools awake ca-certificates \
&& apk add --no-cache sqlite php83 php83-fpm php83-cgi php83-curl php83-sqlite3 php83-session \
&& apk add --no-cache python3 nginx \
&& ln -s /usr/bin/awake /usr/bin/wakeonlan \
&& rm -f /etc/nginx/http.d/default.conf
# App Environment
ENV LISTEN_ADDR=0.0.0.0
ENV PORT=20211
ENV NETALERTX_DEBUG=0
ENV VENDORSPATH=/app/back/ieee-oui.txt
ENV VENDORSPATH_NEWEST=${SYSTEM_SERVICES_RUN_TMP}/ieee-oui.txt
ENV ENVIRONMENT=alpine
ENV READ_ONLY_USER=readonly READ_ONLY_GROUP=readonly
ENV NETALERTX_USER=netalertx NETALERTX_GROUP=netalertx
ENV LANG=C.UTF-8
# Add crontab file
COPY --chmod=600 --chown=root:root install/crontab /etc/crontabs/root
RUN apk add --no-cache bash mtr libbsd zip lsblk tzdata curl arp-scan iproute2 iproute2-ss nmap \
nmap-scripts traceroute nbtscan net-tools net-snmp-tools bind-tools awake ca-certificates \
sqlite php83 php83-fpm php83-cgi php83-curl php83-sqlite3 php83-session python3 envsubst \
nginx supercronic shadow && \
rm -Rf /var/cache/apk/* && \
rm -Rf /etc/nginx && \
addgroup -g 20211 ${NETALERTX_GROUP} && \
adduser -u 20211 -D -h ${NETALERTX_APP} -G ${NETALERTX_GROUP} ${NETALERTX_USER} && \
apk del shadow
# Start all required services
HEALTHCHECK --interval=30s --timeout=5s --start-period=15s --retries=2 \
CMD curl -sf -o /dev/null ${LISTEN_ADDR}:${PORT}/php/server/query_json.php?file=app_state.json
ENTRYPOINT ["/init"]
# Install application, copy files, set permissions
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} install/production-filesystem/ /
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} --chmod=755 back ${NETALERTX_BACK}
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} --chmod=755 front ${NETALERTX_FRONT}
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} --chmod=755 server ${NETALERTX_SERVER}
# Create required folders with correct ownership and permissions
RUN install -d -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} -m 700 ${READ_WRITE_FOLDERS} && \
sh -c "find ${NETALERTX_APP} -type f \( -name '*.sh' -o -name 'speedtest-cli' \) \
-exec chmod 750 {} \;"
# Copy version information into the image
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} .[V]ERSION ${NETALERTX_APP}/.VERSION
# Copy the virtualenv from the builder stage
COPY --from=builder --chown=20212:20212 ${VIRTUAL_ENV} ${VIRTUAL_ENV}
# Initialize each service with the dockerfiles/init-*.sh scripts, once.
# This is done after the copy of the venv to ensure the venv is in place;
# although it may be quicker to do it before the copy, doing it after keeps
# the image layers smaller.
RUN if [ -f '.VERSION' ]; then \
cp '.VERSION' "${NETALERTX_APP}/.VERSION"; \
else \
echo "DEVELOPMENT 00000000" > "${NETALERTX_APP}/.VERSION"; \
fi && \
chown 20212:20212 "${NETALERTX_APP}/.VERSION" && \
apk add --no-cache libcap && \
setcap cap_net_raw+ep /bin/busybox && \
setcap cap_net_raw,cap_net_admin+eip /usr/bin/nmap && \
setcap cap_net_raw,cap_net_admin+eip /usr/bin/arp-scan && \
setcap cap_net_raw,cap_net_admin,cap_net_bind_service+eip /usr/bin/nbtscan && \
setcap cap_net_raw,cap_net_admin+eip /usr/bin/traceroute && \
setcap cap_net_raw,cap_net_admin+eip "$(readlink -f ${VIRTUAL_ENV_BIN}/python)" && \
/bin/sh /build/init-nginx.sh && \
/bin/sh /build/init-php-fpm.sh && \
/bin/sh /build/init-cron.sh && \
/bin/sh /build/init-backend.sh && \
rm -rf /build && \
apk del libcap && \
date +%s > "${NETALERTX_FRONT}/buildtimestamp.txt"
ENTRYPOINT ["/bin/sh","/entrypoint.sh"]
# Final hardened stage to improve security by setting least possible permissions and removing sudo access.
# When complete, if the image is compromised, there's not much that can be done with it.
# This stage is separate from Runner stage so that devcontainer can use the Runner stage.
FROM runner AS hardened
ENV UMASK=0077
# Create readonly user and group with no shell access.
# Readonly user marks folders that are created by NetAlertX, but should not be modified.
# AI may claim this is stupid, but it's actually the least possible set of permissions, as the
# read-only user cannot log in, cannot sudo, has no write permission, and cannot even
# read the files it owns. The read-only user is an ownership-as-a-lock hardening pattern.
RUN addgroup -g 20212 "${READ_ONLY_GROUP}" && \
adduser -u 20212 -G "${READ_ONLY_GROUP}" -D -h /app "${READ_ONLY_USER}"
# reduce permissions to minimum necessary for all NetAlertX files and folders
# Permissions 005 and 004 are not typos, they enable read-only. Everyone can
# read the read-only files, and nobody can write to them, even the readonly user.
# hadolint ignore=SC2114
RUN chown -R ${READ_ONLY_USER}:${READ_ONLY_GROUP} ${READ_ONLY_FOLDERS} && \
chmod -R 004 ${READ_ONLY_FOLDERS} && \
find ${READ_ONLY_FOLDERS} -type d -exec chmod 005 {} + && \
install -d -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} -m 700 ${READ_WRITE_FOLDERS} && \
chown -R ${NETALERTX_USER}:${NETALERTX_GROUP} ${READ_WRITE_FOLDERS} && \
chmod -R 600 ${READ_WRITE_FOLDERS} && \
find ${READ_WRITE_FOLDERS} -type d -exec chmod 700 {} + && \
chown ${READ_ONLY_USER}:${READ_ONLY_GROUP} /entrypoint.sh /opt /opt/venv && \
chmod 005 /entrypoint.sh ${SYSTEM_SERVICES}/*.sh ${SYSTEM_SERVICES_SCRIPTS}/* ${ENTRYPOINT_CHECKS}/* /app /opt /opt/venv && \
for dir in ${READ_WRITE_FOLDERS}; do \
install -d -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} -m 700 "$dir"; \
done && \
apk del apk-tools && \
rm -Rf /var /etc/sudoers.d/* /etc/shadow /etc/gshadow /etc/sudoers \
/lib/apk /lib/firmware /lib/modules-load.d /lib/sysctl.d /mnt /home/ /root \
/srv /media && \
sed -i "/^\(${READ_ONLY_USER}\|${NETALERTX_USER}\):/!d" /etc/passwd && \
sed -i "/^\(${READ_ONLY_GROUP}\|${NETALERTX_GROUP}\):/!d" /etc/group && \
printf '#!/bin/sh\n"$@"\n' > /usr/bin/sudo && chmod +x /usr/bin/sudo
USER netalertx
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
CMD /services/healthcheck.sh
# ---/resources/devcontainer-Dockerfile---
# Devcontainer build stage (do not build directly)
# This file is combined with the root /Dockerfile by
# .devcontainer/scripts/generate-dockerfile.sh
# .devcontainer/scripts/generate-configs.sh
# The generator appends this stage to produce .devcontainer/Dockerfile.
# Prefer to place dev-only setup here; use setup.sh only for runtime fixes.
# Permissions in devcontainer should be of a brutalist nature. They will be
# open and wide to avoid permission issues during development, allowing max
# flexibility.
FROM runner AS devcontainer
ENV INSTALL_DIR=/app
ENV PYTHONPATH=/workspaces/NetAlertX/test:/workspaces/NetAlertX/server:/app:/app/server:/opt/venv/lib/python3.12/site-packages
# hadolint ignore=DL3006
FROM runner AS netalertx-devcontainer
ENV INSTALL_DIR=/app
ENV PYTHONPATH=${PYTHONPATH}:/workspaces/NetAlertX/test:/workspaces/NetAlertX/server:/usr/lib/python3.12/site-packages
ENV PATH=/services:${PATH}
ENV PHP_INI_SCAN_DIR=/services/config/php/conf.d:/etc/php83/conf.d
ENV LISTEN_ADDR=0.0.0.0
ENV PORT=20211
ENV NETALERTX_DEBUG=1
ENV PYDEVD_DISABLE_FILE_VALIDATION=1
COPY .devcontainer/resources/devcontainer-overlay/ /
USER root
# Install common tools, create user, and set up sudo
RUN apk add --no-cache git nano vim jq php83-pecl-xdebug py3-pip nodejs sudo gpgconf pytest pytest-cov && \
adduser -D -s /bin/sh netalertx && \
addgroup netalertx nginx && \
addgroup netalertx www-data && \
echo "netalertx ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-netalertx && \
chmod 440 /etc/sudoers.d/90-netalertx
# Install debugpy in the virtualenv if present, otherwise into system python3
RUN /bin/sh -c '(/opt/venv/bin/python3 -m pip install --no-cache-dir debugpy) || (python3 -m pip install --no-cache-dir debugpy) || true'
# setup nginx
COPY .devcontainer/resources/netalertx-devcontainer.conf /etc/nginx/http.d/netalert-frontend.conf
RUN set -e; \
chown netalertx:nginx /etc/nginx/http.d/netalert-frontend.conf; \
install -d -o netalertx -g www-data -m 775 /app; \
install -d -o netalertx -g www-data -m 755 /run/nginx; \
install -d -o netalertx -g www-data -m 755 /var/lib/nginx/logs; \
rm -f /var/lib/nginx/logs/* || true; \
for f in error access; do : > /var/lib/nginx/logs/$f.log; done; \
install -d -o netalertx -g www-data -m 777 /run/php; \
install -d -o netalertx -g www-data -m 775 /var/log/php; \
chown -R netalertx:www-data /etc/nginx/http.d; \
chmod -R 775 /etc/nginx/http.d; \
chown -R netalertx:www-data /var/lib/nginx; \
chmod -R 755 /var/lib/nginx && \
chown -R netalertx:www-data /var/log/nginx/ && \
sed -i '/^user /d' /etc/nginx/nginx.conf; \
sed -i 's|^error_log .*|error_log /dev/stderr warn;|' /etc/nginx/nginx.conf; \
sed -i 's|^access_log .*|access_log /dev/stdout main;|' /etc/nginx/nginx.conf; \
sed -i 's|error_log .*|error_log /dev/stderr warn;|g' /etc/nginx/http.d/*.conf 2>/dev/null || true; \
sed -i 's|access_log .*|access_log /dev/stdout main;|g' /etc/nginx/http.d/*.conf 2>/dev/null || true; \
mkdir -p /run/openrc; \
chown netalertx:nginx /run/openrc/; \
rm -Rf /run/openrc/*;
# setup pytest
RUN sudo /opt/venv/bin/python -m pip install -U pytest pytest-cov
RUN apk add --no-cache git nano vim jq php83-pecl-xdebug py3-pip nodejs sudo gpgconf pytest \
pytest-cov zsh alpine-zsh-config shfmt github-cli py3-yaml py3-docker-py docker-cli docker-cli-buildx \
docker-cli-compose shellcheck
WORKDIR /workspaces/NetAlertX
# Install hadolint (Dockerfile linter)
RUN curl -L https://github.com/hadolint/hadolint/releases/latest/download/hadolint-Linux-x86_64 -o /usr/local/bin/hadolint && \
chmod +x /usr/local/bin/hadolint
RUN install -d -o netalertx -g netalertx -m 755 /services/php/modules && \
cp -a /usr/lib/php83/modules/. /services/php/modules/ && \
echo "${NETALERTX_USER} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
ENV SHELL=/bin/zsh
ENTRYPOINT ["/bin/sh","-c","sleep infinity"]
RUN mkdir -p /workspaces && \
install -d -m 777 /data /data/config /data/db && \
install -d -m 777 /tmp/log /tmp/log/plugins /tmp/api /tmp/run /tmp/nginx && \
install -d -m 777 /tmp/nginx/active-config /tmp/nginx/client_body /tmp/nginx/config && \
install -d -m 777 /tmp/nginx/fastcgi /tmp/nginx/proxy /tmp/nginx/scgi /tmp/nginx/uwsgi && \
install -d -m 777 /tmp/run/tmp /tmp/run/logs && \
chmod 777 /workspaces && \
chown -R netalertx:netalertx /data && \
chmod 666 /data/config/app.conf /data/db/app.db && \
chmod 1777 /tmp && \
install -d -o root -g root -m 1777 /tmp/.X11-unix && \
mkdir -p /home/netalertx && \
chown netalertx:netalertx /home/netalertx && \
sed -i -e 's#/app:#/workspaces:#' /etc/passwd && \
find /opt/venv -type d -exec chmod o+rwx {} \;
USER netalertx
ENTRYPOINT ["/bin/sh","-c","sleep infinity"]

View File

@@ -0,0 +1,37 @@
{
"folders": [
{
"name": "NetAlertX Source",
"path": "/workspaces/NetAlertX"
},
{
"name": "💾 NetAlertX Data",
"path": "/data"
},
{
"name": "🔍 Active NetAlertX log",
"path": "/tmp/log"
},
{
"name": "🌐 Active NetAlertX nginx",
"path": "/tmp/nginx"
},
{
"name": "📊 Active NetAlertX api",
"path": "/tmp/api"
},
{
"name": "⚙️ Active NetAlertX run",
"path": "/tmp/run"
}
],
"settings": {
"terminal.integrated.suggest.enabled": true,
"terminal.integrated.defaultProfile.linux": "zsh",
"terminal.integrated.profiles.linux": {
"zsh": {
"path": "/usr/bin/fish"
}
}
}
}

View File

@@ -19,6 +19,17 @@ Common workflows (F1->Tasks: Run Task)
- Backend (GraphQL/Flask): `.devcontainer/scripts/restart-backend.sh` starts it under debugpy and logs to `/app/log/app.log`
- Frontend (nginx + PHP-FPM): Started via setup.sh; can be restarted by the task "Start Frontend (nginx and PHP-FPM)".
Production Container Evaluation
1. F1 → Tasks: Shutdown services ([Dev Container] Stop Frontend & Backend Services)
2. F1 → Tasks: Docker system and build prune ([Any] Docker system and build Prune)
3. F1 → Remote: Close Unused Forwarded Ports (VS Code command)
4. F1 → Tasks: Build & Launch Production (Build & Launch Production Docker)
5. Visit http://localhost:20211
Unit tests
1. F1 → Tasks: Rebuild test container ([Any] Build Unit Test Docker image)
2. F1 → Test: Run all tests
Testing
- pytest is installed via Alpine packages (py3-pytest, py3-pytest-cov).
- PYTHONPATH includes workspace and venv site-packages so tests can import `server/*` modules and third-party libs.
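A hypothetical smoke test for that claim (file name and assertion are illustrative, not part of the repository):

```python
# Illustrative only: verifies the devcontainer PYTHONPATH documented above exposes
# the workspace server/ directory, so tests can import server modules directly.
import sys

def test_server_dir_is_on_sys_path():
    assert any(p.rstrip("/").endswith("/NetAlertX/server") for p in sys.path)
```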

View File

@@ -0,0 +1,26 @@
# NetAlertX Multi-Folder Workspace
This repository uses a multi-folder workspace configuration to provide easy access to runtime directories.
## Opening the Multi-Folder Workspace
After the devcontainer builds, open the workspace file to access all folders:
1. **File** → **Open Workspace from File**
2. Select `NetAlertX.code-workspace`
Or use Command Palette (Ctrl+Shift+P / Cmd+Shift+P):
- Type: `Workspaces: Open Workspace from File`
- Select `NetAlertX.code-workspace`
## Workspace Folders
The workspace includes:
- **NetAlertX** - Main source code
- **/tmp** - Runtime temporary files
- **/tmp/api** - API response cache (JSON files)
- **/tmp/log** - Application and plugin logs
## Testing Configuration
Pytest is configured to only discover tests in the main `test/` directory, not in `/tmp` folders.

View File

@@ -1,27 +1,58 @@
{
"name": "NetAlertX DevContainer",
"remoteUser": "netalertx",
"build": {
"dockerfile": "Dockerfile",
"context": "..",
"target": "devcontainer"
},
"workspaceFolder": "/workspaces/NetAlertX",
"runArgs": [
"--add-host=host.docker.internal:host-gateway",
"--security-opt", "apparmor=unconfined" // for alowing ramdisk mounts
],
"workspaceMount": "source=${localWorkspaceFolder},target=/workspaces/NetAlertX,type=bind,consistency=cached",
"onCreateCommand": "mkdir -p /tmp/api /tmp/log",
"build": {
"dockerfile": "./Dockerfile", // Dockerfile generated by script
"context": "../", // Context is the root of the repository
"target": "netalertx-devcontainer"
},
"capAdd": [
"SYS_ADMIN", // For mounting ramdisks
"NET_ADMIN", // For network interface configuration
"NET_RAW" // For raw packet manipulation
"NET_RAW" // For raw packet manipulation
],
"postStartCommand": "${containerWorkspaceFolder}/.devcontainer/scripts/setup.sh",
"runArgs": [
"--security-opt",
"apparmor=unconfined", // for allowing ramdisk mounts
"--add-host=host.docker.internal:host-gateway"
// Uncomment --network=host to run NetAlertX's full network scanning capabilities in the
// container. This runs too slowly in a large network to be practical for development purposes.
// You can start services such as avahi on the host, in other containers within the network, or
// even within this container and connect to them as needed.
// "--network=host",
],
"mounts": [
"source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind" //used for testing various conditions in docker
],
// ATTENTION: If running with --network=host, COMMENT `forwardPorts` OR ELSE THERE WILL BE NO WEBUI!
"forwardPorts": [20211, 20212, 5678],
"portsAttributes": { // the ports we care about
"20211": {
"label": "Frontend:Nginx+PHP"
},
"20212": {
"label": "Backend:GraphQL"
},
"9003": {
"label": "PHP Debug:Xdebug"
},
"5678": {
"label": "Python Debug:debugpy"
}
},
"postCreateCommand": {
"Install Pip Requirements": "/opt/venv/bin/pip3 install pytest docker debugpy",
"Workspace Instructions": "printf '\n\n<> DevContainer Ready!\n\n📁 To access /tmp folders in the workspace:\n File → Open Workspace from File → NetAlertX.code-workspace\n\n📖 See .devcontainer/WORKSPACE.md for details\n\n'"
},
"postStartCommand": {
"Start Environment":"${containerWorkspaceFolder}/.devcontainer/scripts/setup.sh",
"Build test-container":"echo building netalertx-test container in background. check /tmp/build.log for progress. && setsid docker buildx build -t netalertx-test . > /tmp/build.log 2>&1 &"
},
"customizations": {
"vscode": {
"extensions": [
@@ -33,17 +64,36 @@
"ms-python.vscode-pylance",
"pamaron.pytest-runner",
"coderabbit.coderabbit-vscode",
"ms-python.black-formatter"
]
,
"ms-python.black-formatter",
"jeff-hykin.better-dockerfile-syntax",
"GitHub.codespaces",
"ms-azuretools.vscode-containers",
"ms-python.vscode-python-envs",
"dbaeumer.vscode-eslint",
"esbenp.prettier-vscode",
"eamodio.gitlens",
"alexcvzz.vscode-sqlite",
"mkhl.shfmt",
"charliermarsh.ruff",
"ms-python.flake8",
"exiasr.hadolint",
"timonwong.shellcheck"
],
"settings": {
"terminal.integrated.cwd": "${containerWorkspaceFolder}",
"terminal.integrated.profiles.linux": {
"zsh": {
"path": "/bin/zsh",
"args": ["-l"]
}
},
"terminal.integrated.defaultProfile.linux": "zsh",
// Python testing configuration
"python.testing.pytestEnabled": true,
"python.testing.unittestEnabled": false,
"python.testing.pytestArgs": [
"test"
],
"python.testing.pytestArgs": ["test"],
"python.testing.cwd": "${containerWorkspaceFolder}",
// Make sure we discover tests and import server correctly
"python.analysis.extraPaths": [
"/workspaces/NetAlertX",
@@ -54,26 +104,6 @@
}
}
},
"forwardPorts": [5678, 9000, 9003, 20211, 20212],
"portsAttributes": {
"20211": {
"label": "Frontend:Nginx+PHP"
},
"20212": {
"label": "Backend:GraphQL"
},
"9003": {
"label": "PHP Debug:Xdebug"
},
"9000": {
"label": "PHP-FPM:FastCGI"
},
"5678": {
"label": "Python Debug:debugpy"
}
},
// Optional: ensures compose services are stopped when you close the window
"shutdownAction": "stopContainer"
}
"shutdownAction": "stopContainer" // stop container when VSCode is closed
}

View File

@@ -1,8 +0,0 @@
zend_extension="xdebug.so"
[xdebug]
xdebug.mode=develop,debug
xdebug.log_level=0
xdebug.client_host=host.docker.internal
xdebug.client_port=9003
xdebug.start_with_request=yes
xdebug.discover_client_host=1

View File

@@ -1,51 +1,55 @@
# Devcontainer build stage (do not build directly)
# This file is combined with the root /Dockerfile by
# .devcontainer/scripts/generate-dockerfile.sh
# .devcontainer/scripts/generate-configs.sh
# The generator appends this stage to produce .devcontainer/Dockerfile.
# Prefer to place dev-only setup here; use setup.sh only for runtime fixes.
# Permissions in devcontainer should be of a brutalist nature. They will be
# open and wide to avoid permission issues during development, allowing max
# flexibility.
FROM runner AS devcontainer
ENV INSTALL_DIR=/app
ENV PYTHONPATH=/workspaces/NetAlertX/test:/workspaces/NetAlertX/server:/app:/app/server:/opt/venv/lib/python3.12/site-packages
# hadolint ignore=DL3006
FROM runner AS netalertx-devcontainer
ENV INSTALL_DIR=/app
ENV PYTHONPATH=${PYTHONPATH}:/workspaces/NetAlertX/test:/workspaces/NetAlertX/server:/usr/lib/python3.12/site-packages
ENV PATH=/services:${PATH}
ENV PHP_INI_SCAN_DIR=/services/config/php/conf.d:/etc/php83/conf.d
ENV LISTEN_ADDR=0.0.0.0
ENV PORT=20211
ENV NETALERTX_DEBUG=1
ENV PYDEVD_DISABLE_FILE_VALIDATION=1
COPY .devcontainer/resources/devcontainer-overlay/ /
USER root
# Install common tools, create user, and set up sudo
RUN apk add --no-cache git nano vim jq php83-pecl-xdebug py3-pip nodejs sudo gpgconf pytest pytest-cov && \
adduser -D -s /bin/sh netalertx && \
addgroup netalertx nginx && \
addgroup netalertx www-data && \
echo "netalertx ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-netalertx && \
chmod 440 /etc/sudoers.d/90-netalertx
# Install debugpy in the virtualenv if present, otherwise into system python3
RUN /bin/sh -c '(/opt/venv/bin/python3 -m pip install --no-cache-dir debugpy) || (python3 -m pip install --no-cache-dir debugpy) || true'
# setup nginx
COPY .devcontainer/resources/netalertx-devcontainer.conf /etc/nginx/http.d/netalert-frontend.conf
RUN set -e; \
chown netalertx:nginx /etc/nginx/http.d/netalert-frontend.conf; \
install -d -o netalertx -g www-data -m 775 /app; \
install -d -o netalertx -g www-data -m 755 /run/nginx; \
install -d -o netalertx -g www-data -m 755 /var/lib/nginx/logs; \
rm -f /var/lib/nginx/logs/* || true; \
for f in error access; do : > /var/lib/nginx/logs/$f.log; done; \
install -d -o netalertx -g www-data -m 777 /run/php; \
install -d -o netalertx -g www-data -m 775 /var/log/php; \
chown -R netalertx:www-data /etc/nginx/http.d; \
chmod -R 775 /etc/nginx/http.d; \
chown -R netalertx:www-data /var/lib/nginx; \
chmod -R 755 /var/lib/nginx && \
chown -R netalertx:www-data /var/log/nginx/ && \
sed -i '/^user /d' /etc/nginx/nginx.conf; \
sed -i 's|^error_log .*|error_log /dev/stderr warn;|' /etc/nginx/nginx.conf; \
sed -i 's|^access_log .*|access_log /dev/stdout main;|' /etc/nginx/nginx.conf; \
sed -i 's|error_log .*|error_log /dev/stderr warn;|g' /etc/nginx/http.d/*.conf 2>/dev/null || true; \
sed -i 's|access_log .*|access_log /dev/stdout main;|g' /etc/nginx/http.d/*.conf 2>/dev/null || true; \
mkdir -p /run/openrc; \
chown netalertx:nginx /run/openrc/; \
rm -Rf /run/openrc/*;
# setup pytest
RUN sudo /opt/venv/bin/python -m pip install -U pytest pytest-cov
RUN apk add --no-cache git nano vim jq php83-pecl-xdebug py3-pip nodejs sudo gpgconf pytest \
pytest-cov zsh alpine-zsh-config shfmt github-cli py3-yaml py3-docker-py docker-cli docker-cli-buildx \
docker-cli-compose shellcheck
WORKDIR /workspaces/NetAlertX
# Install hadolint (Dockerfile linter)
RUN curl -L https://github.com/hadolint/hadolint/releases/latest/download/hadolint-Linux-x86_64 -o /usr/local/bin/hadolint && \
chmod +x /usr/local/bin/hadolint
RUN install -d -o netalertx -g netalertx -m 755 /services/php/modules && \
cp -a /usr/lib/php83/modules/. /services/php/modules/ && \
echo "${NETALERTX_USER} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
ENV SHELL=/bin/zsh
ENTRYPOINT ["/bin/sh","-c","sleep infinity"]
RUN mkdir -p /workspaces && \
install -d -m 777 /data /data/config /data/db && \
install -d -m 777 /tmp/log /tmp/log/plugins /tmp/api /tmp/run /tmp/nginx && \
install -d -m 777 /tmp/nginx/active-config /tmp/nginx/client_body /tmp/nginx/config && \
install -d -m 777 /tmp/nginx/fastcgi /tmp/nginx/proxy /tmp/nginx/scgi /tmp/nginx/uwsgi && \
install -d -m 777 /tmp/run/tmp /tmp/run/logs && \
chmod 777 /workspaces && \
chown -R netalertx:netalertx /data && \
chmod 666 /data/config/app.conf /data/db/app.db && \
chmod 1777 /tmp && \
install -d -o root -g root -m 1777 /tmp/.X11-unix && \
mkdir -p /home/netalertx && \
chown netalertx:netalertx /home/netalertx && \
sed -i -e 's#/app:#/workspaces:#' /etc/passwd && \
find /opt/venv -type d -exec chmod o+rwx {} \;
USER netalertx
ENTRYPOINT ["/bin/sh","-c","sleep infinity"]

View File

@@ -0,0 +1,11 @@
zend_extension="/services/php/modules/xdebug.so"
extension_dir="/services/php/modules"
[xdebug]
xdebug.mode=develop,debug
xdebug.log=/app/log/xdebug.log
xdebug.log_level=7
xdebug.client_host=127.0.0.1
xdebug.client_port=9003
xdebug.start_with_request=yes
xdebug.discover_client_host=0

View File

@@ -0,0 +1 @@
-m debugpy --listen 0.0.0.0:5678

View File

@@ -0,0 +1,47 @@
# NetAlertX devcontainer zsh configuration
# Keep this lightweight and deterministic so shells behave consistently.
export PATH="$HOME/.local/bin:$PATH"
export EDITOR=vim
export SHELL=/bin/zsh
# Start inside the workspace if it exists
if [ -d "/workspaces/NetAlertX" ]; then
cd /workspaces/NetAlertX
fi
# Enable basic completion and prompt helpers
autoload -Uz compinit promptinit colors
colors
compinit -u
promptinit
# Friendly prompt with virtualenv awareness
setopt PROMPT_SUBST
_venv_segment() {
if [ -n "$VIRTUAL_ENV" ]; then
printf '(%s) ' "${VIRTUAL_ENV:t}"
fi
}
PROMPT='%F{green}$(_venv_segment)%f%F{cyan}%n@%m%f %F{yellow}%~%f %# '
RPROMPT='%F{magenta}$(git rev-parse --abbrev-ref HEAD 2>/dev/null)%f'
# Sensible defaults
setopt autocd
setopt correct
setopt extendedglob
HISTFILE="$HOME/.zsh_history"
HISTSIZE=5000
SAVEHIST=5000
alias ll='ls -alF'
alias la='ls -A'
alias gs='git status -sb'
alias gp='git pull --ff-only'
# Ensure pyenv/virtualenv activate hooks adjust the prompt cleanly
if [ -f "$HOME/.zshrc.local" ]; then
source "$HOME/.zshrc.local"
fi

View File

@@ -1,26 +0,0 @@
log_format netalertx '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log netalertx flush=1s;
error_log /var/log/nginx/error.log warn;
server {
listen 20211 default_server;
root /app/front;
index index.php;
add_header X-Forwarded-Prefix "/netalertx" always;
proxy_set_header X-Forwarded-Prefix "/netalertx";
location ~* \.php$ {
add_header Cache-Control "no-store";
fastcgi_pass 127.0.0.1:9000;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param PHP_VALUE "xdebug.remote_enable=1";
fastcgi_connect_timeout 75;
fastcgi_send_timeout 600;
fastcgi_read_timeout 600;
}
}

View File

@@ -0,0 +1,18 @@
#!/bin/bash
set -euo pipefail
if [[ -n "${CONFIRM_PRUNE:-}" && "${CONFIRM_PRUNE}" == "YES" ]]; then
reply="YES"
else
read -r -p "Are you sure you want to destroy your host docker containers and images? Type YES to continue: " reply
fi
if [[ "${reply}" == "YES" ]]; then
docker system prune -af
docker builder prune -af
else
echo "Aborted."
exit 1
fi
echo "Done."

View File

@@ -0,0 +1,34 @@
#!/bin/sh
# Generator for .devcontainer/Dockerfile
# Combines the root /Dockerfile (with some COPY lines removed) and
# the dev-only stage in .devcontainer/resources/devcontainer-Dockerfile.
# Run this script after modifying the resource Dockerfile to refresh
# the final .devcontainer/Dockerfile used by the devcontainer.
echo "Generating .devcontainer/Dockerfile"
SCRIPT_PATH=$(set -- "$0"; dirname -- "$1")
SCRIPT_DIR=$(cd "$SCRIPT_PATH" && pwd -P)
DEVCONTAINER_DIR="${SCRIPT_DIR%/scripts}"
ROOT_DIR="${DEVCONTAINER_DIR%/.devcontainer}"
OUT_FILE="${DEVCONTAINER_DIR}/Dockerfile"
echo "Adding base Dockerfile from $ROOT_DIR and merging to devcontainer-Dockerfile"
{
echo "# DO NOT MODIFY THIS FILE DIRECTLY. IT IS AUTO-GENERATED BY .devcontainer/scripts/generate-configs.sh"
echo ""
echo "# ---/Dockerfile---"
cat "${ROOT_DIR}/Dockerfile"
echo ""
echo "# ---/resources/devcontainer-Dockerfile---"
echo ""
cat "${DEVCONTAINER_DIR}/resources/devcontainer-Dockerfile"
} > "$OUT_FILE"
echo "Generated $OUT_FILE using root dir $ROOT_DIR"
echo "Done."

View File

@@ -1,38 +0,0 @@
#!/bin/sh
# Generator for .devcontainer/Dockerfile
# Combines the root /Dockerfile (with some COPY lines removed) and
# the dev-only stage in .devcontainer/resources/devcontainer-Dockerfile.
# Run this script after modifying the resource Dockerfile to refresh
# the final .devcontainer/Dockerfile used by the devcontainer.
# Make a copy of the original Dockerfile to the .devcontainer folder
# but remove the COPY . ${INSTALL_DIR}/ command from it. This avoids
# overwriting /app (which uses symlinks to the workspace) and preserves
# debugging capabilities inside the devcontainer.
SCRIPT_DIR="$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)"
DEVCONTAINER_DIR="${SCRIPT_DIR%/scripts}"
ROOT_DIR="${DEVCONTAINER_DIR%/.devcontainer}"
OUT_FILE="${DEVCONTAINER_DIR}/Dockerfile"
echo "# DO NOT MODIFY THIS FILE DIRECTLY. IT IS AUTO-GENERATED BY .devcontainer/scripts/generate-dockerfile.sh" > "$OUT_FILE"
echo "" >> "$OUT_FILE"
echo "# ---/Dockerfile---" >> "$OUT_FILE"
sed '/${INSTALL_DIR}/d' "${ROOT_DIR}/Dockerfile" >> "$OUT_FILE"
# sed the line https://github.com/foreign-sub/aiofreepybox.git \\ to remove trailing backslash
sed -i '/aiofreepybox.git/ s/ \\$//' "$OUT_FILE"
# don't cat the file, just copy it in because it doesn't exist at build time
sed -i 's|^ RUN cat ${INSTALL_DIR}/install/freebox_certificate.pem >> /opt/venv/lib/python3.12/site-packages/aiofreepybox/freebox_certificates.pem$| COPY install/freebox_certificate.pem /opt/venv/lib/python3.12/site-packages/aiofreepybox/freebox_certificates.pem |' "$OUT_FILE"
echo "" >> "$OUT_FILE"
echo "# ---/resources/devcontainer-Dockerfile---" >> "$OUT_FILE"
echo "" >> "$OUT_FILE"
cat "${DEVCONTAINER_DIR}/resources/devcontainer-Dockerfile" >> "$OUT_FILE"
echo "Generated $OUT_FILE using root dir $ROOT_DIR" >&2

View File

@@ -0,0 +1,8 @@
#!/bin/bash
if [ ! -d /workspaces/NetAlertX/.devcontainer ]; then
echo ---------------------------------------------------
echo "This script may only be run inside a devcontainer."
echo "Not in a devcontainer, exiting..."
echo ---------------------------------------------------
exit 255
fi

View File

@@ -1,26 +0,0 @@
#!/bin/sh
# Start (or restart) the NetAlertX Python backend under debugpy in background.
# This script is invoked by the VS Code task "Restart GraphQL".
# It exists to avoid complex inline command chains that were being mangled by the task runner.
set -e
LOG_DIR=/app/log
APP_DIR=/app/server
PY=python3
PORT_DEBUG=5678
# Kill any prior debug/run instances
sudo killall python3 2>/dev/null || true
sleep 2
echo ''|tee $LOG_DIR/stdout.log $LOG_DIR/stderr.log $LOG_DIR/app.log
cd "$APP_DIR"
# Launch using absolute module path for clarity; rely on cwd for local imports
setsid nohup "${PY}" -m debugpy --listen "0.0.0.0:${PORT_DEBUG}" /app/server/__main__.py \
1>>"$LOG_DIR/stdout.log" \
2>>"$LOG_DIR/stderr.log" &
PID=$!
sleep 2

View File

@@ -1,200 +1,104 @@
#! /bin/bash
# Runtime setup for devcontainer (executed after container starts).
# Prefer building setup into resources/devcontainer-Dockerfile when possible.
# Use this script for runtime-only adjustments (permissions, sockets, ownership,
# and services managed without init) that are difficult at build time.
id
# Define variables (paths, ports, environment)
export APP_DIR="/app"
export APP_COMMAND="/workspaces/NetAlertX/.devcontainer/scripts/restart-backend.sh"
export PHP_FPM_BIN="/usr/sbin/php-fpm83"
export NGINX_BIN="/usr/sbin/nginx"
export CROND_BIN="/usr/sbin/crond -f"
export ALWAYS_FRESH_INSTALL=false
export INSTALL_DIR=/app
export APP_DATA_LOCATION=/app/config
export APP_CONFIG_LOCATION=/app/config
export LOGS_LOCATION=/app/logs
export CONF_FILE="app.conf"
export NGINX_CONF_FILE=netalertx.conf
export DB_FILE="app.db"
export FULL_FILEDB_PATH="${INSTALL_DIR}/db/${DB_FILE}"
export NGINX_CONFIG_FILE="/etc/nginx/http.d/${NGINX_CONF_FILE}"
export OUI_FILE="/usr/share/arp-scan/ieee-oui.txt" # Define the path to ieee-oui.txt and ieee-iab.txt
export TZ=Europe/Paris
export PORT=20211
export SOURCE_DIR="/workspaces/NetAlertX"
main() {
echo "=== NetAlertX Development Container Setup ==="
echo "Setting up ${SOURCE_DIR}..."
configure_source
echo "--- Starting Development Services ---"
configure_php
start_services
}
# safe_link: create a symlink from source to target, removing existing target if necessary
# bypassing the default behavior of symlinking the directory into the target directory if it is a directory
safe_link() {
# usage: safe_link <source> <target>
local src="$1"
local dst="$2"
# Ensure parent directory exists
install -d -m 775 "$(dirname "$dst")" >/dev/null 2>&1 || true
# If target exists, remove it without dereferencing symlinks
if [ -L "$dst" ] || [ -e "$dst" ]; then
rm -rf "$dst"
fi
# Create link; -n prevents deref, -f replaces if somehow still exists
ln -sfn "$src" "$dst"
}
# Setup source directory
configure_source() {
echo "[1/3] Configuring Source..."
echo " -> Linking source to ${INSTALL_DIR}"
echo "Dev">${INSTALL_DIR}/.VERSION
echo " -> Mounting ramdisks for /log and /api"
sudo mount -t tmpfs -o size=256M tmpfs "${SOURCE_DIR}/log"
sudo mount -t tmpfs -o size=512M tmpfs "${SOURCE_DIR}/api"
safe_link ${SOURCE_DIR}/api ${INSTALL_DIR}/api
safe_link ${SOURCE_DIR}/back ${INSTALL_DIR}/back
safe_link "${SOURCE_DIR}/config" "${INSTALL_DIR}/config"
safe_link "${SOURCE_DIR}/db" "${INSTALL_DIR}/db"
if [ ! -f "${SOURCE_DIR}/config/app.conf" ]; then
cp ${SOURCE_DIR}/back/app.conf ${INSTALL_DIR}/config/
cp ${SOURCE_DIR}/back/app.db ${INSTALL_DIR}/db/
fi
safe_link "${SOURCE_DIR}/docs" "${INSTALL_DIR}/docs"
safe_link "${SOURCE_DIR}/front" "${INSTALL_DIR}/front"
safe_link "${SOURCE_DIR}/install" "${INSTALL_DIR}/install"
safe_link "${SOURCE_DIR}/scripts" "${INSTALL_DIR}/scripts"
safe_link "${SOURCE_DIR}/server" "${INSTALL_DIR}/server"
safe_link "${SOURCE_DIR}/test" "${INSTALL_DIR}/test"
safe_link "${SOURCE_DIR}/log" "${INSTALL_DIR}/log"
safe_link "${SOURCE_DIR}/mkdocs.yml" "${INSTALL_DIR}/mkdocs.yml"
echo " -> Copying static files to ${INSTALL_DIR}"
cp -R ${SOURCE_DIR}/CODE_OF_CONDUCT.md ${INSTALL_DIR}/
cp -R ${SOURCE_DIR}/dockerfiles ${INSTALL_DIR}/dockerfiles
sudo cp -na "${INSTALL_DIR}/back/${CONF_FILE}" "${INSTALL_DIR}/config/${CONF_FILE}"
sudo cp -na "${INSTALL_DIR}/back/${DB_FILE}" "${FULL_FILEDB_PATH}"
if [ -e "${INSTALL_DIR}/api/user_notifications.json" ]; then
echo " -> Removing existing user_notifications.json"
sudo rm "${INSTALL_DIR}"/api/user_notifications.json
fi
echo " -> Setting ownership and permissions"
sudo find ${INSTALL_DIR}/ -type d -exec chmod 775 {} \;
sudo find ${INSTALL_DIR}/ -type f -exec chmod 664 {} \;
sudo date +%s > "${INSTALL_DIR}/front/buildtimestamp.txt"
sudo chmod 640 "${INSTALL_DIR}/config/${CONF_FILE}" || true
echo " -> Setting up log directory"
install -d -o netalertx -g www-data -m 777 ${INSTALL_DIR}/log/plugins
echo " -> Empty log"|tee ${INSTALL_DIR}/log/app.log \
${INSTALL_DIR}/log/app_front.log \
${INSTALL_DIR}/log/stdout.log
touch ${INSTALL_DIR}/log/stderr.log \
${INSTALL_DIR}/log/execution_queue.log
echo 0>${INSTALL_DIR}/log/db_is_locked.log
date +%s > /app/front/buildtimestamp.txt
killall python &>/dev/null
sleep 1
}
#!/bin/bash
# NetAlertX Devcontainer Setup Script
#
# This script forcefully resets all runtime state for a single-user devcontainer.
# It is intentionally idempotent: every run wipes and recreates all relevant folders,
# symlinks, and files, so the environment is always fresh and predictable.
#
# - No conditional logic: everything is (re)created, overwritten, or reset unconditionally.
# - No security hardening: this is for disposable, local dev use only.
# - No checks for existing files, mounts, or processes—just do the work.
#
# If you add new runtime files or folders, add them to the creation/reset section below.
#
# Do not add if-then logic or error handling for missing/existing files. Simplicity is the goal.
# start_services: start crond, PHP-FPM, nginx and the application
start_services() {
echo "[3/3] Starting services..."
killall nohup &>/dev/null || true
SOURCE_DIR=${SOURCE_DIR:-/workspaces/NetAlertX}
PY_SITE_PACKAGES="${VIRTUAL_ENV:-/opt/venv}/lib/python3.12/site-packages"
killall php-fpm83 &>/dev/null || true
killall crond &>/dev/null || true
# Give the OS a moment to release the php-fpm socket
sleep 0.3
echo " -> Starting CronD"
setsid nohup $CROND_BIN &>/dev/null &
LOG_FILES=(
LOG_APP
LOG_APP_FRONT
LOG_STDOUT
LOG_STDERR
LOG_EXECUTION_QUEUE
LOG_APP_PHP_ERRORS
LOG_IP_CHANGES
LOG_CRON
LOG_REPORT_OUTPUT_TXT
LOG_REPORT_OUTPUT_HTML
LOG_REPORT_OUTPUT_JSON
LOG_DB_IS_LOCKED
LOG_NGINX_ERROR
)
echo " -> Starting PHP-FPM"
setsid nohup $PHP_FPM_BIN &>/dev/null &
sudo chmod 666 /var/run/docker.sock 2>/dev/null || true
sudo chown "$(id -u)":"$(id -g)" /workspaces
sudo chmod 755 /workspaces
sudo killall nginx &>/dev/null || true
# Wait for the previous nginx processes to exit and for the port to free up
tries=0
while ss -ltn | grep -q ":${PORT}[[:space:]]" && [ $tries -lt 10 ]; do
echo " -> Waiting for port ${PORT} to free..."
sleep 0.2
tries=$((tries+1))
done
sleep 0.2
echo " -> Starting Nginx"
setsid nohup $NGINX_BIN &>/dev/null &
echo " -> Starting Backend ${APP_DIR}/server..."
$APP_COMMAND
sleep 2
}
killall php-fpm83 nginx crond python3 2>/dev/null || true
# configure_php: configure PHP-FPM and enable dev debug options
configure_php() {
echo "[2/3] Configuring PHP-FPM..."
sudo killall php-fpm83 &>/dev/null || true
install -d -o nginx -g www-data /run/php/ &>/dev/null
sudo sed -i "/^;pid/c\pid = /run/php/php8.3-fpm.pid" /etc/php83/php-fpm.conf
sudo sed -i 's|^listen = .*|listen = 127.0.0.1:9000|' /etc/php83/php-fpm.d/www.conf
sudo sed -i 's|fastcgi_pass .*|fastcgi_pass 127.0.0.1:9000;|' /etc/nginx/http.d/*.conf
# Mount ramdisks for volatile data
sudo mount -t tmpfs -o size=100m,mode=0777 tmpfs /tmp/log 2>/dev/null || true
sudo mount -t tmpfs -o size=50m,mode=0777 tmpfs /tmp/api 2>/dev/null || true
sudo mount -t tmpfs -o size=50m,mode=0777 tmpfs /tmp/run 2>/dev/null || true
sudo mount -t tmpfs -o size=50m,mode=0777 tmpfs /tmp/nginx 2>/dev/null || true
#increase max child process count to 10
sudo sed -i -e 's/pm.max_children = 5/pm.max_children = 10/' /etc/php83/php-fpm.d/www.conf
# find any line in php-fmp that starts with either ;error_log or error_log = and replace it with error_log = /app/log/app.php_errors.log
sudo sed -i '/^;*error_log\s*=/c\error_log = /app/log/app.php_errors.log' /etc/php83/php-fpm.conf
# If the line was not found, append it to the end of the file
if ! grep -q '^error_log\s*=' /etc/php83/php-fpm.conf; then
echo 'error_log = /app/log/app.php_errors.log' | sudo tee -a /etc/php83/php-fpm.conf
fi
sudo mkdir -p /etc/php83/conf.d
sudo cp /workspaces/NetAlertX/.devcontainer/resources/99-xdebug.ini /etc/php83/conf.d/99-xdebug.ini
sudo rm -R /var/log/php83 &>/dev/null || true
install -d -o netalertx -g www-data -m 755 var/log/php83;
sudo chmod 644 /etc/php83/conf.d/99-xdebug.ini || true
}
# (duplicate start_services removed)
sudo chmod 777 /tmp/log /tmp/api /tmp/run /tmp/nginx
echo "$(git rev-parse --short=8 HEAD)">/app/.VERSION
# Run the main function
main
sudo rm -rf /entrypoint.d
sudo ln -s "${SOURCE_DIR}/install/production-filesystem/entrypoint.d" /entrypoint.d
sudo rm -rf "${NETALERTX_APP}"
sudo ln -s "${SOURCE_DIR}/" "${NETALERTX_APP}"
for dir in "${NETALERTX_DATA}" "${NETALERTX_CONFIG}" "${NETALERTX_DB}"; do
sudo install -d -m 777 "${dir}"
done
for dir in \
"${SYSTEM_SERVICES_RUN_LOG}" \
"${SYSTEM_SERVICES_ACTIVE_CONFIG}" \
"${NETALERTX_PLUGINS_LOG}" \
"${SYSTEM_SERVICES_RUN_TMP}" \
"/tmp/nginx/client_body" \
"/tmp/nginx/proxy" \
"/tmp/nginx/fastcgi" \
"/tmp/nginx/uwsgi" \
"/tmp/nginx/scgi"; do
sudo install -d -m 777 "${dir}"
done
for var in "${LOG_FILES[@]}"; do
path=${!var}
dir=$(dirname "${path}")
sudo install -d -m 777 "${dir}"
touch "${path}"
done
printf '0\n' | sudo tee "${LOG_DB_IS_LOCKED}" >/dev/null
sudo chmod 777 "${LOG_DB_IS_LOCKED}"
sudo pkill -f python3 2>/dev/null || true
sudo chmod 777 "${PY_SITE_PACKAGES}" "${NETALERTX_DATA}" "${NETALERTX_DATA}"/* 2>/dev/null || true
sudo chmod 005 "${PY_SITE_PACKAGES}" 2>/dev/null || true
sudo chown -R "${NETALERTX_USER}:${NETALERTX_GROUP}" "${NETALERTX_APP}"
date +%s | sudo tee "${NETALERTX_FRONT}/buildtimestamp.txt" >/dev/null
sudo chmod 755 "${NETALERTX_APP}"
sudo chmod +x /entrypoint.sh
setsid bash /entrypoint.sh &
sleep 1
echo "Development $(git rev-parse --short=8 HEAD)" | sudo tee "${NETALERTX_APP}/.VERSION" >/dev/null

View File

@@ -1,40 +0,0 @@
#!/bin/sh
# Stream NetAlertX logs to stdout so the Dev Containers output channel shows them.
# This script waits briefly for the files to appear and then tails them with -F.
LOG_FILES="/app/log/app.log /app/log/db_is_locked.log /app/log/execution_queue.log /app/log/app_front.log /app/log/app.php_errors.log /app/log/IP_changes.log /app/stderr.log /app/stdout.log"
wait_for_files() {
# Wait up to ~10s for at least one of the files to exist
attempts=0
while [ $attempts -lt 20 ]; do
for f in $LOG_FILES; do
if [ -f "$f" ]; then
return 0
fi
done
attempts=$((attempts+1))
sleep 0.5
done
return 1
}
if wait_for_files; then
echo "Starting log stream for:"
for f in $LOG_FILES; do
[ -f "$f" ] && echo " $f"
done
# Use tail -F where available. If tail -F isn't supported, tail -f is used as fallback.
# Some minimal images may have busybox tail without -F; this handles both.
if tail --version >/dev/null 2>&1; then
# GNU tail supports -F
tail -n +1 -F $LOG_FILES
else
# Fallback to -f for busybox; will exit if files rotate or do not exist initially
tail -n +1 -f $LOG_FILES
fi
else
echo "No log files appeared after wait; exiting stream script."
exit 0
fi

View File

@@ -1,11 +0,0 @@
zend_extension=xdebug.so
xdebug.mode=debug
xdebug.start_with_request=trigger
xdebug.trigger_value=VSCODE
xdebug.client_host=host.docker.internal
xdebug.client_port=9003
xdebug.log=/var/log/xdebug.log
xdebug.log_level=7
xdebug.idekey=VSCODE
xdebug.discover_client_host=true
xdebug.max_nesting_level=512

View File

@@ -1,8 +1,8 @@
.dockerignore
**/.dockerignore
.env
.git
.github
.gitignore
docker-compose.yml
Dockerfile
Dockerfile.debian

3
.flake8 Normal file
View File

@@ -0,0 +1,3 @@
[flake8]
max-line-length = 180
ignore = E221,E222,E251,E203

View File

@@ -44,9 +44,9 @@ body:
required: false
- type: textarea
attributes:
label: app.conf
label: Relevant `app.conf` settings
description: |
Paste your `app.conf` (remove personal info)
Paste relevant `app.conf` settings (remove sensitive info)
render: python
validations:
required: false
@@ -55,7 +55,7 @@ body:
label: docker-compose.yml
description: |
Paste your `docker-compose.yml`
render: python
render: yaml
validations:
required: false
- type: dropdown
@@ -70,21 +70,37 @@ body:
- Bare-metal (community only support - Check Discord)
validations:
required: true
- type: checkboxes
attributes:
label: Debug or Trace enabled
description: I confirm I set `LOG_LEVEL` to `debug` or `trace`
options:
- label: I have read and followed the steps in the wiki link above and provided the required debug logs and the log section covers the time when the issue occurs.
required: true
- type: textarea
attributes:
label: app.log
label: Relevant `app.log` section
value: |
```
PASTE LOG HERE. Using the triple backticks preserves format.
```
description: |
Logs with debug enabled (https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEBUG_TIPS.md) ⚠
***Generally speaking, all bug reports should have logs provided.***
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
Also, any other info? Screenshots? References? Anything that will give us more context about the issue you are encountering!
You can use `tail -100 /app/log/app.log` in the container if you have trouble getting to the log files.
You can use `tail -100 /app/log/app.log` in the container if you have trouble getting to the log files or send them to netalertx@gmail.com with the issue number.
validations:
required: false
- type: checkboxes
- type: textarea
attributes:
label: Debug enabled
description: I confirm I enabled `debug`
options:
- label: I have read and followed the steps in the wiki link above and provided the required debug logs and the log section covers the time when the issue occurs.
required: true
label: Docker Logs
description: |
You can retrieve the logs from Portainer -> Containers -> your NetAlertX container -> Logs or by running `sudo docker logs netalertx`.
value: |
```
PASTE DOCKER LOG HERE. Using the triple backticks preserves format.
```
validations:
required: true

View File

@@ -1,6 +1,7 @@
This is NetAlertX — network monitoring & alerting.
# NetAlertX AI Assistant Instructions
This is NetAlertX — network monitoring & alerting. NetAlertX provides network inventory, awareness, insight, categorization, intruder and presence detection. This is a heavily community-driven project, welcoming of all contributions.
Purpose: Guide AI assistants to follow NetAlertX architecture, conventions, and safety practices. Be concise, opinionated, and prefer existing helpers/settings over new code or hardcoded values.
You are expected to be concise, opinionated, and biased toward security and simplicity.
## Architecture (what runs where)
- Backend (Python): main loop + GraphQL/REST endpoints orchestrate scans, plugins, workflows, notifications, and JSON export.
@@ -17,7 +18,7 @@ Backend loop phases (see `server/__main__.py` and `server/plugin.py`): `once`, `
## Plugin patterns that matter
- Manifest lives at `front/plugins/<code_name>/config.json`; `code_name` == folder, `unique_prefix` drives settings and filenames (e.g., `ARPSCAN`).
- Control via settings: `<PREF>_RUN` (phase), `<PREF>_RUN_SCHD` (cron-like), `<PREF>_CMD` (script path), `<PREF>_RUN_TIMEOUT`, `<PREF>_WATCH` (diff columns).
- Data contract: scripts write `/app/log/plugins/last_result.<PREF>.log` (pipe-delimited: 9 required cols + optional 4). Use `front/plugins/plugin_helper.py`'s `Plugin_Objects` to sanitize text and normalize MACs, then `write_result_file()`.
- Data contract: scripts write `/tmp/log/plugins/last_result.<PREF>.log` (pipe-delimited: 9 required cols + optional 4). Use `front/plugins/plugin_helper.py`'s `Plugin_Objects` to sanitize text and normalize MACs, then `write_result_file()`.
- Device import: define `database_column_definitions` when creating/updating devices; watched fields trigger notifications.
### Standard Plugin Formats
@@ -29,9 +30,10 @@ Backend loop phases (see `server/__main__.py` and `server/plugin.py`): `once`, `
* other: Miscellaneous plugins. Runs at various times. Data source: self / Template.
### Plugin logging & outputs
- Always log via `mylog()` like other plugins do (no `print()`). Example: `mylog('verbose', [f'[{pluginName}] In script'])`.
- Always check relevant logs first.
- Use logging as shown in other plugins.
- Collect results with `Plugin_Objects.add_object(...)` during processing and call `plugin_objects.write_result_file()` exactly once at the end of the script.
- Prefer to log a brief summary before writing (e.g., total objects added) to aid troubleshooting; keep logs concise at `verbose` level unless debugging.
- Prefer to log a brief summary before writing (e.g., total objects added) to aid troubleshooting; keep logs concise at `info` level and use `verbose` or `debug` for extra context.
- Do not write ad hoc files for results; the only consumable output is `last_result.<PREF>.log` generated by `Plugin_Objects`.
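A compressed, hypothetical plugin skeleton tying the contract and logging rules above together; the `Plugin_Objects` constructor and `add_object()` keyword names are assumptions, so treat `front/plugins/__template` as the authoritative reference:

```python
# Hypothetical plugin skeleton only; signatures are assumed from the docs above,
# and the __template plugin remains the real reference implementation.
import os
import sys

sys.path.extend(["/app/front/plugins", "/app/server"])  # mirror the template's path setup

from plugin_helper import Plugin_Objects  # assumed import, as used by real plugins
from logger import mylog                  # assumed import, as used by real plugins

pluginName = "EXAMPLE"  # hypothetical unique_prefix

def main():
    mylog('verbose', [f'[{pluginName}] In script'])

    result_file = os.path.join('/tmp/log/plugins', f'last_result.{pluginName}.log')
    plugin_objects = Plugin_Objects(result_file)

    # Collect results while processing; the argument names below are assumptions.
    plugin_objects.add_object(
        primaryId='00:11:22:33:44:55',   # e.g. a normalized MAC
        secondaryId='192.168.1.10',
        watched1='example-device',
        watched2='', watched3='', watched4='',
        extra='',
        foreignKey='00:11:22:33:44:55',
    )

    mylog('verbose', [f'[{pluginName}] Finished, writing results'])
    # Exactly one write at the end of the script, per the rules above.
    plugin_objects.write_result_file()

if __name__ == '__main__':
    main()
```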
## API/Endpoints quick map
@@ -41,22 +43,49 @@ Backend loop phases (see `server/__main__.py` and `server/plugin.py`): `once`, `
## Conventions & helpers to reuse
- Settings: add/modify via `ccd()` in `server/initialise.py` or per-plugin manifest. Never hardcode ports or secrets; use `get_setting_value()`.
- Logging: use `logger.mylog(level, [message])`; levels: none/minimal/verbose/debug/trace.
- Time/MAC/strings: `helper.py` (`timeNowTZ`, `normalize_mac`, sanitizers). Validate MACs before DB writes.
- Time/MAC/strings: `helper.py` (`timeNowDB`, `normalize_mac`, sanitizers). Validate MACs before DB writes.
- DB helpers: prefer `server/db/db_helper.py` functions (e.g., `get_table_json`, device condition helpers) over raw SQL in new paths.
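A small, hypothetical snippet showing those conventions together; import paths, exact signatures, and the `EXAMPLE_RUN_TIMEOUT` key are assumptions, so check `server/helper.py` and the logger module before copying:

```python
# Hypothetical usage sketch of the helpers listed above; import locations and
# return values are assumptions, not verified against the current codebase.
from helper import timeNowDB, normalize_mac, get_setting_value  # assumed locations
from logger import mylog

def example_usage(raw_mac: str) -> str:
    timeout = get_setting_value('EXAMPLE_RUN_TIMEOUT')  # read settings, never hardcode
    mac = normalize_mac(raw_mac)                        # normalize/validate before DB writes
    mylog('debug', [f'[example] {mac} checked at {timeNowDB()} (timeout={timeout})'])
    return mac
```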
## Dev workflow (devcontainer)
- **Devcontainer philosophy: brutal simplicity.** One user, everything writable, completely idempotent. No permission checks, no conditional logic, no sudo needed. If something doesn't work, tear down the wall and rebuild - don't patch. We unit test permissions in the hardened build.
- **Permissions:** Never `chmod` or `chown` during operations. Everything is already writable. If you need permissions, the devcontainer setup is broken - fix `.devcontainer/scripts/setup.sh` or `.devcontainer/resources/devcontainer-Dockerfile` instead.
- **Files & Paths:** Use environment variables (`NETALERTX_DB`, `NETALERTX_LOG`, etc.) everywhere. `/data` for persistent config/db, `/tmp` for runtime logs/api/nginx state. Never hardcode `/data/db` or relative paths.
- **Database reset:** Use the `[Dev Container] Wipe and Regenerate Database` task. Kills backend, deletes `/data/{db,config}/*`, runs first-time setup scripts. Clean slate, no questions.
- Services: use tasks to (re)start backend and nginx/PHP-FPM. Backend runs with debugpy on 5678; attach a Python debugger if needed.
- Run a plugin manually: `python3 front/plugins/<code_name>/script.py` (ensure `sys.path` includes `/app/front/plugins` and `/app/server` like the template).
- Testing: pytest is available via Alpine packages. Tests live in `test/`; app code is under `server/`. PYTHONPATH is preconfigured to include the workspace and the `/opt/venv` site-packages.
- **Subprocess calls:** ALWAYS set explicit timeouts. Default to 60s minimum unless plugin config specifies otherwise. Nested subprocess calls (e.g., plugins calling external tools) need their own timeout - outer plugin timeout won't save you.
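A minimal sketch of the timeout rule above, using only the standard library; the command and setting name are illustrative:

```python
# Explicit timeout on every external call; 60s floor unless the plugin's
# <PREF>_RUN_TIMEOUT setting says otherwise.
import subprocess

timeout_sec = 60  # or int(get_setting_value('MYPLUGIN_RUN_TIMEOUT')) -- hypothetical setting

try:
    result = subprocess.run(
        ["nmap", "-sn", "192.168.1.0/24"],   # illustrative external tool invocation
        capture_output=True,
        text=True,
        timeout=timeout_sec,
        check=False,
    )
except subprocess.TimeoutExpired:
    # Nested calls need their own timeout; the outer plugin timeout won't save you.
    result = None
```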
## What “done right” looks like
- When adding a plugin, start from `front/plugins/__template`, implement with `plugin_helper`, define manifest settings, and wire phase via `<PREF>_RUN`. Verify logs in `/app/log/plugins/` and data in `api/*.json`.
- When adding a plugin, start from `front/plugins/__template`, implement with `plugin_helper`, define manifest settings, and wire phase via `<PREF>_RUN`. Verify logs in `/tmp/log/plugins/` and data in `api/*.json`.
- When introducing new config, define it once (core `ccd()` or plugin manifest) and read it via helpers everywhere.
- When exposing new server functionality, add endpoints in `server/api_server/*` and keep authorization consistent; update UI by reading/writing JSON cache rather than bypassing the pipeline.
## Useful references
- Docs: `docs/PLUGINS_DEV.md`, `docs/SETTINGS_SYSTEM.md`, `docs/API_*.md`, `docs/DEBUG_*.md`
- Logs: backend `/app/log/app.log`, plugin logs under `/app/log/plugins/`, nginx/php logs under `/var/log/*`
- Logs: all logs are under `/tmp/log/`. Plugin result files sit only briefly under `/tmp/log/plugins/` until the server picks them up.
- plugin log output: `/tmp/log/app.log`
- backend logs: `/tmp/log/stdout.log` and `/tmp/log/stderr.log`
- frontend commands logs: `/tmp/log/app_front.log`
- php errors: `/tmp/log/app.php_errors.log`
- nginx logs: `/tmp/log/nginx-access.log` and `/tmp/log/nginx-error.log`
## Assistant expectations:
- Be concise, opinionated, and biased toward security and simplicity.
- Reference concrete files/paths/environmental variables.
- Use existing helpers/settings.
- Offer a quick validation step (log line, API hit, or JSON export) for anything you add.
- Be blunt about risks; when you offer suggestions, make them equally direct.
- Ask for confirmation before making changes that run code or change multiple files.
- Make statements actionable and specific; propose exact edits.
- Request confirmation before applying changes that affect more than a single, clearly scoped line or file.
- If you're unsure, ask the user to run a quick debug step that yields an actionable value.
- Be sure to offer choices when appropriate.
- Always understand the intent of the user's request and undo/redo as needed.
- Above all, use the simplest possible code that meets the need so it can be easily audited and maintained.
- Always leave logging enabled. If there is a possibility it will be difficult to debug with the current logging, add more logging.
- Always run the testFailure tool before executing any tests to gather current failure information and avoid redundant runs.
- Always prioritize using the appropriate tools in the environment first. As an example if a test is failing use `testFailure` then `runTests`. Never `runTests` first.
- Docker tests take an extremely long time to run. Avoid changes to Docker or tests until you've examined the existing testFailure and runTests results.
- Environment tools are designed specifically for your use in this project and running them in this order will give you the best results.
Assistant expectations
- Reference concrete files/paths. Use existing helpers/settings. Keep changes idempotent and safe. Offer a quick validation step (log line, API hit, or JSON export) for anything you add.

View File

@@ -21,7 +21,8 @@ jobs:
run: |
echo "🔍 Checking for incorrect absolute '/php/' URLs (should be 'php/' or './php/')..."
MATCHES=$(grep -rE "['\"]\/php\/" --include=\*.{js,php,html} ./front | grep -E "\.get|\.post|\.ajax|fetch|url\s*:") || true
MATCHES=$(grep -rE "['\"]/php/" --include=\*.{js,php,html} ./front \
| grep -E "\.get|\.post|\.ajax|fetch|url\s*:") || true
if [ -n "$MATCHES" ]; then
echo "$MATCHES"
@@ -39,3 +40,60 @@ jobs:
echo "🔍 Checking Python syntax..."
find . -name "*.py" -print0 | xargs -0 -n1 python3 -m py_compile
lint:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Install linting tools
run: |
# Python linting
pip install flake8
# Docker linting
wget -O /tmp/hadolint https://github.com/hadolint/hadolint/releases/latest/download/hadolint-Linux-x86_64
chmod +x /tmp/hadolint
# PHP and shellcheck for syntax checking
sudo apt-get update && sudo apt-get install -y php-cli shellcheck
- name: Shell check
continue-on-error: true
run: |
echo "🔍 Checking shell scripts..."
find . -name "*.sh" -exec shellcheck {} \;
- name: Python lint
continue-on-error: true
run: |
echo "🔍 Linting Python code..."
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: PHP check
continue-on-error: true
run: |
echo "🔍 Checking PHP syntax..."
find . -name "*.php" -exec php -l {} \;
- name: Docker lint
continue-on-error: true
run: |
echo "🔍 Linting Dockerfiles..."
/tmp/hadolint --config .hadolint.yaml Dockerfile* || true
docker-tests:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run Docker-based tests
run: |
echo "🐳 Running Docker-based tests..."
chmod +x ./test/docker_tests/run_docker_tests.sh
./test/docker_tests/run_docker_tests.sh

View File

@@ -10,7 +10,7 @@ on:
branches:
- main
jobs:
jobs:
docker_dev:
runs-on: ubuntu-latest
timeout-minutes: 30
@@ -19,7 +19,8 @@ jobs:
packages: write
if: >
contains(github.event.head_commit.message, 'PUSHPROD') != 'True' &&
github.repository == 'jokob-sk/NetAlertX'
github.repository == 'jokob-sk/NetAlertX'
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -30,26 +31,36 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
# --- Generate timestamped dev version
- name: Generate timestamp version
id: timestamp
run: |
ts=$(date -u +'%Y%m%d-%H%M%S')
echo "version=dev-${ts}" >> $GITHUB_OUTPUT
echo "Generated version: dev-${ts}"
- name: Set up dynamic build ARGs
id: getargs
id: getargs
run: echo "version=$(cat ./stable/VERSION)" >> $GITHUB_OUTPUT
- name: Get release version
id: get_version
run: echo "version=Dev" >> $GITHUB_OUTPUT
# --- Write the timestamped version to .VERSION file
- name: Create .VERSION file
run: echo "${{ steps.get_version.outputs.version }}" >> .VERSION
run: echo "${{ steps.timestamp.outputs.version }}" > .VERSION
- name: Docker meta
id: meta
uses: docker/metadata-action@v4
uses: docker/metadata-action@v5
with:
images: |
ghcr.io/jokob-sk/netalertx-dev
jokobsk/netalertx-dev
tags: |
type=raw,value=latest
type=raw,value=${{ steps.timestamp.outputs.version }}
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
@@ -72,7 +83,7 @@ jobs:
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v3
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6

View File

@@ -6,7 +6,6 @@
# GitHub recommends pinning actions to a commit SHA.
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.
name: Publish Docker image
on:
@@ -14,6 +13,7 @@ on:
types: [published]
tags:
- '*.[1-9]+[0-9]?.[1-9]+*'
jobs:
docker:
runs-on: ubuntu-latest
@@ -21,6 +21,7 @@ jobs:
permissions:
contents: read
packages: write
steps:
- name: Checkout
uses: actions/checkout@v3
@@ -31,42 +32,39 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Set up dynamic build ARGs
id: getargs
run: echo "version=$(cat ./stable/VERSION)" >> $GITHUB_OUTPUT
# --- Get release version from tag
- name: Get release version
id: get_version
run: echo "::set-output name=version::${GITHUB_REF#refs/tags/}"
run: echo "version=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
# --- Write version to .VERSION file
- name: Create .VERSION file
run: echo "${{ steps.get_version.outputs.version }}" >> .VERSION
run: echo "${{ steps.get_version.outputs.version }}" > .VERSION
# --- Generate Docker metadata and tags
- name: Docker meta
id: meta
uses: docker/metadata-action@v4
uses: docker/metadata-action@v5
with:
# list of Docker images to use as base name for tags
images: |
ghcr.io/jokob-sk/netalertx
jokobsk/netalertx
# generate Docker tags based on the following events/attributes
jokobsk/netalertx
tags: |
type=semver,pattern={{version}},value=${{ inputs.version }}
type=semver,pattern={{major}}.{{minor}},value=${{ inputs.version }}
type=semver,pattern={{major}},value=${{ inputs.version }}
type=semver,pattern={{version}},value=${{ steps.get_version.outputs.version }}
type=semver,pattern={{major}}.{{minor}},value=${{ steps.get_version.outputs.version }}
type=semver,pattern={{major}},value=${{ steps.get_version.outputs.version }}
type=ref,event=branch,suffix=-{{ sha }}
type=ref,event=pr
type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/') }}
- name: Log in to Github Container registry
- name: Log in to Github Container Registry (GHCR)
uses: docker/login-action@v3
with:
registry: ghcr.io
username: jokob-sk
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to DockerHub
- name: Log in to DockerHub
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
with:
@@ -74,13 +72,12 @@ jobs:
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v3
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
# # ⚠ disable cache if build is failing to download debian packages
# cache-from: type=registry,ref=ghcr.io/jokob-sk/netalertx:buildcache
# cache-to: type=registry,ref=ghcr.io/jokob-sk/netalertx:buildcache,mode=max

View File

@@ -43,7 +43,7 @@ jobs:
- name: Docker meta
id: meta
uses: docker/metadata-action@v4
uses: docker/metadata-action@v5
with:
images: |
ghcr.io/jokob-sk/netalertx-dev-rewrite

11
.gitignore vendored
View File

@@ -1,6 +1,16 @@
.coverage
.vscode
.dotnet
.vscode-server
.gitconfig
.*CommandMarker
deviceid
.DS_Store
.cache
nohup.out
config/*
.ash_history
.VERSION
config/pialert.conf
config/app.conf
db/*
@@ -8,6 +18,7 @@ db/pialert.db
db/app.db
front/log/*
/log/*
/log/plugins/*
front/api/*
/api/*
**/plugins/**/*.log

2
.hadolint.yaml Normal file
View File

@@ -0,0 +1,2 @@
ignored:
- DL3018

8
.vscode/launch.json vendored
View File

@@ -29,6 +29,14 @@
"pathMappings": {
"/app": "${workspaceFolder}"
}
},
{
"name": "Python: Current File",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"justMyCode": true
}
]
}

20
.vscode/settings.json vendored
View File

@@ -10,4 +10,24 @@
"python.defaultInterpreterPath": "/opt/venv/bin/python",
// Let the Python extension invoke pytest via the interpreter; avoid hardcoded paths
// Removed python.testing.pytestPath and legacy pytest.command overrides
"terminal.integrated.defaultProfile.linux": "zsh",
"terminal.integrated.profiles.linux": {
"zsh": {
"path": "/bin/zsh"
}
}
,
// Fallback for older VS Code versions or schema validators that don't accept custom profiles
"terminal.integrated.shell.linux": "/usr/bin/zsh"
,
"python.linting.flake8Enabled": true,
"python.linting.enabled": true,
"python.linting.flake8Args": [
"--config=.flake8"
],
"python.formatting.provider": "black",
"python.formatting.blackArgs": [
"--line-length=180"
]
}

290
.vscode/tasks.json vendored
View File

@@ -1,10 +1,223 @@
{
"version": "2.0.0",
"inputs": [
{
"id": "confirmPrune",
"type": "promptString",
"description": "DANGER! Type YES to confirm pruning all unused Docker resources. This will destroy containers, images, volumes, and networks!",
"default": ""
}
],
"tasks": [
{
"label": "Generate Dockerfile",
"label": "[Any POSIX] Generate Devcontainer Configs",
"type": "shell",
"command": "${workspaceFolder:NetAlertX}/.devcontainer/scripts/generate-dockerfile.sh",
"command": ".devcontainer/scripts/generate-configs.sh",
"detail": "Generates devcontainer configs from the template. This must be run after changes to devcontainer to combine/merge them into the final config used by VS Code. Note- this has no bearing on the production or test image.",
"presentation": {
"echo": true,
"reveal": "always",
"panel": "shared",
"showReuseMessage": false,
"group": "POSIX Tasks"
},
"problemMatcher": [],
"group": {
"kind": "build",
"isDefault": false
},
"icon": {
"id": "tools",
"color": "terminal.ansiYellow"
}
},
{
"label": "[Any] Docker system and build Prune",
"type": "shell",
"command": ".devcontainer/scripts/confirm-docker-prune.sh",
"detail": "DANGER! Prunes all unused Docker resources (images, containers, volumes, networks). Any stopped container will be wiped and data will be lost. Use with caution.",
"options": {
"env": {
"CONFIRM_PRUNE": "${input:confirmPrune}"
}
},
"presentation": {
"echo": true,
"reveal": "always",
"panel": "shared",
"showReuseMessage": false,
"group": "Any"
},
"problemMatcher": [],
"group": {
"kind": "build",
"isDefault": false
},
"icon": {
"id": "trash",
"color": "terminal.ansiRed"
}
},
{
"label": "[Dev Container] Re-Run Startup Script",
"type": "shell",
"command": "./isDevContainer.sh || exit 1;/workspaces/NetAlertX/.devcontainer/scripts/setup.sh",
"detail": "The startup script runs directly after the container is started. It reprovisions permissions, links folders, and performs other setup tasks. Run this if you have made changes to the setup script or need to reprovision the container.",
"options": {
"cwd": "/workspaces/NetAlertX/.devcontainer/scripts"
},
"presentation": {
"echo": true,
"reveal": "always",
"panel": "shared",
"showReuseMessage": false
},
"problemMatcher": [],
"icon": {
"id": "beaker",
"color": "terminal.ansiBlue"
}
},
{
"label": "[Dev Container] Start Backend (Python)",
"type": "shell",
"command": "./isDevContainer.sh || exit 1; /services/start-backend.sh",
"detail": "Restarts the NetAlertX backend (Python) service in the dev container. This may take 5 seconds to be completely ready.",
"options": {
"cwd": "/workspaces/NetAlertX/.devcontainer/scripts"
},
"presentation": {
"echo": true,
"reveal": "always",
"panel": "shared",
"showReuseMessage": false,
"clear": false,
"group": "Devcontainer"
},
"problemMatcher": [],
"icon": {
"id": "debug-restart",
"color": "terminal.ansiGreen"
}
},
{
"label": "[Dev Container] Start CronD (Scheduler)",
"type": "shell",
"command": "./isDevContainer.sh || exit 1; /services/start-crond.sh",
"detail": "Stops and restarts the crond service.",
"options": {
"cwd": "/workspaces/NetAlertX/.devcontainer/scripts"
},
"presentation": {
"echo": true,
"reveal": "always",
"panel": "shared",
"showReuseMessage": false,
"clear": false,
"group": "Devcontainer"
},
"problemMatcher": [],
"icon": {
"id": "debug-restart",
"color": "terminal.ansiGreen"
}
},
{
"label": "[Dev Container] Start Frontend (nginx and PHP-FPM)",
"type": "shell",
"command": "./isDevContainer.sh || exit 1; /services/start-php-fpm.sh & /services/start-nginx.sh &",
"detail": "Stops and restarts the NetAlertX frontend services (nginx and PHP-FPM) in the dev container. This launches almost instantly.",
"options": {
"cwd": "/workspaces/NetAlertX/.devcontainer/scripts"
},
"presentation": {
"echo": true,
"reveal": "always",
"panel": "shared",
"showReuseMessage": false,
"clear": false,
"group": "Devcontainer"
},
"problemMatcher": [],
"icon": {
"id": "debug-restart",
"color": "terminal.ansiGreen"
}
},
{
"label": "[Dev Container] Stop Frontend & Backend Services",
"type": "shell",
"command": "./isDevContainer.sh || exit 1; pkill -f 'php-fpm83|nginx|crond|python3' || true",
"detail": "Stops all NetAlertX services running in the dev container.",
"options": {
"cwd": "/workspaces/NetAlertX/.devcontainer/scripts"
},
"presentation": {
"echo": true,
"reveal": "always",
"panel": "shared",
"showReuseMessage": false,
"group": "Devcontainer"
},
"problemMatcher": [],
"icon": {
"id": "debug-stop",
"color": "terminal.ansiRed"
}
},
{
"label": "[Any] Build Unit Test Docker image",
"type": "shell",
"command": "docker buildx build -t netalertx-test . && echo '🧪 Unit Test Docker image built: netalertx-test'",
"detail": "This must be run after changes to the container. Unit testing will not register changes until after this image is rebuilt. It takes about 30 seconds to build unless changes to the venv stage are made. venv takes 90s alone.",
"presentation": {
"echo": true,
"reveal": "always",
"panel": "shared",
"showReuseMessage": false,
"group": "Any"
},
"problemMatcher": [],
"group": {
"kind": "build",
"isDefault": false
},
"icon": {
"id": "beaker",
"color": "terminal.ansiBlue"
}
},
{
"label": "[Dev Container] Wipe and Regenerate Database",
"type": "shell",
"command": "killall 'python3' || true && sleep 1 && rm -rf /data/db/* /data/config/* && bash /entrypoint.d/15-first-run-config.sh && bash /entrypoint.d/20-first-run-db.sh && echo '✅ Database and config wiped and regenerated'",
"detail": "Wipes devcontainer db and config. Provides a fresh start in devcontainer, run this task, then run the Rerun Startup Task",
"options": {},
"presentation": {
"echo": true,
"reveal": "always",
"panel": "shared",
"showReuseMessage": false,
"group": "Devcontainer"
},
"problemMatcher": [],
"icon": {
"id": "database",
"color": "terminal.ansiRed"
}
},
{
"label": "Build & Launch Prodcution Docker Container",
"type": "shell",
"command": "docker compose up -d --build --force-recreate",
"detail": "Before launching, ensure VSCode Ports are closed and services are stopped. Tasks: Stop Frontend & Backend Services & Remote: Close Unused Forwarded Ports to ensure proper operation of the new container.",
"options": {
"cwd": "/workspaces/NetAlertX"
},
"presentation": {
"echo": true,
"reveal": "always",
@@ -16,79 +229,10 @@
"kind": "build",
"isDefault": false
},
"options": {
"cwd": "${workspaceFolder:NetAlertX}"
},
"icon": {
"id": "tools",
"color": "terminal.ansiYellow"
}
},
{
"label": "Re-Run Startup Script",
"type": "shell",
"command": "${workspaceFolder:NetAlertX}/.devcontainer/scripts/setup.sh",
"presentation": {
"echo": true,
"reveal": "always",
"panel": "shared",
"showReuseMessage": false
},
"problemMatcher": [],
"icon": {
"id": "beaker",
"id": "package",
"color": "terminal.ansiBlue"
}
},
{
"label": "Start Backend (Python)",
"type": "shell",
"command": "/workspaces/NetAlertX/.devcontainer/scripts/restart-backend.sh",
"presentation": {
"echo": true,
"reveal": "always",
"panel": "shared",
"showReuseMessage": false,
"clear": false
},
"problemMatcher": [],
"icon": {
"id": "debug-restart",
"color": "terminal.ansiGreen"
}
},
{
"label": "Start Frontend (nginx and PHP-FPM)",
"type": "shell",
"command": "killall php-fpm83 nginx 2>/dev/null || true; sleep 1; php-fpm83 & nginx",
"presentation": {
"echo": true,
"reveal": "always",
"panel": "shared",
"showReuseMessage": false,
"clear": false
},
"problemMatcher": [],
"icon": {
"id": "debug-restart",
"color": "terminal.ansiGreen"
}
},
{
"label": "Stop Frontend & Backend Services",
"type": "shell",
"command": "pkill -f 'php-fpm83|nginx|crond|python3' || true",
"presentation": {
"echo": true,
"reveal": "always",
"panel": "shared",
"showReuseMessage": false
},
"problemMatcher": [],
"icon": {
"id": "debug-stop",
"color": "terminal.ansiRed"
}
}
]
}
}

View File

@@ -50,4 +50,4 @@ By participating, you agree to follow our [Code of Conduct](./CODE_OF_CONDUCT.md
If you have more in-depth questions or want to discuss contributing in other ways, feel free to reach out at:
📧 [jokob@duck.com](mailto:jokob@duck.com?subject=NetAlertX%20Contribution)
We appreciate every contribution, big or small! 💙
We appreciate every contribution, big or small! 💙

View File

@@ -1,63 +1,219 @@
# The NetAlertX Dockerfile has 3 stages:
#
# Stage 1. Builder - NetAlertX Requires special tools and packages to build our virtual environment, but
# which are not needed in future stages. We build the builder and extract the venv for runner to use as
# a base.
#
# Stage 2. Runner builds the bare minimum requirements to create an operational NetAlertX. The primary
# reason for breaking at this stage is it leaves the system in a proper state for devcontainer operation
# This image also provides a break-out point for users who wish to execute the anti-pattern of using a
# docker container as a VM for experimentation and various development patterns.
#
# Stage 3. Hardened removes root, sudoers, folders, permissions, and locks the system down into a read-only
# compatible image. While NetAlertX does require some read-write operations, this image can guarantee the
# code pushed out by the project is the only code which will run on the system after each container restart.
# It reduces the chance of system hijacking and operates with all modern security protocols in place as is
# expected from a security appliance.
#
# This file can be built with `docker-compose -f docker-compose.yml up --build --force-recreate`
FROM alpine:3.22 AS builder
ARG INSTALL_DIR=/app
ENV PYTHONUNBUFFERED=1
ENV PATH="/opt/venv/bin:$PATH"
# Install build dependencies
COPY requirements.txt /tmp/requirements.txt
RUN apk add --no-cache bash shadow python3 python3-dev gcc musl-dev libffi-dev openssl-dev git \
&& python -m venv /opt/venv
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
# Create virtual environment owned by root, but readable by everyone else. This makes it easy to copy
# into hardened stage without worrying about permissions and keeps image size small. Keeping the commands
# together makes for a slightly smaller image size.
RUN pip install --no-cache-dir -r /tmp/requirements.txt && \
chmod -R u-rwx,g-rwx /opt
COPY . ${INSTALL_DIR}/
RUN pip install openwrt-luci-rpc asusrouter asyncio aiohttp graphene flask flask-cors unifi-sm-api tplink-omada-client wakeonlan pycryptodome requests paho-mqtt scapy cron-converter pytz json2table dhcp-leases pyunifi speedtest-cli chardet python-nmap dnspython librouteros yattag git+https://github.com/foreign-sub/aiofreepybox.git \
&& bash -c "find ${INSTALL_DIR} -type d -exec chmod 750 {} \;" \
&& bash -c "find ${INSTALL_DIR} -type f -exec chmod 640 {} \;" \
&& bash -c "find ${INSTALL_DIR} -type f \( -name '*.sh' -o -name '*.py' -o -name 'speedtest-cli' \) -exec chmod 750 {} \;"
# Append Iliadbox certificate to aiofreepybox
RUN cat ${INSTALL_DIR}/install/freebox_certificate.pem >> /opt/venv/lib/python3.12/site-packages/aiofreepybox/freebox_certificates.pem
# second stage
# second stage is the main runtime stage with just the minimum required to run the application
# The runner is used for both devcontainer, and as a base for the hardened stage.
FROM alpine:3.22 AS runner
ARG INSTALL_DIR=/app
COPY --from=builder /opt/venv /opt/venv
COPY --from=builder /usr/sbin/usermod /usr/sbin/groupmod /usr/sbin/
# NetAlertX app directories
ENV NETALERTX_APP=${INSTALL_DIR}
ENV NETALERTX_DATA=/data
ENV NETALERTX_CONFIG=${NETALERTX_DATA}/config
ENV NETALERTX_FRONT=${NETALERTX_APP}/front
ENV NETALERTX_PLUGINS=${NETALERTX_FRONT}/plugins
ENV NETALERTX_SERVER=${NETALERTX_APP}/server
ENV NETALERTX_API=/tmp/api
ENV NETALERTX_DB=${NETALERTX_DATA}/db
ENV NETALERTX_DB_FILE=${NETALERTX_DB}/app.db
ENV NETALERTX_BACK=${NETALERTX_APP}/back
ENV NETALERTX_LOG=/tmp/log
ENV NETALERTX_PLUGINS_LOG=${NETALERTX_LOG}/plugins
ENV NETALERTX_CONFIG_FILE=${NETALERTX_CONFIG}/app.conf
# Enable venv
ENV PATH="/opt/venv/bin:$PATH"
# NetAlertX log files
ENV LOG_IP_CHANGES=${NETALERTX_LOG}/IP_changes.log
ENV LOG_APP=${NETALERTX_LOG}/app.log
ENV LOG_APP_FRONT=${NETALERTX_LOG}/app_front.log
ENV LOG_REPORT_OUTPUT_TXT=${NETALERTX_LOG}/report_output.txt
ENV LOG_DB_IS_LOCKED=${NETALERTX_LOG}/db_is_locked.log
ENV LOG_REPORT_OUTPUT_HTML=${NETALERTX_LOG}/report_output.html
ENV LOG_STDERR=${NETALERTX_LOG}/stderr.log
ENV LOG_APP_PHP_ERRORS=${NETALERTX_LOG}/app.php_errors.log
ENV LOG_EXECUTION_QUEUE=${NETALERTX_LOG}/execution_queue.log
ENV LOG_REPORT_OUTPUT_JSON=${NETALERTX_LOG}/report_output.json
ENV LOG_STDOUT=${NETALERTX_LOG}/stdout.log
ENV LOG_CRON=${NETALERTX_LOG}/cron.log
ENV LOG_NGINX_ERROR=${NETALERTX_LOG}/nginx-error.log
# default port and listen address
ENV PORT=20211 LISTEN_ADDR=0.0.0.0
# System Services configuration files
ENV ENTRYPOINT_CHECKS=/entrypoint.d
ENV SYSTEM_SERVICES=/services
ENV SYSTEM_SERVICES_SCRIPTS=${SYSTEM_SERVICES}/scripts
ENV SYSTEM_SERVICES_CONFIG=${SYSTEM_SERVICES}/config
ENV SYSTEM_NGINX_CONFIG=${SYSTEM_SERVICES_CONFIG}/nginx
ENV SYSTEM_NGINX_CONFIG_TEMPLATE=${SYSTEM_NGINX_CONFIG}/netalertx.conf.template
ENV SYSTEM_SERVICES_CONFIG_CRON=${SYSTEM_SERVICES_CONFIG}/cron
ENV SYSTEM_SERVICES_ACTIVE_CONFIG=/tmp/nginx/active-config
ENV SYSTEM_SERVICES_ACTIVE_CONFIG_FILE=${SYSTEM_SERVICES_ACTIVE_CONFIG}/nginx.conf
ENV SYSTEM_SERVICES_PHP_FOLDER=${SYSTEM_SERVICES_CONFIG}/php
ENV SYSTEM_SERVICES_PHP_FPM_D=${SYSTEM_SERVICES_PHP_FOLDER}/php-fpm.d
ENV SYSTEM_SERVICES_RUN=/tmp/run
ENV SYSTEM_SERVICES_RUN_TMP=${SYSTEM_SERVICES_RUN}/tmp
ENV SYSTEM_SERVICES_RUN_LOG=${SYSTEM_SERVICES_RUN}/logs
ENV PHP_FPM_CONFIG_FILE=${SYSTEM_SERVICES_PHP_FOLDER}/php-fpm.conf
ENV READ_ONLY_FOLDERS="${NETALERTX_BACK} ${NETALERTX_FRONT} ${NETALERTX_SERVER} ${SYSTEM_SERVICES} \
${SYSTEM_SERVICES_CONFIG} ${ENTRYPOINT_CHECKS}"
ENV READ_WRITE_FOLDERS="${NETALERTX_DATA} ${NETALERTX_CONFIG} ${NETALERTX_DB} ${NETALERTX_API} \
${NETALERTX_LOG} ${NETALERTX_PLUGINS_LOG} ${SYSTEM_SERVICES_RUN} \
${SYSTEM_SERVICES_RUN_TMP} ${SYSTEM_SERVICES_RUN_LOG} \
${SYSTEM_SERVICES_ACTIVE_CONFIG}"
# needed for s6-overlay
ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
#Python environment
ENV PYTHONUNBUFFERED=1
ENV VIRTUAL_ENV=/opt/venv
ENV VIRTUAL_ENV_BIN=/opt/venv/bin
ENV PYTHONPATH=${NETALERTX_APP}:${NETALERTX_SERVER}:${NETALERTX_PLUGINS}:${VIRTUAL_ENV}/lib/python3.12/site-packages
ENV PATH="${SYSTEM_SERVICES}:${VIRTUAL_ENV_BIN}:$PATH"
# ❗ IMPORTANT - if you modify this file modify the /install/install_dependecies.sh file as well ❗
# App Environment
ENV LISTEN_ADDR=0.0.0.0
ENV PORT=20211
ENV NETALERTX_DEBUG=0
ENV VENDORSPATH=/app/back/ieee-oui.txt
ENV VENDORSPATH_NEWEST=${SYSTEM_SERVICES_RUN_TMP}/ieee-oui.txt
ENV ENVIRONMENT=alpine
ENV READ_ONLY_USER=readonly READ_ONLY_GROUP=readonly
ENV NETALERTX_USER=netalertx NETALERTX_GROUP=netalertx
ENV LANG=C.UTF-8
RUN apk update --no-cache \
&& apk add --no-cache bash libbsd zip lsblk gettext-envsubst sudo mtr tzdata s6-overlay \
&& apk add --no-cache curl arp-scan iproute2 iproute2-ss nmap nmap-scripts traceroute nbtscan avahi avahi-tools openrc dbus net-tools net-snmp-tools bind-tools awake ca-certificates \
&& apk add --no-cache sqlite php83 php83-fpm php83-cgi php83-curl php83-sqlite3 php83-session \
&& apk add --no-cache python3 nginx \
&& ln -s /usr/bin/awake /usr/bin/wakeonlan \
&& bash -c "install -d -m 750 -o nginx -g www-data ${INSTALL_DIR} ${INSTALL_DIR}" \
&& rm -f /etc/nginx/http.d/default.conf
COPY --from=builder --chown=nginx:www-data ${INSTALL_DIR}/ ${INSTALL_DIR}/
RUN apk add --no-cache bash mtr libbsd zip lsblk tzdata curl arp-scan iproute2 iproute2-ss nmap \
nmap-scripts traceroute nbtscan net-tools net-snmp-tools bind-tools awake ca-certificates \
sqlite php83 php83-fpm php83-cgi php83-curl php83-sqlite3 php83-session python3 envsubst \
nginx supercronic shadow && \
rm -Rf /var/cache/apk/* && \
rm -Rf /etc/nginx && \
addgroup -g 20211 ${NETALERTX_GROUP} && \
adduser -u 20211 -D -h ${NETALERTX_APP} -G ${NETALERTX_GROUP} ${NETALERTX_USER} && \
apk del shadow
# Add crontab file
COPY --chmod=600 --chown=root:root install/crontab /etc/crontabs/root
# Start all required services
RUN ${INSTALL_DIR}/dockerfiles/start.sh
HEALTHCHECK --interval=30s --timeout=5s --start-period=15s --retries=2 \
CMD curl -sf -o /dev/null ${LISTEN_ADDR}:${PORT}/php/server/query_json.php?file=app_state.json
# Install application, copy files, set permissions
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} install/production-filesystem/ /
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} --chmod=755 back ${NETALERTX_BACK}
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} --chmod=755 front ${NETALERTX_FRONT}
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} --chmod=755 server ${NETALERTX_SERVER}
# Create required folders with correct ownership and permissions
RUN install -d -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} -m 700 ${READ_WRITE_FOLDERS} && \
sh -c "find ${NETALERTX_APP} -type f \( -name '*.sh' -o -name 'speedtest-cli' \) \
-exec chmod 750 {} \;"
# Copy version information into the image
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} .[V]ERSION ${NETALERTX_APP}/.VERSION
# Copy the virtualenv from the builder stage
COPY --from=builder --chown=20212:20212 ${VIRTUAL_ENV} ${VIRTUAL_ENV}
# Initialize each service with the dockerfiles/init-*.sh scripts, once.
# This is done after the copy of the venv to ensure the venv is in place
# although it may be quicker to do it before the copy, it keeps the image
# layers smaller to do it after.
RUN if [ -f '.VERSION' ]; then \
cp '.VERSION' "${NETALERTX_APP}/.VERSION"; \
else \
echo "DEVELOPMENT 00000000" > "${NETALERTX_APP}/.VERSION"; \
fi && \
chown 20212:20212 "${NETALERTX_APP}/.VERSION" && \
apk add --no-cache libcap && \
setcap cap_net_raw+ep /bin/busybox && \
setcap cap_net_raw,cap_net_admin+eip /usr/bin/nmap && \
setcap cap_net_raw,cap_net_admin+eip /usr/bin/arp-scan && \
setcap cap_net_raw,cap_net_admin,cap_net_bind_service+eip /usr/bin/nbtscan && \
setcap cap_net_raw,cap_net_admin+eip /usr/bin/traceroute && \
setcap cap_net_raw,cap_net_admin+eip "$(readlink -f ${VIRTUAL_ENV_BIN}/python)" && \
/bin/sh /build/init-nginx.sh && \
/bin/sh /build/init-php-fpm.sh && \
/bin/sh /build/init-cron.sh && \
/bin/sh /build/init-backend.sh && \
rm -rf /build && \
apk del libcap && \
date +%s > "${NETALERTX_FRONT}/buildtimestamp.txt"
ENTRYPOINT ["/bin/sh","/entrypoint.sh"]
# Final hardened stage to improve security by setting least possible permissions and removing sudo access.
# When complete, if the image is compromised, there's not much that can be done with it.
# This stage is separate from Runner stage so that devcontainer can use the Runner stage.
FROM runner AS hardened
ENV UMASK=0077
# Create readonly user and group with no shell access.
# Readonly user marks folders that are created by NetAlertX, but should not be modified.
# AI may claim this is stupid, but it's actually least possible permissions as
# read-only user cannot login, cannot sudo, has no write permission, and cannot even
# read the files it owns. The read-only user is ownership-as-a-lock hardening pattern.
RUN addgroup -g 20212 "${READ_ONLY_GROUP}" && \
adduser -u 20212 -G "${READ_ONLY_GROUP}" -D -h /app "${READ_ONLY_USER}"
# reduce permissions to minimum necessary for all NetAlertX files and folders
# Permissions 005 and 004 are not typos, they enable read-only. Everyone can
# read the read-only files, and nobody can write to them, even the readonly user.
# hadolint ignore=SC2114
RUN chown -R ${READ_ONLY_USER}:${READ_ONLY_GROUP} ${READ_ONLY_FOLDERS} && \
chmod -R 004 ${READ_ONLY_FOLDERS} && \
find ${READ_ONLY_FOLDERS} -type d -exec chmod 005 {} + && \
install -d -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} -m 700 ${READ_WRITE_FOLDERS} && \
chown -R ${NETALERTX_USER}:${NETALERTX_GROUP} ${READ_WRITE_FOLDERS} && \
chmod -R 600 ${READ_WRITE_FOLDERS} && \
find ${READ_WRITE_FOLDERS} -type d -exec chmod 700 {} + && \
chown ${READ_ONLY_USER}:${READ_ONLY_GROUP} /entrypoint.sh /opt /opt/venv && \
chmod 005 /entrypoint.sh ${SYSTEM_SERVICES}/*.sh ${SYSTEM_SERVICES_SCRIPTS}/* ${ENTRYPOINT_CHECKS}/* /app /opt /opt/venv && \
for dir in ${READ_WRITE_FOLDERS}; do \
install -d -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} -m 700 "$dir"; \
done && \
apk del apk-tools && \
rm -Rf /var /etc/sudoers.d/* /etc/shadow /etc/gshadow /etc/sudoers \
/lib/apk /lib/firmware /lib/modules-load.d /lib/sysctl.d /mnt /home/ /root \
/srv /media && \
sed -i "/^\(${READ_ONLY_USER}\|${NETALERTX_USER}\):/!d" /etc/passwd && \
sed -i "/^\(${READ_ONLY_GROUP}\|${NETALERTX_GROUP}\):/!d" /etc/group && \
printf '#!/bin/sh\n"$@"\n' > /usr/bin/sudo && chmod +x /usr/bin/sudo
USER netalertx
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
CMD /services/healthcheck.sh
ENTRYPOINT ["/init"]

View File

@@ -1,53 +1,176 @@
# Warning - use of this unhardened image is not recommended for production use.
# This image is provided for backward compatibility, development and testing purposes only.
# For production use, please use the hardened image built with Alpine. This image attempts to
# treat a container as an operating system, which is an anti-pattern and a common source of
# security issues.
#
# The default Dockerfile/docker-compose image contains the following security improvements
# over the Debian image:
# - read-only filesystem
# - no sudo access
# - least possible permissions on all files and folders
# - Root user has all permissions revoked and is unused
# - Secure umask applied so files are owner-only by default
# - non-privileged user runs the application
# - no shell access for non-privileged users
# - no unnecessary packages or services
# - reduced capabilities
# - tmpfs for writable folders
# - healthcheck
# - no package managers
# - no compilers or build tools
# - no systemd, uses lightweight init system
# - no persistent storage except for config and db volumes
# - minimal image size due to segmented build stages
# - minimal base image (Alpine Linux)
# - minimal python environment (venv, no pip)
# - minimal stripped web server
# - minimal stripped php environment
# - minimal services (nginx, php-fpm, crond, no unnecessary services or service managers)
# - minimal users and groups (netalertx and readonly only, no others)
# - minimal permissions (read-only for most files and folders, write-only for necessary folders)
# - minimal capabilities (NET_ADMIN and NET_RAW only, no others)
# - minimal environment variables (only necessary ones, no others)
# - minimal entrypoint (only necessary commands, no others)
# - Uses the same base image as the development environment (Alpine Linux)
# - Uses the same services as the development environment (nginx, php-fpm, crond)
# - Uses the same environment variables as the development environment (only necessary ones, no others)
# - Uses the same file and folder structure as the development environment (only necessary ones, no others)
# NetAlertX is designed to be run as an unattended network security monitoring appliance, which means it
# should be able to operate without human intervention. Overall, the hardened image is designed to be as
# secure as possible while still being functional and is recommended because you cannot attack a surface
# that isn't there.
FROM debian:bookworm-slim
# default UID and GID
ENV USER=pi USER_ID=1000 USER_GID=1000 PORT=20211
#TZ=Europe/London
# NetAlertX app directories
ENV INSTALL_DIR=/app
ENV NETALERTX_APP=${INSTALL_DIR}
ENV NETALERTX_DATA=/data
ENV NETALERTX_CONFIG=${NETALERTX_DATA}/config
ENV NETALERTX_FRONT=${NETALERTX_APP}/front
ENV NETALERTX_SERVER=${NETALERTX_APP}/server
ENV NETALERTX_API=/tmp/api
ENV NETALERTX_DB=${NETALERTX_DATA}/db
ENV NETALERTX_DB_FILE=${NETALERTX_DB}/app.db
ENV NETALERTX_BACK=${NETALERTX_APP}/back
ENV NETALERTX_LOG=/tmp/log
ENV NETALERTX_PLUGINS_LOG=${NETALERTX_LOG}/plugins
# NetAlertX log files
ENV LOG_IP_CHANGES=${NETALERTX_LOG}/IP_changes.log
ENV LOG_APP=${NETALERTX_LOG}/app.log
ENV LOG_APP_FRONT=${NETALERTX_LOG}/app_front.log
ENV LOG_REPORT_OUTPUT_TXT=${NETALERTX_LOG}/report_output.txt
ENV LOG_DB_IS_LOCKED=${NETALERTX_LOG}/db_is_locked.log
ENV LOG_REPORT_OUTPUT_HTML=${NETALERTX_LOG}/report_output.html
ENV LOG_STDERR=${NETALERTX_LOG}/stderr.log
ENV LOG_APP_PHP_ERRORS=${NETALERTX_LOG}/app.php_errors.log
ENV LOG_EXECUTION_QUEUE=${NETALERTX_LOG}/execution_queue.log
ENV LOG_REPORT_OUTPUT_JSON=${NETALERTX_LOG}/report_output.json
ENV LOG_STDOUT=${NETALERTX_LOG}/stdout.log
ENV LOG_CRON=${NETALERTX_LOG}/cron.log
ENV LOG_NGINX_ERROR=${NETALERTX_LOG}/nginx-error.log
# System Services configuration files
ENV SYSTEM_SERVICES=/services
ENV SYSTEM_SERVICES_CONFIG=${SYSTEM_SERVICES}/config
ENV SYSTEM_NGINIX_CONFIG=${SYSTEM_SERVICES_CONFIG}/nginx
ENV SYSTEM_NGINX_CONFIG_FILE=${SYSTEM_NGINIX_CONFIG}/nginx.conf
ENV SYSTEM_SERVICES_ACTIVE_CONFIG=/tmp/nginx/active-config
ENV NETALERTX_CONFIG_FILE=${NETALERTX_CONFIG}/app.conf
ENV SYSTEM_SERVICES_PHP_FOLDER=${SYSTEM_SERVICES_CONFIG}/php
ENV SYSTEM_SERVICES_PHP_FPM_D=${SYSTEM_SERVICES_PHP_FOLDER}/php-fpm.d
ENV SYSTEM_SERVICES_CROND=${SYSTEM_SERVICES_CONFIG}/crond
ENV SYSTEM_SERVICES_RUN=/tmp/run
ENV SYSTEM_SERVICES_RUN_TMP=${SYSTEM_SERVICES_RUN}/tmp
ENV SYSTEM_SERVICES_RUN_LOG=${SYSTEM_SERVICES_RUN}/logs
ENV PHP_FPM_CONFIG_FILE=${SYSTEM_SERVICES_PHP_FOLDER}/php-fpm.conf
#Python environment
ENV PYTHONPATH=${NETALERTX_SERVER}
ENV PYTHONUNBUFFERED=1
ENV VIRTUAL_ENV=/opt/venv
ENV VIRTUAL_ENV_BIN=/opt/venv/bin
ENV PATH="${VIRTUAL_ENV}/bin:${PATH}:/services"
ENV VENDORSPATH=/app/back/ieee-oui.txt
ENV VENDORSPATH_NEWEST=${SYSTEM_SERVICES_RUN_TMP}/ieee-oui.txt
# App Environment
ENV LISTEN_ADDR=0.0.0.0
ENV PORT=20211
ENV NETALERTX_DEBUG=0
#Container environment
ENV ENVIRONMENT=debian
ENV USER=netalertx
ENV USER_ID=1000
ENV USER_GID=1000
# Todo, figure out why using a workdir instead of full paths doesn't work
# Todo, do we still need all these packages? I can already see sudo which isn't needed
RUN apt-get update
RUN apt-get install sudo -y
ARG INSTALL_DIR=/app
# create pi user and group
# add root and www-data to pi group so they can r/w files and db
RUN groupadd --gid "${USER_GID}" "${USER}" && \
useradd \
--uid ${USER_ID} \
--gid ${USER_GID} \
--create-home \
--shell /bin/bash \
${USER} && \
--uid ${USER_ID} \
--gid ${USER_GID} \
--create-home \
--shell /bin/bash \
${USER} && \
usermod -a -G ${USER_GID} root && \
usermod -a -G ${USER_GID} www-data
COPY --chmod=775 --chown=${USER_ID}:${USER_GID} install/production-filesystem/ /
COPY --chmod=775 --chown=${USER_ID}:${USER_GID} . ${INSTALL_DIR}/
# ❗ IMPORTANT - if you modify this file modify the /install/install_dependecies.debian.sh file as well ❗
# hadolint ignore=DL3008,DL3027
RUN apt-get update && apt-get install -y --no-install-recommends \
tini snmp ca-certificates curl libwww-perl arp-scan sudo gettext-base \
nginx-light php php-cgi php-fpm php-sqlite3 php-curl sqlite3 dnsutils net-tools \
python3 python3-dev iproute2 nmap python3-pip zip git systemctl usbutils traceroute nbtscan openrc \
busybox nginx nginx-core mtr python3-venv && \
rm -rf /var/lib/apt/lists/*
RUN apt-get install -y \
tini snmp ca-certificates curl libwww-perl arp-scan perl apt-utils cron sudo \
nginx-light php php-cgi php-fpm php-sqlite3 php-curl sqlite3 dnsutils net-tools php-openssl \
python3 python3-dev iproute2 nmap python3-pip zip systemctl usbutils traceroute nbtscan avahi avahi-tools openrc dbus
# Alternate dependencies
RUN apt-get install nginx nginx-core mtr php-fpm php8.2-fpm php-cli php8.2 php8.2-sqlite3 -y
RUN phpenmod -v 8.2 sqlite3
# While php8.3 is in debian bookworm repos, php-fpm is not included so we need to add sury.org repo
# (Ondřej Surý maintains php packages for debian. This is temp until debian includes php-fpm in their
# repos. Likely it will be in Debian Trixie.). This keeps the image up-to-date with the alpine version.
# hadolint ignore=DL3008
RUN apt-get install -y --no-install-recommends \
apt-transport-https \
ca-certificates \
lsb-release \
wget && \
wget -q -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg && \
echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list && \
apt-get update && \
apt-get install -y --no-install-recommends php8.3-fpm php8.3-cli php8.3-sqlite3 php8.3-common php8.3-curl php8.3-cgi && \
ln -s /usr/sbin/php-fpm8.3 /usr/sbin/php-fpm83 && \
rm -rf /var/lib/apt/lists/* # make it compatible with alpine version
# Setup virtual python environment and use pip3 to install packages
RUN apt-get install -y python3-venv
RUN python3 -m venv myenv
RUN python3 -m venv ${VIRTUAL_ENV} && \
/bin/bash -c "source ${VIRTUAL_ENV_BIN}/activate && update-alternatives --install /usr/bin/python python /usr/bin/python3 10 && pip3 install -r ${INSTALL_DIR}/requirements.txt"
# Configure php-fpm
RUN chmod -R 755 /services && \
chown -R ${USER}:${USER_GID} /services && \
sed -i 's/^;listen.mode = .*/listen.mode = 0666/' ${SYSTEM_SERVICES_PHP_FPM_D}/www.conf && \
printf "user = %s\ngroup = %s\n" "${USER}" "${USER_GID}" >> /services/config/php/php-fpm.d/www.conf
RUN /bin/bash -c "source myenv/bin/activate && update-alternatives --install /usr/bin/python python /usr/bin/python3 10 && pip3 install openwrt-luci-rpc asusrouter asyncio aiohttp graphene flask flask-cors unifi-sm-api tplink-omada-client wakeonlan pycryptodome requests paho-mqtt scapy cron-converter pytz json2table dhcp-leases pyunifi speedtest-cli chardet python-nmap dnspython librouteros yattag "
# Create a buildtimestamp.txt to later check if a new version was released
RUN date +%s > ${INSTALL_DIR}/front/buildtimestamp.txt
CMD ["${INSTALL_DIR}/install/start.debian.sh"]
USER netalertx:netalertx
ENTRYPOINT ["/bin/bash","/entrypoint.sh"]

108
README.md
View File

@@ -6,38 +6,60 @@
# NetAlertX - Network, presence scanner and alert framework
Get visibility of what's going on on your WIFI/LAN network and enable presence detection of important devices. Schedule scans for devices, port changes and get alerts if unknown devices or changes are found. Write your own [Plugin](https://github.com/jokob-sk/NetAlertX/tree/main/docs/PLUGINS.md#readme) with auto-generated UI and in-built notification system. Build out and easily maintain your network source of truth (NSoT).
Get visibility of what's going on on your WIFI/LAN network and enable presence detection of important devices. Schedule scans for devices, port changes and get alerts if unknown devices or changes are found. Write your own [Plugin](https://github.com/jokob-sk/NetAlertX/tree/main/docs/PLUGINS.md#readme) with auto-generated UI and in-built notification system. Build out and easily maintain your network source of truth (NSoT) and device inventory.
## 📋 Table of Contents
- [Features](#-features)
- [Documentation](#-documentation)
- [Quick Start](#-quick-start)
- [Alternative Apps](#-other-alternative-apps)
- [Security & Privacy](#-security--privacy)
- [FAQ](#-faq)
- [Known Issues](#-known-issues)
- [Donations](#-donations)
- [Contributors](#-contributors)
- [Translations](#-translations)
- [License](#license)
- [NetAlertX - Network, presence scanner and alert framework](#netalertx---network-presence-scanner-and-alert-framework)
- [📋 Table of Contents](#-table-of-contents)
- [🚀 Quick Start](#-quick-start)
- [📦 Features](#-features)
- [Scanners](#scanners)
- [Notification gateways](#notification-gateways)
- [Integrations and Plugins](#integrations-and-plugins)
- [Workflows](#workflows)
- [📚 Documentation](#-documentation)
- [🔐 Security \& Privacy](#-security--privacy)
- [❓ FAQ](#-faq)
- [🐞 Known Issues](#-known-issues)
- [📃 Everything else](#-everything-else)
- [📧 Get notified what's new](#-get-notified-whats-new)
- [🔀 Other Alternative Apps](#-other-alternative-apps)
- [💙 Donations](#-donations)
- [🏗 Contributors](#-contributors)
- [🌍 Translations](#-translations)
- [License](#license)
## 🚀 Quick Start
> [!WARNING]
> ⚠️ **Important:** The docker-compose has recently changed. Carefully read the [Migration guide](https://jokob-sk.github.io/NetAlertX/MIGRATION/?h=migrat#12-migration-from-netalertx-v25524) for detailed instructions.
Start NetAlertX in seconds with Docker:
```bash
docker run -d --rm --network=host \
-v local_path/config:/app/config \
-v local_path/db:/app/db \
--mount type=tmpfs,target=/app/api \
-e PUID=200 -e PGID=300 \
-e TZ=Europe/Berlin \
docker run -d \
--network=host \
--restart unless-stopped \
-v /local_data_dir:/data \
-v /etc/localtime:/etc/localtime:ro \
--tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
-e PORT=20211 \
-e APP_CONF_OVERRIDE='{"GRAPHQL_PORT":"20214"}' \
ghcr.io/jokob-sk/netalertx:latest
```
Note: Your `/local_data_dir` should contain a `config` and `db` folder.
To deploy a containerized instance directly from the source repository, execute the following BASH sequence:
```bash
git clone https://github.com/jokob-sk/NetAlertX.git
cd NetAlertX
docker compose up --force-recreate --build
# To customize: edit docker-compose.yaml and run that last command again
```
Need help configuring it? Check the [usage guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/README.md) or [full documentation](https://jokob-sk.github.io/NetAlertX/).
For Home Assistant users: [Click here to add NetAlertX](https://my.home-assistant.io/redirect/supervisor_add_addon_repository/?repository_url=https%3A%2F%2Fgithub.com%2Falexbelgium%2Fhassio-addons)
@@ -45,10 +67,10 @@ For Home Assistant users: [Click here to add NetAlertX](https://my.home-assistan
For other install methods, check the [installation docs](#-documentation)
| [📑 Docker guide](https://github.com/jokob-sk/NetAlertX/blob/main/dockerfiles/README.md) | [🚀 Releases](https://github.com/jokob-sk/NetAlertX/releases) | [📚 Docs](https://jokob-sk.github.io/NetAlertX/) | [🔌 Plugins](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md) | [🤖 Ask AI](https://gurubase.io/g/netalertx)
|----------------------| ----------------------| ----------------------| ----------------------| ----------------------|
| [📑 Docker guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_INSTALLATION.md) | [🚀 Releases](https://github.com/jokob-sk/NetAlertX/releases) | [📚 Docs](https://jokob-sk.github.io/NetAlertX/) | [🔌 Plugins](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md) | [🤖 Ask AI](https://gurubase.io/g/netalertx)
|----------------------| ----------------------| ----------------------| ----------------------| ----------------------|
![showcase][showcase]
![showcase][showcase]
<details>
<summary>📷 Click for more screenshots</summary>
@@ -66,15 +88,15 @@ For other install methods, check the [installation docs](#-documentation)
### Scanners
The app scans your network for **New devices**, **New connections** (re-connections), **Disconnections**, **"Always Connected" devices down**, Devices **IP changes** and **Internet IP address changes**. Discovery & scan methods include: **arp-scan**, **Pi-hole - DB import**, **Pi-hole - DHCP leases import**, **Generic DHCP leases import**, **UNIFI controller import**, **SNMP-enabled router import**. Check the [Plugins](https://github.com/jokob-sk/NetAlertX/tree/main/docs/PLUGINS.md#readme) docs for a full list of available plugins.
The app scans your network for **New devices**, **New connections** (re-connections), **Disconnections**, **"Always Connected" devices down**, Devices **IP changes** and **Internet IP address changes**. Discovery & scan methods include: **arp-scan**, **Pi-hole - DB import**, **Pi-hole - DHCP leases import**, **Generic DHCP leases import**, **UNIFI controller import**, **SNMP-enabled router import**. Check the [Plugins](https://github.com/jokob-sk/NetAlertX/tree/main/docs/PLUGINS.md#readme) docs for a full list of available plugins.
### Notification gateways
Send notifications to more than 80+ services, including Telegram via [Apprise](https://hub.docker.com/r/caronc/apprise), or use native [Pushsafer](https://www.pushsafer.com/), [Pushover](https://www.pushover.net/), or [NTFY](https://ntfy.sh/) publishers.
Send notifications to more than 80+ services, including Telegram via [Apprise](https://hub.docker.com/r/caronc/apprise), or use native [Pushsafer](https://www.pushsafer.com/), [Pushover](https://www.pushover.net/), or [NTFY](https://ntfy.sh/) publishers.
### Integrations and Plugins
Feed your data and device changes into [Home Assistant](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HOME_ASSISTANT.md), read [API endpoints](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md), or use [Webhooks](https://github.com/jokob-sk/NetAlertX/blob/main/docs/WEBHOOK_N8N.md) to setup custom automation flows. You can also
Feed your data and device changes into [Home Assistant](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HOME_ASSISTANT.md), read [API endpoints](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md), or use [Webhooks](https://github.com/jokob-sk/NetAlertX/blob/main/docs/WEBHOOK_N8N.md) to setup custom automation flows. You can also
build your own scanners with the [Plugin system](https://github.com/jokob-sk/NetAlertX/tree/main/docs/PLUGINS.md#readme) in as little as [15 minutes](https://www.youtube.com/watch?v=cdbxlwiWhv8).
### Workflows
@@ -87,10 +109,10 @@ The [workflows module](https://github.com/jokob-sk/NetAlertX/blob/main/docs/WORK
Supported browsers: Chrome, Firefox
- [[Installation] Docker](https://github.com/jokob-sk/NetAlertX/blob/main/dockerfiles/README.md)
- [[Installation] Home Assistant](https://github.com/alexbelgium/hassio-addons/tree/master/netalertx)
- [[Installation] Bare metal](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HW_INSTALL.md)
- [[Installation] Unraid App](https://unraid.net/community/apps)
- [[Installation] Docker](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_INSTALLATION.md)
- [[Installation] Home Assistant](https://github.com/alexbelgium/hassio-addons/tree/master/netalertx)
- [[Installation] Bare metal](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HW_INSTALL.md)
- [[Installation] Unraid App](https://unraid.net/community/apps)
- [[Setup] Usage and Configuration](https://github.com/jokob-sk/NetAlertX/blob/main/docs/README.md)
- [[Development] API docs](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md)
- [[Development] Custom Plugins](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS_DEV.md)
@@ -111,20 +133,20 @@ See [Security Best Practices](https://github.com/jokob-sk/NetAlertX/security) fo
## ❓ FAQ
**Q: Why don't I see any devices?**
**Q: Why don't I see any devices?**
A: Ensure the container has proper network access (e.g., use `--network host` on Linux). Also check that your scan method is properly configured in the UI.
**Q: Does this work on Wi-Fi-only devices like Raspberry Pi?**
**Q: Does this work on Wi-Fi-only devices like Raspberry Pi?**
A: Yes, but some scanners (e.g. ARP) work best on Ethernet. For Wi-Fi, try SNMP, DHCP, or Pi-hole import.
**Q: Will this send any data to the internet?**
**Q: Will this send any data to the internet?**
A: No. All scans and data remain local, unless you set up cloud-based notifications.
**Q: Can I use this without Docker?**
**Q: Can I use this without Docker?**
A: Yes! You can install it bare-metal. See the [bare metal installation guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HW_INSTALL.md).
**Q: Where is the data stored?**
A: In the `/config` and `/db` folders, mapped in Docker. Back up these folders regularly.
**Q: Where is the data stored?**
A: In the `/data/config` and `/data/db` folders. Back up these folders regularly.
## 🐞 Known Issues
@@ -141,9 +163,9 @@ Check the [GitHub Issues](https://github.com/jokob-sk/NetAlertX/issues) for the
### 📧 Get notified what's new
Get notified about a new release, what new functionality you can use and about breaking changes.
Get notified about a new release, what new functionality you can use and about breaking changes.
![Follow and star][follow_star]
![Follow and star][follow_star]
### 🔀 Other Alternative Apps
@@ -154,15 +176,15 @@ Get notified about a new release, what new functionality you can use and about b
### 💙 Donations
Thank you to everyone who appreciates this tool and donates.
Thank you to everyone who appreciates this tool and donates.
<details>
<summary>Click for more ways to donate</summary>
<hr>
| [![GitHub](https://i.imgur.com/emsRCPh.png)](https://github.com/sponsors/jokob-sk) | [![Buy Me A Coffee](https://i.imgur.com/pIM6YXL.png)](https://www.buymeacoffee.com/jokobsk) | [![Patreon](https://i.imgur.com/MuYsrq1.png)](https://www.patreon.com/user?u=84385063) |
| --- | --- | --- |
| [![GitHub](https://i.imgur.com/emsRCPh.png)](https://github.com/sponsors/jokob-sk) | [![Buy Me A Coffee](https://i.imgur.com/pIM6YXL.png)](https://www.buymeacoffee.com/jokobsk) | [![Patreon](https://i.imgur.com/MuYsrq1.png)](https://www.patreon.com/user?u=84385063) |
| --- | --- | --- |
- Bitcoin: `1N8tupjeCK12qRVU2XrV17WvKK7LCawyZM`
- Ethereum: `0x6e2749Cb42F4411bc98501406BdcD82244e3f9C7`
@@ -173,11 +195,11 @@ Thank you to everyone who appreciates this tool and donates.
### 🏗 Contributors
This project would be nothing without the amazing work of the community, with special thanks to:
This project would be nothing without the amazing work of the community, with special thanks to:
> [pucherot/Pi.Alert](https://github.com/pucherot/Pi.Alert) (the original creator of PiAlert), [leiweibau](https://github.com/leiweibau/Pi.Alert): Dark mode (and much more), [Macleykun](https://github.com/Macleykun) (Help with Dockerfile clean-up), [vladaurosh](https://github.com/vladaurosh) for Alpine re-base help, [Final-Hawk](https://github.com/Final-Hawk) (Help with NTFY, styling and other fixes), [TeroRERO](https://github.com/terorero) (Spanish translations), [Data-Monkey](https://github.com/Data-Monkey), (Split-up of the python.py file and more), [cvc90](https://github.com/cvc90) (Spanish translation and various UI work) to name a few. Check out all the [amazing contributors](https://github.com/jokob-sk/NetAlertX/graphs/contributors).
> [pucherot/Pi.Alert](https://github.com/pucherot/Pi.Alert) (the original creator of PiAlert), [leiweibau](https://github.com/leiweibau/Pi.Alert): Dark mode (and much more), [Macleykun](https://github.com/Macleykun) (Help with Dockerfile clean-up), [vladaurosh](https://github.com/vladaurosh) for Alpine re-base help, [Final-Hawk](https://github.com/Final-Hawk) (Help with NTFY, styling and other fixes), [TeroRERO](https://github.com/terorero) (Spanish translations), [Data-Monkey](https://github.com/Data-Monkey), (Split-up of the python.py file and more), [cvc90](https://github.com/cvc90) (Spanish translation and various UI work) to name a few. Check out all the [amazing contributors](https://github.com/jokob-sk/NetAlertX/graphs/contributors).
### 🌍 Translations
### 🌍 Translations
Proudly using [Weblate](https://hosted.weblate.org/projects/pialert/). Help out and suggest languages in the [online portal of Weblate](https://hosted.weblate.org/projects/pialert/core/).

1
api Symbolic link
View File

@@ -0,0 +1 @@
/tmp/api

411
back/app.sql Executable file
View File

@@ -0,0 +1,411 @@
CREATE TABLE sqlite_stat1(tbl,idx,stat);
CREATE TABLE Events (eve_MAC STRING (50) NOT NULL COLLATE NOCASE, eve_IP STRING (50) NOT NULL COLLATE NOCASE, eve_DateTime DATETIME NOT NULL, eve_EventType STRING (30) NOT NULL COLLATE NOCASE, eve_AdditionalInfo STRING (250) DEFAULT (''), eve_PendingAlertEmail BOOLEAN NOT NULL CHECK (eve_PendingAlertEmail IN (0, 1)) DEFAULT (1), eve_PairEventRowid INTEGER);
CREATE TABLE Sessions (ses_MAC STRING (50) COLLATE NOCASE, ses_IP STRING (50) COLLATE NOCASE, ses_EventTypeConnection STRING (30) COLLATE NOCASE, ses_DateTimeConnection DATETIME, ses_EventTypeDisconnection STRING (30) COLLATE NOCASE, ses_DateTimeDisconnection DATETIME, ses_StillConnected BOOLEAN, ses_AdditionalInfo STRING (250));
CREATE TABLE IF NOT EXISTS "Online_History" (
"Index" INTEGER,
"Scan_Date" TEXT,
"Online_Devices" INTEGER,
"Down_Devices" INTEGER,
"All_Devices" INTEGER,
"Archived_Devices" INTEGER,
"Offline_Devices" INTEGER,
PRIMARY KEY("Index" AUTOINCREMENT)
);
CREATE TABLE sqlite_sequence(name,seq);
CREATE TABLE Devices (
devMac STRING (50) PRIMARY KEY NOT NULL COLLATE NOCASE,
devName STRING (50) NOT NULL DEFAULT "(unknown)",
devOwner STRING (30) DEFAULT "(unknown)" NOT NULL,
devType STRING (30),
devVendor STRING (250),
devFavorite BOOLEAN CHECK (devFavorite IN (0, 1)) DEFAULT (0) NOT NULL,
devGroup STRING (10),
devComments TEXT,
devFirstConnection DATETIME NOT NULL,
devLastConnection DATETIME NOT NULL,
devLastIP STRING (50) NOT NULL COLLATE NOCASE,
devStaticIP BOOLEAN DEFAULT (0) NOT NULL CHECK (devStaticIP IN (0, 1)),
devScan INTEGER DEFAULT (1) NOT NULL,
devLogEvents BOOLEAN NOT NULL DEFAULT (1) CHECK (devLogEvents IN (0, 1)),
devAlertEvents BOOLEAN NOT NULL DEFAULT (1) CHECK (devAlertEvents IN (0, 1)),
devAlertDown BOOLEAN NOT NULL DEFAULT (0) CHECK (devAlertDown IN (0, 1)),
devSkipRepeated INTEGER DEFAULT 0 NOT NULL,
devLastNotification DATETIME,
devPresentLastScan BOOLEAN NOT NULL DEFAULT (0) CHECK (devPresentLastScan IN (0, 1)),
devIsNew BOOLEAN NOT NULL DEFAULT (1) CHECK (devIsNew IN (0, 1)),
devLocation STRING (250) COLLATE NOCASE,
devIsArchived BOOLEAN NOT NULL DEFAULT (0) CHECK (devIsArchived IN (0, 1)),
devParentMAC TEXT,
devParentPort INTEGER,
devIcon TEXT,
devGUID TEXT,
devSite TEXT,
devSSID TEXT,
devSyncHubNode TEXT,
devSourcePlugin TEXT
, "devCustomProps" TEXT);
CREATE TABLE IF NOT EXISTS "Settings" (
"setKey" TEXT,
"setName" TEXT,
"setDescription" TEXT,
"setType" TEXT,
"setOptions" TEXT,
"setGroup" TEXT,
"setValue" TEXT,
"setEvents" TEXT,
"setOverriddenByEnv" INTEGER
);
CREATE TABLE IF NOT EXISTS "Parameters" (
"par_ID" TEXT PRIMARY KEY,
"par_Value" TEXT
);
CREATE TABLE Plugins_Objects(
"Index" INTEGER,
Plugin TEXT NOT NULL,
Object_PrimaryID TEXT NOT NULL,
Object_SecondaryID TEXT NOT NULL,
DateTimeCreated TEXT NOT NULL,
DateTimeChanged TEXT NOT NULL,
Watched_Value1 TEXT NOT NULL,
Watched_Value2 TEXT NOT NULL,
Watched_Value3 TEXT NOT NULL,
Watched_Value4 TEXT NOT NULL,
Status TEXT NOT NULL,
Extra TEXT NOT NULL,
UserData TEXT NOT NULL,
ForeignKey TEXT NOT NULL,
SyncHubNodeName TEXT,
"HelpVal1" TEXT,
"HelpVal2" TEXT,
"HelpVal3" TEXT,
"HelpVal4" TEXT,
ObjectGUID TEXT,
PRIMARY KEY("Index" AUTOINCREMENT)
);
CREATE TABLE Plugins_Events(
"Index" INTEGER,
Plugin TEXT NOT NULL,
Object_PrimaryID TEXT NOT NULL,
Object_SecondaryID TEXT NOT NULL,
DateTimeCreated TEXT NOT NULL,
DateTimeChanged TEXT NOT NULL,
Watched_Value1 TEXT NOT NULL,
Watched_Value2 TEXT NOT NULL,
Watched_Value3 TEXT NOT NULL,
Watched_Value4 TEXT NOT NULL,
Status TEXT NOT NULL,
Extra TEXT NOT NULL,
UserData TEXT NOT NULL,
ForeignKey TEXT NOT NULL,
SyncHubNodeName TEXT,
"HelpVal1" TEXT,
"HelpVal2" TEXT,
"HelpVal3" TEXT,
"HelpVal4" TEXT, "ObjectGUID" TEXT,
PRIMARY KEY("Index" AUTOINCREMENT)
);
CREATE TABLE Plugins_History(
"Index" INTEGER,
Plugin TEXT NOT NULL,
Object_PrimaryID TEXT NOT NULL,
Object_SecondaryID TEXT NOT NULL,
DateTimeCreated TEXT NOT NULL,
DateTimeChanged TEXT NOT NULL,
Watched_Value1 TEXT NOT NULL,
Watched_Value2 TEXT NOT NULL,
Watched_Value3 TEXT NOT NULL,
Watched_Value4 TEXT NOT NULL,
Status TEXT NOT NULL,
Extra TEXT NOT NULL,
UserData TEXT NOT NULL,
ForeignKey TEXT NOT NULL,
SyncHubNodeName TEXT,
"HelpVal1" TEXT,
"HelpVal2" TEXT,
"HelpVal3" TEXT,
"HelpVal4" TEXT, "ObjectGUID" TEXT,
PRIMARY KEY("Index" AUTOINCREMENT)
);
CREATE TABLE Plugins_Language_Strings(
"Index" INTEGER,
Language_Code TEXT NOT NULL,
String_Key TEXT NOT NULL,
String_Value TEXT NOT NULL,
Extra TEXT NOT NULL,
PRIMARY KEY("Index" AUTOINCREMENT)
);
CREATE TABLE CurrentScan (
cur_MAC STRING(50) NOT NULL COLLATE NOCASE,
cur_IP STRING(50) NOT NULL COLLATE NOCASE,
cur_Vendor STRING(250),
cur_ScanMethod STRING(10),
cur_Name STRING(250),
cur_LastQuery STRING(250),
cur_DateTime STRING(250),
cur_SyncHubNodeName STRING(50),
cur_NetworkSite STRING(250),
cur_SSID STRING(250),
cur_NetworkNodeMAC STRING(250),
cur_PORT STRING(250),
cur_Type STRING(250),
UNIQUE(cur_MAC)
);
CREATE TABLE IF NOT EXISTS "AppEvents" (
"Index" INTEGER PRIMARY KEY AUTOINCREMENT,
"GUID" TEXT UNIQUE,
"AppEventProcessed" BOOLEAN,
"DateTimeCreated" TEXT,
"ObjectType" TEXT,
"ObjectGUID" TEXT,
"ObjectPlugin" TEXT,
"ObjectPrimaryID" TEXT,
"ObjectSecondaryID" TEXT,
"ObjectForeignKey" TEXT,
"ObjectIndex" TEXT,
"ObjectIsNew" BOOLEAN,
"ObjectIsArchived" BOOLEAN,
"ObjectStatusColumn" TEXT,
"ObjectStatus" TEXT,
"AppEventType" TEXT,
"Helper1" TEXT,
"Helper2" TEXT,
"Helper3" TEXT,
"Extra" TEXT
);
CREATE TABLE IF NOT EXISTS "Notifications" (
"Index" INTEGER,
"GUID" TEXT UNIQUE,
"DateTimeCreated" TEXT,
"DateTimePushed" TEXT,
"Status" TEXT,
"JSON" TEXT,
"Text" TEXT,
"HTML" TEXT,
"PublishedVia" TEXT,
"Extra" TEXT,
PRIMARY KEY("Index" AUTOINCREMENT)
);
CREATE INDEX IDX_eve_DateTime ON Events (eve_DateTime);
CREATE INDEX IDX_eve_EventType ON Events (eve_EventType COLLATE NOCASE);
CREATE INDEX IDX_eve_MAC ON Events (eve_MAC COLLATE NOCASE);
CREATE INDEX IDX_eve_PairEventRowid ON Events (eve_PairEventRowid);
CREATE INDEX IDX_ses_EventTypeDisconnection ON Sessions (ses_EventTypeDisconnection COLLATE NOCASE);
CREATE INDEX IDX_ses_EventTypeConnection ON Sessions (ses_EventTypeConnection COLLATE NOCASE);
CREATE INDEX IDX_ses_DateTimeDisconnection ON Sessions (ses_DateTimeDisconnection);
CREATE INDEX IDX_ses_MAC ON Sessions (ses_MAC COLLATE NOCASE);
CREATE INDEX IDX_ses_DateTimeConnection ON Sessions (ses_DateTimeConnection);
CREATE INDEX IDX_dev_PresentLastScan ON Devices (devPresentLastScan);
CREATE INDEX IDX_dev_FirstConnection ON Devices (devFirstConnection);
CREATE INDEX IDX_dev_AlertDeviceDown ON Devices (devAlertDown);
CREATE INDEX IDX_dev_StaticIP ON Devices (devStaticIP);
CREATE INDEX IDX_dev_ScanCycle ON Devices (devScan);
CREATE INDEX IDX_dev_Favorite ON Devices (devFavorite);
CREATE INDEX IDX_dev_LastIP ON Devices (devLastIP);
CREATE INDEX IDX_dev_NewDevice ON Devices (devIsNew);
CREATE INDEX IDX_dev_Archived ON Devices (devIsArchived);
CREATE VIEW Events_Devices AS
SELECT *
FROM Events
LEFT JOIN Devices ON eve_MAC = devMac
/* Events_Devices(eve_MAC,eve_IP,eve_DateTime,eve_EventType,eve_AdditionalInfo,eve_PendingAlertEmail,eve_PairEventRowid,devMac,devName,devOwner,devType,devVendor,devFavorite,devGroup,devComments,devFirstConnection,devLastConnection,devLastIP,devStaticIP,devScan,devLogEvents,devAlertEvents,devAlertDown,devSkipRepeated,devLastNotification,devPresentLastScan,devIsNew,devLocation,devIsArchived,devParentMAC,devParentPort,devIcon,devGUID,devSite,devSSID,devSyncHubNode,devSourcePlugin,devCustomProps) */;
CREATE VIEW LatestEventsPerMAC AS
WITH RankedEvents AS (
SELECT
e.*,
ROW_NUMBER() OVER (PARTITION BY e.eve_MAC ORDER BY e.eve_DateTime DESC) AS row_num
FROM Events AS e
)
SELECT
e.*,
d.*,
c.*
FROM RankedEvents AS e
LEFT JOIN Devices AS d ON e.eve_MAC = d.devMac
INNER JOIN CurrentScan AS c ON e.eve_MAC = c.cur_MAC
WHERE e.row_num = 1
/* LatestEventsPerMAC(eve_MAC,eve_IP,eve_DateTime,eve_EventType,eve_AdditionalInfo,eve_PendingAlertEmail,eve_PairEventRowid,row_num,devMac,devName,devOwner,devType,devVendor,devFavorite,devGroup,devComments,devFirstConnection,devLastConnection,devLastIP,devStaticIP,devScan,devLogEvents,devAlertEvents,devAlertDown,devSkipRepeated,devLastNotification,devPresentLastScan,devIsNew,devLocation,devIsArchived,devParentMAC,devParentPort,devIcon,devGUID,devSite,devSSID,devSyncHubNode,devSourcePlugin,devCustomProps,cur_MAC,cur_IP,cur_Vendor,cur_ScanMethod,cur_Name,cur_LastQuery,cur_DateTime,cur_SyncHubNodeName,cur_NetworkSite,cur_SSID,cur_NetworkNodeMAC,cur_PORT,cur_Type) */;
CREATE VIEW Sessions_Devices AS SELECT * FROM Sessions LEFT JOIN "Devices" ON ses_MAC = devMac
/* Sessions_Devices(ses_MAC,ses_IP,ses_EventTypeConnection,ses_DateTimeConnection,ses_EventTypeDisconnection,ses_DateTimeDisconnection,ses_StillConnected,ses_AdditionalInfo,devMac,devName,devOwner,devType,devVendor,devFavorite,devGroup,devComments,devFirstConnection,devLastConnection,devLastIP,devStaticIP,devScan,devLogEvents,devAlertEvents,devAlertDown,devSkipRepeated,devLastNotification,devPresentLastScan,devIsNew,devLocation,devIsArchived,devParentMAC,devParentPort,devIcon,devGUID,devSite,devSSID,devSyncHubNode,devSourcePlugin,devCustomProps) */;
CREATE VIEW Convert_Events_to_Sessions AS SELECT EVE1.eve_MAC,
EVE1.eve_IP,
EVE1.eve_EventType AS eve_EventTypeConnection,
EVE1.eve_DateTime AS eve_DateTimeConnection,
CASE WHEN EVE2.eve_EventType IN ('Disconnected', 'Device Down') OR
EVE2.eve_EventType IS NULL THEN EVE2.eve_EventType ELSE '<missing event>' END AS eve_EventTypeDisconnection,
CASE WHEN EVE2.eve_EventType IN ('Disconnected', 'Device Down') THEN EVE2.eve_DateTime ELSE NULL END AS eve_DateTimeDisconnection,
CASE WHEN EVE2.eve_EventType IS NULL THEN 1 ELSE 0 END AS eve_StillConnected,
EVE1.eve_AdditionalInfo
FROM Events AS EVE1
LEFT JOIN
Events AS EVE2 ON EVE1.eve_PairEventRowID = EVE2.RowID
WHERE EVE1.eve_EventType IN ('New Device', 'Connected','Down Reconnected')
UNION
SELECT eve_MAC,
eve_IP,
'<missing event>' AS eve_EventTypeConnection,
NULL AS eve_DateTimeConnection,
eve_EventType AS eve_EventTypeDisconnection,
eve_DateTime AS eve_DateTimeDisconnection,
0 AS eve_StillConnected,
eve_AdditionalInfo
FROM Events AS EVE1
WHERE (eve_EventType = 'Device Down' OR
eve_EventType = 'Disconnected') AND
EVE1.eve_PairEventRowID IS NULL
/* Convert_Events_to_Sessions(eve_MAC,eve_IP,eve_EventTypeConnection,eve_DateTimeConnection,eve_EventTypeDisconnection,eve_DateTimeDisconnection,eve_StillConnected,eve_AdditionalInfo) */;
CREATE TRIGGER "trg_insert_devices"
AFTER INSERT ON "Devices"
WHEN NOT EXISTS (
SELECT 1 FROM AppEvents
WHERE AppEventProcessed = 0
AND ObjectType = 'Devices'
AND ObjectGUID = NEW.devGUID
AND ObjectStatus = CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END
AND AppEventType = 'insert'
)
BEGIN
INSERT INTO "AppEvents" (
"GUID",
"DateTimeCreated",
"AppEventProcessed",
"ObjectType",
"ObjectGUID",
"ObjectPrimaryID",
"ObjectSecondaryID",
"ObjectStatus",
"ObjectStatusColumn",
"ObjectIsNew",
"ObjectIsArchived",
"ObjectForeignKey",
"ObjectPlugin",
"AppEventType"
)
VALUES (
lower(
hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || '4' ||
substr(hex( randomblob(2)), 2) || '-' ||
substr('AB89', 1 + (abs(random()) % 4) , 1) ||
substr(hex(randomblob(2)), 2) || '-' ||
hex(randomblob(6))
)
,
DATETIME('now'),
FALSE,
'Devices',
NEW.devGUID, -- ObjectGUID
NEW.devMac, -- ObjectPrimaryID
NEW.devLastIP, -- ObjectSecondaryID
CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END, -- ObjectStatus
'devPresentLastScan', -- ObjectStatusColumn
NEW.devIsNew, -- ObjectIsNew
NEW.devIsArchived, -- ObjectIsArchived
NEW.devGUID, -- ObjectForeignKey
'DEVICES', -- ObjectPlugin
'insert'
);
END;
CREATE TRIGGER "trg_update_devices"
AFTER UPDATE ON "Devices"
WHEN NOT EXISTS (
SELECT 1 FROM AppEvents
WHERE AppEventProcessed = 0
AND ObjectType = 'Devices'
AND ObjectGUID = NEW.devGUID
AND ObjectStatus = CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END
AND AppEventType = 'update'
)
BEGIN
INSERT INTO "AppEvents" (
"GUID",
"DateTimeCreated",
"AppEventProcessed",
"ObjectType",
"ObjectGUID",
"ObjectPrimaryID",
"ObjectSecondaryID",
"ObjectStatus",
"ObjectStatusColumn",
"ObjectIsNew",
"ObjectIsArchived",
"ObjectForeignKey",
"ObjectPlugin",
"AppEventType"
)
VALUES (
lower(
hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || '4' ||
substr(hex( randomblob(2)), 2) || '-' ||
substr('AB89', 1 + (abs(random()) % 4) , 1) ||
substr(hex(randomblob(2)), 2) || '-' ||
hex(randomblob(6))
)
,
DATETIME('now'),
FALSE,
'Devices',
NEW.devGUID, -- ObjectGUID
NEW.devMac, -- ObjectPrimaryID
NEW.devLastIP, -- ObjectSecondaryID
CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END, -- ObjectStatus
'devPresentLastScan', -- ObjectStatusColumn
NEW.devIsNew, -- ObjectIsNew
NEW.devIsArchived, -- ObjectIsArchived
NEW.devGUID, -- ObjectForeignKey
'DEVICES', -- ObjectPlugin
'update'
);
END;
CREATE TRIGGER "trg_delete_devices"
AFTER DELETE ON "Devices"
WHEN NOT EXISTS (
SELECT 1 FROM AppEvents
WHERE AppEventProcessed = 0
AND ObjectType = 'Devices'
AND ObjectGUID = OLD.devGUID
AND ObjectStatus = CASE WHEN OLD.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END
AND AppEventType = 'delete'
)
BEGIN
INSERT INTO "AppEvents" (
"GUID",
"DateTimeCreated",
"AppEventProcessed",
"ObjectType",
"ObjectGUID",
"ObjectPrimaryID",
"ObjectSecondaryID",
"ObjectStatus",
"ObjectStatusColumn",
"ObjectIsNew",
"ObjectIsArchived",
"ObjectForeignKey",
"ObjectPlugin",
"AppEventType"
)
VALUES (
lower(
hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || '4' ||
substr(hex( randomblob(2)), 2) || '-' ||
substr('AB89', 1 + (abs(random()) % 4) , 1) ||
substr(hex(randomblob(2)), 2) || '-' ||
hex(randomblob(6))
)
,
DATETIME('now'),
FALSE,
'Devices',
OLD.devGUID, -- ObjectGUID
OLD.devMac, -- ObjectPrimaryID
OLD.devLastIP, -- ObjectSecondaryID
CASE WHEN OLD.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END, -- ObjectStatus
'devPresentLastScan', -- ObjectStatusColumn
OLD.devIsNew, -- ObjectIsNew
OLD.devIsArchived, -- ObjectIsArchived
OLD.devGUID, -- ObjectForeignKey
'DEVICES', -- ObjectPlugin
'delete'
);
END;
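To make the trigger wiring above concrete, here is a minimal sketch (not part of the diff) of what happens when a row is inserted into `Devices`: `trg_insert_devices` queues a matching `AppEvents` row. The `app.db` path and the sample MAC/GUID values are assumptions for illustration only.

```python
import sqlite3
from datetime import datetime

# Hypothetical database path -- point this at a DB created from back/app.sql.
conn = sqlite3.connect("app.db")
now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")

# Columns not listed fall back to the defaults declared in the schema above.
conn.execute(
    """INSERT INTO Devices
           (devMac, devLastIP, devFirstConnection, devLastConnection,
            devPresentLastScan, devGUID)
       VALUES (?, ?, ?, ?, 1, ?)""",
    ("AA:BB:CC:DD:EE:FF", "192.168.1.50", now, now,
     "11111111-2222-4333-8444-555555555555"),
)
conn.commit()

# trg_insert_devices should have queued an AppEvents row for this device.
row = conn.execute(
    """SELECT AppEventType, ObjectPrimaryID, ObjectStatus
         FROM AppEvents
        WHERE ObjectPlugin = 'DEVICES'
        ORDER BY "Index" DESC
        LIMIT 1"""
).fetchone()
print(row)  # expected: ('insert', 'AA:BB:CC:DD:EE:FF', 'online')

conn.close()
```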

View File

@@ -1,14 +1,17 @@
#!/bin/bash
export INSTALL_DIR=/app
LOG_FILE="${INSTALL_DIR}/log/execution_queue.log"
# Check if there are any entries with cron_restart_backend
if grep -q "cron_restart_backend" "$LOG_FILE"; then
# Restart python application using s6
s6-svc -r /var/run/s6-rc/servicedirs/netalertx
echo 'done'
if [ -f "${LOG_EXECUTION_QUEUE}" ] && grep -q "cron_restart_backend" "${LOG_EXECUTION_QUEUE}"; then
echo "$(date): Restarting backend triggered by cron_restart_backend"
killall python3 || echo "killall python3 failed or no process found"
sleep 2
/services/start-backend.sh &
# Remove all lines containing cron_restart_backend from the log file
sed -i '/cron_restart_backend/d' "$LOG_FILE"
# Atomic replacement with temp file. grep returns 1 if no lines selected (file becomes empty), which is valid here.
grep -v "cron_restart_backend" "${LOG_EXECUTION_QUEUE}" > "${LOG_EXECUTION_QUEUE}.tmp"
RC=$?
if [ $RC -eq 0 ] || [ $RC -eq 1 ]; then
mv "${LOG_EXECUTION_QUEUE}.tmp" "${LOG_EXECUTION_QUEUE}"
fi
fi
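The filter-and-swap approach in the updated script (copy everything except the marker lines to a temp file, then atomically replace the log) can be sketched in Python as well. The log path and function name below are illustrative assumptions, not part of the repository.

```python
import os
import tempfile

LOG_PATH = "/app/log/execution_queue.log"  # assumed path, for illustration only
MARKER = "cron_restart_backend"

def consume_restart_marker(log_path: str, marker: str) -> bool:
    """Return True if the marker was present; rewrite the log without those lines."""
    if not os.path.isfile(log_path):
        return False
    with open(log_path) as fh:
        lines = fh.readlines()
    if not any(MARKER in line for line in lines):
        return False
    # Write the filtered lines to a temp file in the same directory, then
    # atomically swap it in (the Python equivalent of the grep/mv pattern above).
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(log_path) or ".")
    with os.fdopen(fd, "w") as tmp:
        tmp.writelines(line for line in lines if marker not in line)
    os.replace(tmp_path, log_path)
    return True

if consume_restart_marker(LOG_PATH, MARKER):
    print("backend restart requested")
```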

111367
back/ieee-oui.txt Executable file

File diff suppressed because it is too large Load Diff


2
db/.gitignore vendored
View File

@@ -1,2 +0,0 @@
*
!.gitignore

View File

@@ -1,82 +1,75 @@
services:
netalertx:
privileged: true
# use an environment variable to set host networking mode if needed
network_mode: ${NETALERTX_NETWORK_MODE:-host} # Use host networking for ARP scanning and other services
build:
dockerfile: Dockerfile
context: .
cache_from:
- type=registry,ref=docker.io/jokob-sk/netalertx:buildcache
container_name: netalertx
network_mode: host
# restart: unless-stopped
context: . # Build context is the current directory
dockerfile: Dockerfile # Specify the Dockerfile to use
image: netalertx:latest
container_name: netalertx # The name shown by docker container ls
read_only: true # Make the container filesystem read-only
cap_drop: # Drop all capabilities for enhanced security
- ALL
cap_add: # Add only the necessary capabilities
- NET_ADMIN # Required for ARP scanning
- NET_RAW # Required for raw socket operations
- NET_BIND_SERVICE # Required to bind to privileged ports (nbtscan)
volumes:
# - ${APP_DATA_LOCATION}/netalertx_dev/config:/app/config
- ${APP_DATA_LOCATION}/netalertx/config:/app/config
# - ${APP_DATA_LOCATION}/netalertx_dev/db:/app/db
- ${APP_DATA_LOCATION}/netalertx/db:/app/db
# (optional) useful for debugging if you have issues setting up the container
- ${APP_DATA_LOCATION}/netalertx/log:/app/log
# (API: OPTION 1) use for performance
- type: tmpfs
target: /app/api
# (API: OPTION 2) use when debugging issues
# - ${DEV_LOCATION}/api:/app/api
# ---------------------------------------------------------------------------
# DELETE START: anyone trying to use this file should comment out or delete the lines BELOW; they are for development purposes only
- ${APP_DATA_LOCATION}/netalertx/dhcp_samples/dhcp1.leases:/mnt/dhcp1.leases # test data for DHCPLSS plugin
- ${APP_DATA_LOCATION}/netalertx/dhcp_samples/dhcp2.leases:/mnt/dhcp2.leases # test data for DHCPLSS plugin
- ${APP_DATA_LOCATION}/netalertx/dhcp_samples/pihole_dhcp_full.leases:/etc/pihole/dhcp.leases # test data for DHCPLSS plugin
- ${APP_DATA_LOCATION}/netalertx/dhcp_samples/pihole_dhcp_2.leases:/etc/pihole/dhcp2.leases # test data for DHCPLSS plugin
- ${APP_DATA_LOCATION}/pihole/etc-pihole/pihole-FTL.db:/etc/pihole/pihole-FTL.db # test data for PIHOLE plugin
- ${DEV_LOCATION}/mkdocs.yml:/app/mkdocs.yml
- ${DEV_LOCATION}/docs:/app/docs
- ${DEV_LOCATION}/server:/app/server
- ${DEV_LOCATION}/test:/app/test
- ${DEV_LOCATION}/dockerfiles:/app/dockerfiles
# - ${APP_DATA_LOCATION}/netalertx/php.ini:/etc/php/8.2/fpm/php.ini
- ${DEV_LOCATION}/install:/app/install
- ${DEV_LOCATION}/front/css:/app/front/css
- ${DEV_LOCATION}/front/img:/app/front/img
- ${DEV_LOCATION}/back/update_vendors.sh:/app/back/update_vendors.sh
- ${DEV_LOCATION}/front/lib:/app/front/lib
- ${DEV_LOCATION}/front/js:/app/front/js
- ${DEV_LOCATION}/front/php:/app/front/php
- ${DEV_LOCATION}/front/deviceDetails.php:/app/front/deviceDetails.php
- ${DEV_LOCATION}/front/deviceDetailsEdit.php:/app/front/deviceDetailsEdit.php
- ${DEV_LOCATION}/front/userNotifications.php:/app/front/userNotifications.php
- ${DEV_LOCATION}/front/deviceDetailsTools.php:/app/front/deviceDetailsTools.php
- ${DEV_LOCATION}/front/deviceDetailsPresence.php:/app/front/deviceDetailsPresence.php
- ${DEV_LOCATION}/front/deviceDetailsSessions.php:/app/front/deviceDetailsSessions.php
- ${DEV_LOCATION}/front/deviceDetailsEvents.php:/app/front/deviceDetailsEvents.php
- ${DEV_LOCATION}/front/devices.php:/app/front/devices.php
- ${DEV_LOCATION}/front/events.php:/app/front/events.php
- ${DEV_LOCATION}/front/plugins.php:/app/front/plugins.php
- ${DEV_LOCATION}/front/pluginsCore.php:/app/front/pluginsCore.php
- ${DEV_LOCATION}/front/index.php:/app/front/index.php
- ${DEV_LOCATION}/front/initCheck.php:/app/front/initCheck.php
- ${DEV_LOCATION}/front/maintenance.php:/app/front/maintenance.php
- ${DEV_LOCATION}/front/network.php:/app/front/network.php
- ${DEV_LOCATION}/front/presence.php:/app/front/presence.php
- ${DEV_LOCATION}/front/settings.php:/app/front/settings.php
- ${DEV_LOCATION}/front/systeminfo.php:/app/front/systeminfo.php
- ${DEV_LOCATION}/front/systeminfoNetwork.php:/app/front/systeminfoNetwork.php
- ${DEV_LOCATION}/front/systeminfoServer.php:/app/front/systeminfoServer.php
- ${DEV_LOCATION}/front/systeminfoStorage.php:/app/front/systeminfoStorage.php
- ${DEV_LOCATION}/front/cloud_services.php:/app/front/cloud_services.php
- ${DEV_LOCATION}/front/report.php:/app/front/report.php
- ${DEV_LOCATION}/front/workflows.php:/app/front/workflows.php
- ${DEV_LOCATION}/front/workflowsCore.php:/app/front/workflowsCore.php
- ${DEV_LOCATION}/front/appEvents.php:/app/front/appEvents.php
- ${DEV_LOCATION}/front/appEventsCore.php:/app/front/appEventsCore.php
- ${DEV_LOCATION}/front/multiEditCore.php:/app/front/multiEditCore.php
- ${DEV_LOCATION}/front/plugins:/app/front/plugins
# DELETE END: anyone trying to use this file should comment out or delete the lines ABOVE; they are for development purposes only
# ---------------------------------------------------------------------------
environment:
# - APP_CONF_OVERRIDE={"SCAN_SUBNETS":"['192.168.1.0/24 --interface=eth1']","GRAPHQL_PORT":"20223","UI_theme":"Light"}
- TZ=${TZ}
- PORT=${PORT}
# ❗ DANGER ZONE BELOW - Setting ALWAYS_FRESH_INSTALL=true will delete the content of the /db & /config folders
- ALWAYS_FRESH_INSTALL=${ALWAYS_FRESH_INSTALL}
# - LOADED_PLUGINS=["DHCPLSS","PIHOLE","ASUSWRT","FREEBOX"]
- type: volume # Persistent Docker-managed Named Volume for storage
source: netalertx_data # the default name of the volume is netalertx_data
target: /data # consolidated configuration and database storage
read_only: false # writable volume
# Example custom local folder called /home/user/netalertx_data
# - type: bind
# source: /home/user/netalertx_data
# target: /data
# read_only: false
# ... or use the alternative format
# - /home/user/netalertx_data:/data:rw
- type: bind # Bind mount for timezone consistency
source: /etc/localtime
target: /etc/localtime
read_only: true
# Use a custom Enterprise-configured nginx config for ldap or other settings
# - /custom-enterprise.conf:/tmp/nginx/active-config/netalertx.conf:ro
# Test your plugin on the production container
# - /path/on/host:/app/front/plugins/custom
# Retain logs - comment out tmpfs /tmp/log if you want to retain logs between container restarts
# - /path/on/host/log:/tmp/log
# tmpfs mounts provide writable directories in a read-only container and improve system performance
# All writes now live under /tmp/* subdirectories which are created dynamically by entrypoint.d scripts
# uid=20211 and gid=20211 is the netalertx user inside the container
# mode=1700 gives rwx------ permissions to the netalertx user only
tmpfs:
- "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
environment:
LISTEN_ADDR: ${LISTEN_ADDR:-0.0.0.0} # Listen for connections on all interfaces
PORT: ${PORT:-20211} # Application port
GRAPHQL_PORT: ${GRAPHQL_PORT:-20212} # GraphQL API port
ALWAYS_FRESH_INSTALL: ${ALWAYS_FRESH_INSTALL:-false} # Set to true to reset your config and database on each container start
NETALERTX_DEBUG: ${NETALERTX_DEBUG:-0} # 0 = kill all services and restart if any dies; 1 = keep running even if a service dies
# Resource limits to prevent resource exhaustion
mem_limit: 2048m # Maximum memory usage
mem_reservation: 1024m # Soft memory limit
cpu_shares: 512 # Relative CPU weight for CPU contention scenarios
pids_limit: 512 # Limit the number of processes/threads to prevent fork bombs
logging:
driver: "json-file" # Use JSON file logging driver
options:
max-size: "10m" # Rotate log files after they reach 10MB
max-file: "3" # Keep a maximum of 3 log files
# Always restart the container unless explicitly stopped
restart: unless-stopped
volumes: # Persistent volume for configuration and database storage
netalertx_data:

534
docker_build.log Executable file
View File

@@ -0,0 +1,534 @@
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 5.29kB done
#1 DONE 0.1s
#2 [auth] library/alpine:pull token for registry-1.docker.io
#2 DONE 0.0s
#3 [internal] load metadata for docker.io/library/alpine:3.22
#3 DONE 0.4s
#4 [internal] load .dockerignore
#4 transferring context: 216B done
#4 DONE 0.1s
#5 [builder 1/15] FROM docker.io/library/alpine:3.22@sha256:4bcff63911fcb4448bd4fdacec207030997caf25e9bea4045fa6c8c44de311d1
#5 CACHED
#6 [internal] load build context
#6 transferring context: 36.76kB 0.0s done
#6 DONE 0.1s
#7 [builder 2/15] RUN apk add --no-cache bash shadow python3 python3-dev gcc musl-dev libffi-dev openssl-dev git && python -m venv /opt/venv
#7 0.443 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/main/x86_64/APKINDEX.tar.gz
#7 0.688 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/community/x86_64/APKINDEX.tar.gz
#7 1.107 (1/52) Upgrading libcrypto3 (3.5.1-r0 -> 3.5.3-r0)
#7 1.358 (2/52) Upgrading libssl3 (3.5.1-r0 -> 3.5.3-r0)
#7 1.400 (3/52) Installing ncurses-terminfo-base (6.5_p20250503-r0)
#7 1.413 (4/52) Installing libncursesw (6.5_p20250503-r0)
#7 1.444 (5/52) Installing readline (8.2.13-r1)
#7 1.471 (6/52) Installing bash (5.2.37-r0)
#7 1.570 Executing bash-5.2.37-r0.post-install
#7 1.593 (7/52) Installing libgcc (14.2.0-r6)
#7 1.605 (8/52) Installing jansson (2.14.1-r0)
#7 1.613 (9/52) Installing libstdc++ (14.2.0-r6)
#7 1.705 (10/52) Installing zstd-libs (1.5.7-r0)
#7 1.751 (11/52) Installing binutils (2.44-r3)
#7 2.041 (12/52) Installing libgomp (14.2.0-r6)
#7 2.064 (13/52) Installing libatomic (14.2.0-r6)
#7 2.071 (14/52) Installing gmp (6.3.0-r3)
#7 2.097 (15/52) Installing isl26 (0.26-r1)
#7 2.183 (16/52) Installing mpfr4 (4.2.1_p1-r0)
#7 2.219 (17/52) Installing mpc1 (1.3.1-r1)
#7 2.231 (18/52) Installing gcc (14.2.0-r6)
#7 6.782 (19/52) Installing brotli-libs (1.1.0-r2)
#7 6.828 (20/52) Installing c-ares (1.34.5-r0)
#7 6.846 (21/52) Installing libunistring (1.3-r0)
#7 6.919 (22/52) Installing libidn2 (2.3.7-r0)
#7 6.937 (23/52) Installing nghttp2-libs (1.65.0-r0)
#7 6.950 (24/52) Installing libpsl (0.21.5-r3)
#7 6.960 (25/52) Installing libcurl (8.14.1-r1)
#7 7.015 (26/52) Installing libexpat (2.7.2-r0)
#7 7.029 (27/52) Installing pcre2 (10.43-r1)
#7 7.069 (28/52) Installing git (2.49.1-r0)
#7 7.397 (29/52) Installing git-init-template (2.49.1-r0)
#7 7.404 (30/52) Installing linux-headers (6.14.2-r0)
#7 7.572 (31/52) Installing libffi (3.4.8-r0)
#7 7.578 (32/52) Installing pkgconf (2.4.3-r0)
#7 7.593 (33/52) Installing libffi-dev (3.4.8-r0)
#7 7.607 (34/52) Installing musl-dev (1.2.5-r10)
#7 7.961 (35/52) Installing openssl-dev (3.5.3-r0)
#7 8.021 (36/52) Installing libbz2 (1.0.8-r6)
#7 8.045 (37/52) Installing gdbm (1.24-r0)
#7 8.055 (38/52) Installing xz-libs (5.8.1-r0)
#7 8.071 (39/52) Installing mpdecimal (4.0.1-r0)
#7 8.090 (40/52) Installing libpanelw (6.5_p20250503-r0)
#7 8.098 (41/52) Installing sqlite-libs (3.49.2-r1)
#7 8.185 (42/52) Installing python3 (3.12.11-r0)
#7 8.904 (43/52) Installing python3-pycache-pyc0 (3.12.11-r0)
#7 9.292 (44/52) Installing pyc (3.12.11-r0)
#7 9.292 (45/52) Installing python3-pyc (3.12.11-r0)
#7 9.292 (46/52) Installing python3-dev (3.12.11-r0)
#7 10.71 (47/52) Installing libmd (1.1.0-r0)
#7 10.72 (48/52) Installing libbsd (0.12.2-r0)
#7 10.73 (49/52) Installing skalibs-libs (2.14.4.0-r0)
#7 10.75 (50/52) Installing utmps-libs (0.1.3.1-r0)
#7 10.76 (51/52) Installing linux-pam (1.7.0-r4)
#7 10.82 (52/52) Installing shadow (4.17.3-r0)
#7 10.88 Executing busybox-1.37.0-r18.trigger
#7 10.90 OK: 274 MiB in 66 packages
#7 DONE 14.4s
#8 [builder 3/15] RUN mkdir -p /app
#8 DONE 0.5s
#9 [builder 4/15] COPY api /app/api
#9 DONE 0.3s
#10 [builder 5/15] COPY back /app/back
#10 DONE 0.3s
#11 [builder 6/15] COPY config /app/config
#11 DONE 0.3s
#12 [builder 7/15] COPY db /app/db
#12 DONE 0.3s
#13 [builder 8/15] COPY dockerfiles /app/dockerfiles
#13 DONE 0.3s
#14 [builder 9/15] COPY front /app/front
#14 DONE 0.4s
#15 [builder 10/15] COPY server /app/server
#15 DONE 0.3s
#16 [builder 11/15] COPY install/crontab /etc/crontabs/root
#16 DONE 0.3s
#17 [builder 12/15] COPY dockerfiles/start* /start*.sh
#17 DONE 0.3s
#18 [builder 13/15] RUN pip install openwrt-luci-rpc asusrouter asyncio aiohttp graphene flask flask-cors unifi-sm-api tplink-omada-client wakeonlan pycryptodome requests paho-mqtt scapy cron-converter pytz json2table dhcp-leases pyunifi speedtest-cli chardet python-nmap dnspython librouteros yattag git+https://github.com/foreign-sub/aiofreepybox.git
#18 0.737 Collecting git+https://github.com/foreign-sub/aiofreepybox.git
#18 0.737 Cloning https://github.com/foreign-sub/aiofreepybox.git to /tmp/pip-req-build-waf5_npl
#18 0.738 Running command git clone --filter=blob:none --quiet https://github.com/foreign-sub/aiofreepybox.git /tmp/pip-req-build-waf5_npl
#18 1.617 Resolved https://github.com/foreign-sub/aiofreepybox.git to commit 4ee18ea0f3e76edc839c48eb8df1da59c1baee3d
#18 1.620 Installing build dependencies: started
#18 3.337 Installing build dependencies: finished with status 'done'
#18 3.337 Getting requirements to build wheel: started
#18 3.491 Getting requirements to build wheel: finished with status 'done'
#18 3.492 Preparing metadata (pyproject.toml): started
#18 3.650 Preparing metadata (pyproject.toml): finished with status 'done'
#18 3.724 Collecting openwrt-luci-rpc
#18 3.753 Downloading openwrt_luci_rpc-1.1.17-py2.py3-none-any.whl.metadata (4.9 kB)
#18 3.892 Collecting asusrouter
#18 3.900 Downloading asusrouter-1.21.0-py3-none-any.whl.metadata (33 kB)
#18 3.999 Collecting asyncio
#18 4.007 Downloading asyncio-4.0.0-py3-none-any.whl.metadata (994 bytes)
#18 4.576 Collecting aiohttp
#18 4.582 Downloading aiohttp-3.12.15-cp312-cp312-musllinux_1_2_x86_64.whl.metadata (7.7 kB)
#18 4.729 Collecting graphene
#18 4.735 Downloading graphene-3.4.3-py2.py3-none-any.whl.metadata (6.9 kB)
#18 4.858 Collecting flask
#18 4.866 Downloading flask-3.1.2-py3-none-any.whl.metadata (3.2 kB)
#18 4.963 Collecting flask-cors
#18 4.972 Downloading flask_cors-6.0.1-py3-none-any.whl.metadata (5.3 kB)
#18 5.055 Collecting unifi-sm-api
#18 5.065 Downloading unifi_sm_api-0.2.1-py3-none-any.whl.metadata (2.3 kB)
#18 5.155 Collecting tplink-omada-client
#18 5.166 Downloading tplink_omada_client-1.4.4-py3-none-any.whl.metadata (3.5 kB)
#18 5.262 Collecting wakeonlan
#18 5.274 Downloading wakeonlan-3.1.0-py3-none-any.whl.metadata (4.3 kB)
#18 5.500 Collecting pycryptodome
#18 5.505 Downloading pycryptodome-3.23.0-cp37-abi3-musllinux_1_2_x86_64.whl.metadata (3.4 kB)
#18 5.653 Collecting requests
#18 5.660 Downloading requests-2.32.5-py3-none-any.whl.metadata (4.9 kB)
#18 5.764 Collecting paho-mqtt
#18 5.775 Downloading paho_mqtt-2.1.0-py3-none-any.whl.metadata (23 kB)
#18 5.890 Collecting scapy
#18 5.902 Downloading scapy-2.6.1-py3-none-any.whl.metadata (5.6 kB)
#18 6.002 Collecting cron-converter
#18 6.013 Downloading cron_converter-1.2.2-py3-none-any.whl.metadata (8.1 kB)
#18 6.187 Collecting pytz
#18 6.193 Downloading pytz-2025.2-py2.py3-none-any.whl.metadata (22 kB)
#18 6.285 Collecting json2table
#18 6.294 Downloading json2table-1.1.5-py2.py3-none-any.whl.metadata (6.0 kB)
#18 6.381 Collecting dhcp-leases
#18 6.387 Downloading dhcp_leases-0.1.6-py3-none-any.whl.metadata (5.9 kB)
#18 6.461 Collecting pyunifi
#18 6.471 Downloading pyunifi-2.21-py3-none-any.whl.metadata (274 bytes)
#18 6.582 Collecting speedtest-cli
#18 6.596 Downloading speedtest_cli-2.1.3-py2.py3-none-any.whl.metadata (6.8 kB)
#18 6.767 Collecting chardet
#18 6.780 Downloading chardet-5.2.0-py3-none-any.whl.metadata (3.4 kB)
#18 6.878 Collecting python-nmap
#18 6.886 Downloading python-nmap-0.7.1.tar.gz (44 kB)
#18 6.937 Installing build dependencies: started
#18 8.245 Installing build dependencies: finished with status 'done'
#18 8.246 Getting requirements to build wheel: started
#18 8.411 Getting requirements to build wheel: finished with status 'done'
#18 8.412 Preparing metadata (pyproject.toml): started
#18 8.575 Preparing metadata (pyproject.toml): finished with status 'done'
#18 8.648 Collecting dnspython
#18 8.654 Downloading dnspython-2.8.0-py3-none-any.whl.metadata (5.7 kB)
#18 8.741 Collecting librouteros
#18 8.752 Downloading librouteros-3.4.1-py3-none-any.whl.metadata (1.6 kB)
#18 8.869 Collecting yattag
#18 8.881 Downloading yattag-1.16.1.tar.gz (29 kB)
#18 8.925 Installing build dependencies: started
#18 10.23 Installing build dependencies: finished with status 'done'
#18 10.23 Getting requirements to build wheel: started
#18 10.38 Getting requirements to build wheel: finished with status 'done'
#18 10.39 Preparing metadata (pyproject.toml): started
#18 10.55 Preparing metadata (pyproject.toml): finished with status 'done'
#18 10.60 Collecting Click>=6.0 (from openwrt-luci-rpc)
#18 10.60 Downloading click-8.3.0-py3-none-any.whl.metadata (2.6 kB)
#18 10.70 Collecting packaging>=19.1 (from openwrt-luci-rpc)
#18 10.71 Downloading packaging-25.0-py3-none-any.whl.metadata (3.3 kB)
#18 10.87 Collecting urllib3>=1.26.14 (from asusrouter)
#18 10.88 Downloading urllib3-2.5.0-py3-none-any.whl.metadata (6.5 kB)
#18 10.98 Collecting xmltodict>=0.12.0 (from asusrouter)
#18 10.98 Downloading xmltodict-1.0.2-py3-none-any.whl.metadata (15 kB)
#18 11.09 Collecting aiohappyeyeballs>=2.5.0 (from aiohttp)
#18 11.10 Downloading aiohappyeyeballs-2.6.1-py3-none-any.whl.metadata (5.9 kB)
#18 11.19 Collecting aiosignal>=1.4.0 (from aiohttp)
#18 11.20 Downloading aiosignal-1.4.0-py3-none-any.whl.metadata (3.7 kB)
#18 11.32 Collecting attrs>=17.3.0 (from aiohttp)
#18 11.33 Downloading attrs-25.3.0-py3-none-any.whl.metadata (10 kB)
#18 11.47 Collecting frozenlist>=1.1.1 (from aiohttp)
#18 11.47 Downloading frozenlist-1.7.0-cp312-cp312-musllinux_1_2_x86_64.whl.metadata (18 kB)
#18 11.76 Collecting multidict<7.0,>=4.5 (from aiohttp)
#18 11.77 Downloading multidict-6.6.4-cp312-cp312-musllinux_1_2_x86_64.whl.metadata (5.3 kB)
#18 11.87 Collecting propcache>=0.2.0 (from aiohttp)
#18 11.88 Downloading propcache-0.3.2-cp312-cp312-musllinux_1_2_x86_64.whl.metadata (12 kB)
#18 12.19 Collecting yarl<2.0,>=1.17.0 (from aiohttp)
#18 12.20 Downloading yarl-1.20.1-cp312-cp312-musllinux_1_2_x86_64.whl.metadata (73 kB)
#18 12.31 Collecting graphql-core<3.3,>=3.1 (from graphene)
#18 12.32 Downloading graphql_core-3.2.6-py3-none-any.whl.metadata (11 kB)
#18 12.41 Collecting graphql-relay<3.3,>=3.1 (from graphene)
#18 12.42 Downloading graphql_relay-3.2.0-py3-none-any.whl.metadata (12 kB)
#18 12.50 Collecting python-dateutil<3,>=2.7.0 (from graphene)
#18 12.51 Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
#18 12.61 Collecting typing-extensions<5,>=4.7.1 (from graphene)
#18 12.61 Downloading typing_extensions-4.15.0-py3-none-any.whl.metadata (3.3 kB)
#18 12.71 Collecting blinker>=1.9.0 (from flask)
#18 12.72 Downloading blinker-1.9.0-py3-none-any.whl.metadata (1.6 kB)
#18 12.84 Collecting itsdangerous>=2.2.0 (from flask)
#18 12.85 Downloading itsdangerous-2.2.0-py3-none-any.whl.metadata (1.9 kB)
#18 12.97 Collecting jinja2>=3.1.2 (from flask)
#18 12.98 Downloading jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)
#18 13.15 Collecting markupsafe>=2.1.1 (from flask)
#18 13.15 Downloading MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_x86_64.whl.metadata (4.0 kB)
#18 13.28 Collecting werkzeug>=3.1.0 (from flask)
#18 13.29 Downloading werkzeug-3.1.3-py3-none-any.whl.metadata (3.7 kB)
#18 13.42 Collecting awesomeversion>=22.9.0 (from tplink-omada-client)
#18 13.42 Downloading awesomeversion-25.8.0-py3-none-any.whl.metadata (9.8 kB)
#18 13.59 Collecting charset_normalizer<4,>=2 (from requests)
#18 13.59 Downloading charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_x86_64.whl.metadata (36 kB)
#18 13.77 Collecting idna<4,>=2.5 (from requests)
#18 13.78 Downloading idna-3.10-py3-none-any.whl.metadata (10 kB)
#18 13.94 Collecting certifi>=2017.4.17 (from requests)
#18 13.94 Downloading certifi-2025.8.3-py3-none-any.whl.metadata (2.4 kB)
#18 14.06 Collecting toml<0.11.0,>=0.10.2 (from librouteros)
#18 14.07 Downloading toml-0.10.2-py2.py3-none-any.whl.metadata (7.1 kB)
#18 14.25 Collecting six>=1.5 (from python-dateutil<3,>=2.7.0->graphene)
#18 14.26 Downloading six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
#18 14.33 Downloading openwrt_luci_rpc-1.1.17-py2.py3-none-any.whl (9.5 kB)
#18 14.37 Downloading asusrouter-1.21.0-py3-none-any.whl (131 kB)
#18 14.43 Downloading asyncio-4.0.0-py3-none-any.whl (5.6 kB)
#18 14.47 Downloading aiohttp-3.12.15-cp312-cp312-musllinux_1_2_x86_64.whl (1.7 MB)
#18 14.67 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 8.3 MB/s eta 0:00:00
#18 14.68 Downloading graphene-3.4.3-py2.py3-none-any.whl (114 kB)
#18 14.73 Downloading flask-3.1.2-py3-none-any.whl (103 kB)
#18 14.78 Downloading flask_cors-6.0.1-py3-none-any.whl (13 kB)
#18 14.84 Downloading unifi_sm_api-0.2.1-py3-none-any.whl (16 kB)
#18 14.88 Downloading tplink_omada_client-1.4.4-py3-none-any.whl (46 kB)
#18 14.93 Downloading wakeonlan-3.1.0-py3-none-any.whl (5.0 kB)
#18 14.99 Downloading pycryptodome-3.23.0-cp37-abi3-musllinux_1_2_x86_64.whl (2.3 MB)
#18 15.23 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 MB 8.9 MB/s eta 0:00:00
#18 15.24 Downloading requests-2.32.5-py3-none-any.whl (64 kB)
#18 15.30 Downloading paho_mqtt-2.1.0-py3-none-any.whl (67 kB)
#18 15.34 Downloading scapy-2.6.1-py3-none-any.whl (2.4 MB)
#18 15.62 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.4/2.4 MB 8.5 MB/s eta 0:00:00
#18 15.63 Downloading cron_converter-1.2.2-py3-none-any.whl (13 kB)
#18 15.67 Downloading pytz-2025.2-py2.py3-none-any.whl (509 kB)
#18 15.76 Downloading json2table-1.1.5-py2.py3-none-any.whl (8.7 kB)
#18 15.81 Downloading dhcp_leases-0.1.6-py3-none-any.whl (11 kB)
#18 15.86 Downloading pyunifi-2.21-py3-none-any.whl (11 kB)
#18 15.90 Downloading speedtest_cli-2.1.3-py2.py3-none-any.whl (23 kB)
#18 15.95 Downloading chardet-5.2.0-py3-none-any.whl (199 kB)
#18 16.01 Downloading dnspython-2.8.0-py3-none-any.whl (331 kB)
#18 16.10 Downloading librouteros-3.4.1-py3-none-any.whl (16 kB)
#18 16.14 Downloading aiohappyeyeballs-2.6.1-py3-none-any.whl (15 kB)
#18 16.20 Downloading aiosignal-1.4.0-py3-none-any.whl (7.5 kB)
#18 16.24 Downloading attrs-25.3.0-py3-none-any.whl (63 kB)
#18 16.30 Downloading awesomeversion-25.8.0-py3-none-any.whl (15 kB)
#18 16.34 Downloading blinker-1.9.0-py3-none-any.whl (8.5 kB)
#18 16.39 Downloading certifi-2025.8.3-py3-none-any.whl (161 kB)
#18 16.45 Downloading charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_x86_64.whl (153 kB)
#18 16.50 Downloading click-8.3.0-py3-none-any.whl (107 kB)
#18 16.55 Downloading frozenlist-1.7.0-cp312-cp312-musllinux_1_2_x86_64.whl (237 kB)
#18 16.62 Downloading graphql_core-3.2.6-py3-none-any.whl (203 kB)
#18 16.69 Downloading graphql_relay-3.2.0-py3-none-any.whl (16 kB)
#18 16.73 Downloading idna-3.10-py3-none-any.whl (70 kB)
#18 16.79 Downloading itsdangerous-2.2.0-py3-none-any.whl (16 kB)
#18 16.84 Downloading jinja2-3.1.6-py3-none-any.whl (134 kB)
#18 16.96 Downloading MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_x86_64.whl (23 kB)
#18 17.02 Downloading multidict-6.6.4-cp312-cp312-musllinux_1_2_x86_64.whl (251 kB)
#18 17.09 Downloading packaging-25.0-py3-none-any.whl (66 kB)
#18 17.14 Downloading propcache-0.3.2-cp312-cp312-musllinux_1_2_x86_64.whl (222 kB)
#18 17.21 Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
#18 17.28 Downloading toml-0.10.2-py2.py3-none-any.whl (16 kB)
#18 17.33 Downloading typing_extensions-4.15.0-py3-none-any.whl (44 kB)
#18 17.39 Downloading urllib3-2.5.0-py3-none-any.whl (129 kB)
#18 17.44 Downloading werkzeug-3.1.3-py3-none-any.whl (224 kB)
#18 17.51 Downloading xmltodict-1.0.2-py3-none-any.whl (13 kB)
#18 17.56 Downloading yarl-1.20.1-cp312-cp312-musllinux_1_2_x86_64.whl (374 kB)
#18 17.65 Downloading six-1.17.0-py2.py3-none-any.whl (11 kB)
#18 17.77 Building wheels for collected packages: python-nmap, yattag, aiofreepybox
#18 17.77 Building wheel for python-nmap (pyproject.toml): started
#18 17.95 Building wheel for python-nmap (pyproject.toml): finished with status 'done'
#18 17.96 Created wheel for python-nmap: filename=python_nmap-0.7.1-py2.py3-none-any.whl size=20679 sha256=ecd9b14109651cfaa5bf035f90076b9442985cc254fa5f8a49868fc896e86edb
#18 17.96 Stored in directory: /root/.cache/pip/wheels/06/fc/d4/0957e1d9942e696188208772ea0abf909fe6eb3d9dff6e5a9e
#18 17.96 Building wheel for yattag (pyproject.toml): started
#18 18.14 Building wheel for yattag (pyproject.toml): finished with status 'done'
#18 18.14 Created wheel for yattag: filename=yattag-1.16.1-py3-none-any.whl size=15930 sha256=2135fc2034a3847c81eb6a0d7b85608e8272339fa5c1961f87b02dfe6d74d0ad
#18 18.14 Stored in directory: /root/.cache/pip/wheels/d2/2f/52/049ff4f7c8c9c932b2ece7ec800d7facf2a141ac5ab0ce7e51
#18 18.15 Building wheel for aiofreepybox (pyproject.toml): started
#18 18.36 Building wheel for aiofreepybox (pyproject.toml): finished with status 'done'
#18 18.36 Created wheel for aiofreepybox: filename=aiofreepybox-6.0.0-py3-none-any.whl size=60051 sha256=dbdee5350b10b6550ede50bc779381b7f39f1e5d5da889f2ee98cb5a869d3425
#18 18.36 Stored in directory: /tmp/pip-ephem-wheel-cache-93bgc4e2/wheels/3c/d3/ae/fb97a84a29a5fbe8517de58d67e66586505440af35981e0dd3
#18 18.36 Successfully built python-nmap yattag aiofreepybox
#18 18.45 Installing collected packages: yattag, speedtest-cli, pytz, python-nmap, json2table, dhcp-leases, xmltodict, wakeonlan, urllib3, typing-extensions, toml, six, scapy, pycryptodome, propcache, paho-mqtt, packaging, multidict, markupsafe, itsdangerous, idna, graphql-core, frozenlist, dnspython, Click, charset_normalizer, chardet, certifi, blinker, awesomeversion, attrs, asyncio, aiohappyeyeballs, yarl, werkzeug, requests, python-dateutil, librouteros, jinja2, graphql-relay, aiosignal, unifi-sm-api, pyunifi, openwrt-luci-rpc, graphene, flask, cron-converter, aiohttp, tplink-omada-client, flask-cors, asusrouter, aiofreepybox
#18 24.35 Successfully installed Click-8.3.0 aiofreepybox-6.0.0 aiohappyeyeballs-2.6.1 aiohttp-3.12.15 aiosignal-1.4.0 asusrouter-1.21.0 asyncio-4.0.0 attrs-25.3.0 awesomeversion-25.8.0 blinker-1.9.0 certifi-2025.8.3 chardet-5.2.0 charset_normalizer-3.4.3 cron-converter-1.2.2 dhcp-leases-0.1.6 dnspython-2.8.0 flask-3.1.2 flask-cors-6.0.1 frozenlist-1.7.0 graphene-3.4.3 graphql-core-3.2.6 graphql-relay-3.2.0 idna-3.10 itsdangerous-2.2.0 jinja2-3.1.6 json2table-1.1.5 librouteros-3.4.1 markupsafe-3.0.2 multidict-6.6.4 openwrt-luci-rpc-1.1.17 packaging-25.0 paho-mqtt-2.1.0 propcache-0.3.2 pycryptodome-3.23.0 python-dateutil-2.9.0.post0 python-nmap-0.7.1 pytz-2025.2 pyunifi-2.21 requests-2.32.5 scapy-2.6.1 six-1.17.0 speedtest-cli-2.1.3 toml-0.10.2 tplink-omada-client-1.4.4 typing-extensions-4.15.0 unifi-sm-api-0.2.1 urllib3-2.5.0 wakeonlan-3.1.0 werkzeug-3.1.3 xmltodict-1.0.2 yarl-1.20.1 yattag-1.16.1
#18 24.47
#18 24.47 [notice] A new release of pip is available: 25.0.1 -> 25.2
#18 24.47 [notice] To update, run: pip install --upgrade pip
#18 DONE 25.1s
#19 [builder 14/15] RUN bash -c "find /app -type d -exec chmod 750 {} \;" && bash -c "find /app -type f -exec chmod 640 {} \;" && bash -c "find /app -type f \( -name '*.sh' -o -name '*.py' -o -name 'speedtest-cli' \) -exec chmod 750 {} \;"
#19 DONE 11.9s
#20 [builder 15/15] COPY install/freebox_certificate.pem /opt/venv/lib/python3.12/site-packages/aiofreepybox/freebox_certificates.pem
#20 DONE 0.4s
#21 [runner 2/14] COPY --from=builder /opt/venv /opt/venv
#21 DONE 0.8s
#22 [runner 3/14] COPY --from=builder /usr/sbin/usermod /usr/sbin/groupmod /usr/sbin/
#22 DONE 0.4s
#23 [runner 4/14] RUN apk update --no-cache && apk add --no-cache bash libbsd zip lsblk gettext-envsubst sudo mtr tzdata s6-overlay && apk add --no-cache curl arp-scan iproute2 iproute2-ss nmap nmap-scripts traceroute nbtscan avahi avahi-tools openrc dbus net-tools net-snmp-tools bind-tools awake ca-certificates && apk add --no-cache sqlite php83 php83-fpm php83-cgi php83-curl php83-sqlite3 php83-session && apk add --no-cache python3 nginx && ln -s /usr/bin/awake /usr/bin/wakeonlan && bash -c "install -d -m 750 -o nginx -g www-data /app /app" && rm -f /etc/nginx/http.d/default.conf
#23 0.487 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/main/x86_64/APKINDEX.tar.gz
#23 0.696 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/community/x86_64/APKINDEX.tar.gz
#23 1.156 v3.22.1-472-ga67443520d6 [https://dl-cdn.alpinelinux.org/alpine/v3.22/main]
#23 1.156 v3.22.1-473-gcd551a4e006 [https://dl-cdn.alpinelinux.org/alpine/v3.22/community]
#23 1.156 OK: 26326 distinct packages available
#23 1.195 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/main/x86_64/APKINDEX.tar.gz
#23 1.276 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/community/x86_64/APKINDEX.tar.gz
#23 1.568 (1/38) Installing ncurses-terminfo-base (6.5_p20250503-r0)
#23 1.580 (2/38) Installing libncursesw (6.5_p20250503-r0)
#23 1.629 (3/38) Installing readline (8.2.13-r1)
#23 1.659 (4/38) Installing bash (5.2.37-r0)
#23 1.723 Executing bash-5.2.37-r0.post-install
#23 1.740 (5/38) Installing libintl (0.24.1-r0)
#23 1.749 (6/38) Installing gettext-envsubst (0.24.1-r0)
#23 1.775 (7/38) Installing libmd (1.1.0-r0)
#23 1.782 (8/38) Installing libbsd (0.12.2-r0)
#23 1.807 (9/38) Installing libeconf (0.6.3-r0)
#23 1.812 (10/38) Installing libblkid (2.41-r9)
#23 1.831 (11/38) Installing libmount (2.41-r9)
#23 1.857 (12/38) Installing libsmartcols (2.41-r9)
#23 1.872 (13/38) Installing lsblk (2.41-r9)
#23 1.886 (14/38) Installing libcap2 (2.76-r0)
#23 1.897 (15/38) Installing jansson (2.14.1-r0)
#23 1.910 (16/38) Installing mtr (0.96-r0)
#23 1.948 (17/38) Installing skalibs-libs (2.14.4.0-r0)
#23 1.966 (18/38) Installing execline-libs (2.9.7.0-r0)
#23 1.974 (19/38) Installing execline (2.9.7.0-r0)
#23 1.996 Executing execline-2.9.7.0-r0.post-install
#23 2.004 (20/38) Installing s6-ipcserver (2.13.2.0-r0)
#23 2.010 (21/38) Installing s6-libs (2.13.2.0-r0)
#23 2.016 (22/38) Installing s6 (2.13.2.0-r0)
#23 2.033 Executing s6-2.13.2.0-r0.pre-install
#23 2.159 (23/38) Installing s6-rc-libs (0.5.6.0-r0)
#23 2.164 (24/38) Installing s6-rc (0.5.6.0-r0)
#23 2.175 (25/38) Installing s6-linux-init (1.1.3.0-r0)
#23 2.185 (26/38) Installing s6-portable-utils (2.3.1.0-r0)
#23 2.193 (27/38) Installing s6-linux-utils (2.6.3.0-r0)
#23 2.200 (28/38) Installing s6-dns-libs (2.4.1.0-r0)
#23 2.208 (29/38) Installing s6-dns (2.4.1.0-r0)
#23 2.222 (30/38) Installing bearssl-libs (0.6_git20241009-r0)
#23 2.254 (31/38) Installing s6-networking-libs (2.7.1.0-r0)
#23 2.264 (32/38) Installing s6-networking (2.7.1.0-r0)
#23 2.286 (33/38) Installing s6-overlay-helpers (0.1.2.0-r0)
#23 2.355 (34/38) Installing s6-overlay (3.2.0.3-r0)
#23 2.380 (35/38) Installing sudo (1.9.17_p2-r0)
#23 2.511 (36/38) Installing tzdata (2025b-r0)
#23 2.641 (37/38) Installing unzip (6.0-r15)
#23 2.659 (38/38) Installing zip (3.0-r13)
#23 2.694 Executing busybox-1.37.0-r18.trigger
#23 2.725 OK: 16 MiB in 54 packages
#23 2.778 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/main/x86_64/APKINDEX.tar.gz
#23 2.918 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/community/x86_64/APKINDEX.tar.gz
#23 3.218 (1/77) Installing libpcap (1.10.5-r1)
#23 3.234 (2/77) Installing arp-scan (1.10.0-r2)
#23 3.289 (3/77) Installing dbus-libs (1.16.2-r1)
#23 3.307 (4/77) Installing avahi-libs (0.8-r21)
#23 3.315 (5/77) Installing libdaemon (0.14-r6)
#23 3.322 (6/77) Installing libevent (2.1.12-r8)
#23 3.355 (7/77) Installing libexpat (2.7.2-r0)
#23 3.368 (8/77) Installing avahi (0.8-r21)
#23 3.387 Executing avahi-0.8-r21.pre-install
#23 3.465 (9/77) Installing gdbm (1.24-r0)
#23 3.477 (10/77) Installing avahi-tools (0.8-r21)
#23 3.483 (11/77) Installing libbz2 (1.0.8-r6)
#23 3.490 (12/77) Installing libffi (3.4.8-r0)
#23 3.496 (13/77) Installing xz-libs (5.8.1-r0)
#23 3.517 (14/77) Installing libgcc (14.2.0-r6)
#23 3.529 (15/77) Installing libstdc++ (14.2.0-r6)
#23 3.613 (16/77) Installing mpdecimal (4.0.1-r0)
#23 3.628 (17/77) Installing libpanelw (6.5_p20250503-r0)
#23 3.634 (18/77) Installing sqlite-libs (3.49.2-r1)
#23 3.783 (19/77) Installing python3 (3.12.11-r0)
#23 4.494 (20/77) Installing python3-pycache-pyc0 (3.12.11-r0)
#23 4.915 (21/77) Installing pyc (3.12.11-r0)
#23 4.915 (22/77) Installing py3-awake-pyc (1.0-r12)
#23 4.922 (23/77) Installing python3-pyc (3.12.11-r0)
#23 4.922 (24/77) Installing py3-awake (1.0-r12)
#23 4.928 (25/77) Installing awake (1.0-r12)
#23 4.932 (26/77) Installing fstrm (0.6.1-r4)
#23 4.940 (27/77) Installing krb5-conf (1.0-r2)
#23 5.017 (28/77) Installing libcom_err (1.47.2-r2)
#23 5.026 (29/77) Installing keyutils-libs (1.6.3-r4)
#23 5.033 (30/77) Installing libverto (0.3.2-r2)
#23 5.039 (31/77) Installing krb5-libs (1.21.3-r0)
#23 5.115 (32/77) Installing json-c (0.18-r1)
#23 5.123 (33/77) Installing nghttp2-libs (1.65.0-r0)
#23 5.136 (34/77) Installing protobuf-c (1.5.2-r0)
#23 5.142 (35/77) Installing userspace-rcu (0.15.2-r0)
#23 5.161 (36/77) Installing libuv (1.51.0-r0)
#23 5.178 (37/77) Installing libxml2 (2.13.8-r0)
#23 5.232 (38/77) Installing bind-libs (9.20.13-r0)
#23 5.355 (39/77) Installing bind-tools (9.20.13-r0)
#23 5.395 (40/77) Installing ca-certificates (20250619-r0)
#23 5.518 (41/77) Installing brotli-libs (1.1.0-r2)
#23 5.559 (42/77) Installing c-ares (1.34.5-r0)
#23 5.573 (43/77) Installing libunistring (1.3-r0)
#23 5.645 (44/77) Installing libidn2 (2.3.7-r0)
#23 5.664 (45/77) Installing libpsl (0.21.5-r3)
#23 5.676 (46/77) Installing zstd-libs (1.5.7-r0)
#23 5.720 (47/77) Installing libcurl (8.14.1-r1)
#23 5.753 (48/77) Installing curl (8.14.1-r1)
#23 5.778 (49/77) Installing dbus (1.16.2-r1)
#23 5.796 Executing dbus-1.16.2-r1.pre-install
#23 5.869 Executing dbus-1.16.2-r1.post-install
#23 5.887 (50/77) Installing dbus-daemon-launch-helper (1.16.2-r1)
#23 5.896 (51/77) Installing libelf (0.193-r0)
#23 5.908 (52/77) Installing libmnl (1.0.5-r2)
#23 5.915 (53/77) Installing iproute2-minimal (6.15.0-r0)
#23 5.954 (54/77) Installing libxtables (1.8.11-r1)
#23 5.963 (55/77) Installing iproute2-tc (6.15.0-r0)
#23 6.001 (56/77) Installing iproute2-ss (6.15.0-r0)
#23 6.014 (57/77) Installing iproute2 (6.15.0-r0)
#23 6.042 Executing iproute2-6.15.0-r0.post-install
#23 6.047 (58/77) Installing nbtscan (1.7.2-r0)
#23 6.053 (59/77) Installing net-snmp-libs (5.9.4-r1)
#23 6.112 (60/77) Installing net-snmp-agent-libs (5.9.4-r1)
#23 6.179 (61/77) Installing net-snmp-tools (5.9.4-r1)
#23 6.205 (62/77) Installing mii-tool (2.10-r3)
#23 6.211 (63/77) Installing net-tools (2.10-r3)
#23 6.235 (64/77) Installing lua5.4-libs (5.4.7-r0)
#23 6.258 (65/77) Installing libssh2 (1.11.1-r0)
#23 6.279 (66/77) Installing nmap (7.97-r0)
#23 6.524 (67/77) Installing nmap-nselibs (7.97-r0)
#23 6.729 (68/77) Installing nmap-scripts (7.97-r0)
#23 6.842 (69/77) Installing bridge (1.5-r5)
#23 6.904 (70/77) Installing ifupdown-ng (0.12.1-r7)
#23 6.915 (71/77) Installing ifupdown-ng-iproute2 (0.12.1-r7)
#23 6.920 (72/77) Installing openrc-user (0.62.6-r0)
#23 6.924 (73/77) Installing openrc (0.62.6-r0)
#23 7.013 Executing openrc-0.62.6-r0.post-install
#23 7.016 (74/77) Installing avahi-openrc (0.8-r21)
#23 7.021 (75/77) Installing dbus-openrc (1.16.2-r1)
#23 7.026 (76/77) Installing s6-openrc (2.13.2.0-r0)
#23 7.032 (77/77) Installing traceroute (2.1.6-r0)
#23 7.040 Executing busybox-1.37.0-r18.trigger
#23 7.042 Executing ca-certificates-20250619-r0.trigger
#23 7.101 Executing dbus-1.16.2-r1.trigger
#23 7.104 OK: 102 MiB in 131 packages
#23 7.156 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/main/x86_64/APKINDEX.tar.gz
#23 7.243 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/community/x86_64/APKINDEX.tar.gz
#23 7.543 (1/12) Installing php83-common (8.3.24-r0)
#23 7.551 (2/12) Installing argon2-libs (20190702-r5)
#23 7.557 (3/12) Installing libedit (20250104.3.1-r1)
#23 7.568 (4/12) Installing pcre2 (10.43-r1)
#23 7.600 (5/12) Installing php83 (8.3.24-r0)
#23 7.777 (6/12) Installing php83-cgi (8.3.24-r0)
#23 7.953 (7/12) Installing php83-curl (8.3.24-r0)
#23 7.968 (8/12) Installing acl-libs (2.3.2-r1)
#23 7.975 (9/12) Installing php83-fpm (8.3.24-r0)
#23 8.193 (10/12) Installing php83-session (8.3.24-r0)
#23 8.204 (11/12) Installing php83-sqlite3 (8.3.24-r0)
#23 8.213 (12/12) Installing sqlite (3.49.2-r1)
#23 8.309 Executing busybox-1.37.0-r18.trigger
#23 8.317 OK: 129 MiB in 143 packages
#23 8.369 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/main/x86_64/APKINDEX.tar.gz
#23 8.449 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/community/x86_64/APKINDEX.tar.gz
#23 8.747 (1/2) Installing nginx (1.28.0-r3)
#23 8.766 Executing nginx-1.28.0-r3.pre-install
#23 8.863 Executing nginx-1.28.0-r3.post-install
#23 8.865 (2/2) Installing nginx-openrc (1.28.0-r3)
#23 8.870 Executing busybox-1.37.0-r18.trigger
#23 8.873 OK: 130 MiB in 145 packages
#23 DONE 9.5s
#24 [runner 5/14] COPY --from=builder --chown=nginx:www-data /app/ /app/
#24 DONE 0.5s
#25 [runner 6/14] RUN mkdir -p /app/config /app/db /app/log/plugins
#25 DONE 0.5s
#26 [runner 7/14] COPY --chmod=600 --chown=root:root install/crontab /etc/crontabs/root
#26 DONE 0.3s
#27 [runner 8/14] COPY --chmod=755 dockerfiles/healthcheck.sh /usr/local/bin/healthcheck.sh
#27 DONE 0.3s
#28 [runner 9/14] RUN touch /app/log/app.log && touch /app/log/execution_queue.log && touch /app/log/app_front.log && touch /app/log/app.php_errors.log && touch /app/log/stderr.log && touch /app/log/stdout.log && touch /app/log/db_is_locked.log && touch /app/log/IP_changes.log && touch /app/log/report_output.txt && touch /app/log/report_output.html && touch /app/log/report_output.json && touch /app/api/user_notifications.json
#28 DONE 0.6s
#29 [runner 10/14] COPY dockerfiles /app/dockerfiles
#29 DONE 0.3s
#30 [runner 11/14] RUN chmod +x /app/dockerfiles/*.sh
#30 DONE 0.8s
#31 [runner 12/14] RUN /app/dockerfiles/init-nginx.sh && /app/dockerfiles/init-php-fpm.sh && /app/dockerfiles/init-crond.sh && /app/dockerfiles/init-backend.sh
#31 0.417 Initializing nginx...
#31 0.417 Setting webserver to address (0.0.0.0) and port (20211)
#31 0.418 /app/dockerfiles/init-nginx.sh: line 5: /app/install/netalertx.template.conf: No such file or directory
#31 0.611 nginx initialized.
#31 0.612 Initializing php-fpm...
#31 0.654 php-fpm initialized.
#31 0.655 Initializing crond...
#31 0.689 crond initialized.
#31 0.690 Initializing backend...
#31 12.19 Backend initialized.
#31 DONE 12.3s
#32 [runner 13/14] RUN rm -rf /app/dockerfiles
#32 DONE 0.6s
#33 [runner 14/14] RUN date +%s > /app/front/buildtimestamp.txt
#33 DONE 0.6s
#34 exporting to image
#34 exporting layers
#34 exporting layers 2.4s done
#34 writing image sha256:0afcbc41473de559eff0dd93250595494fe4d8ea620861e9e90d50a248fcefda 0.0s done
#34 naming to docker.io/library/netalertx 0.0s done
#34 DONE 2.5s

View File

@@ -1,674 +0,0 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

View File

@@ -1,169 +0,0 @@
#!/usr/bin/with-contenv bash
echo "---------------------------------------------------------
[INSTALL] Run init.sh
---------------------------------------------------------"
DEFAULT_PUID=102
DEFAULT_GID=82
PUID=${PUID:-${DEFAULT_PUID}}
PGID=${PGID:-${DEFAULT_GID}}
echo "[INSTALL] Setting up user UID and GID"
if ! groupmod -o -g "$PGID" www-data && [ "$PGID" != "$DEFAULT_GID" ] ; then
echo "Failed to set user GID to ${PGID}, trying with default GID ${DEFAULT_GID}"
groupmod -o -g "$DEFAULT_GID" www-data
fi
if ! usermod -o -u "$PUID" nginx && [ "$PUID" != "$DEFAULT_PUID" ] ; then
echo "Failed to set user UID to ${PUID}, trying with default PUID ${DEFAULT_PUID}"
usermod -o -u "$DEFAULT_PUID" nginx
fi
echo "
---------------------------------------------------------
GID/UID
---------------------------------------------------------
User UID: $(id -u nginx)
User GID: $(getent group www-data | cut -d: -f3)
---------------------------------------------------------"
chown nginx:nginx /run/nginx/ /var/log/nginx/ /var/lib/nginx/ /var/lib/nginx/tmp/
chgrp www-data /var/www/localhost/htdocs/
export INSTALL_DIR=/app # Specify the installation directory here
# DO NOT CHANGE ANYTHING BELOW THIS LINE!
CONF_FILE="app.conf"
NGINX_CONF_FILE=netalertx.conf
DB_FILE="app.db"
FULL_FILEDB_PATH="${INSTALL_DIR}/db/${DB_FILE}"
NGINX_CONFIG_FILE="/etc/nginx/http.d/${NGINX_CONF_FILE}"
OUI_FILE="/usr/share/arp-scan/ieee-oui.txt" # Define the path to ieee-oui.txt and ieee-iab.txt
INSTALL_DIR_OLD=/home/pi/pialert
OLD_APP_NAME=pialert
# DO NOT CHANGE ANYTHING ABOVE THIS LINE!
# Check if script is run as root
if [[ $EUID -ne 0 ]]; then
echo "This script must be run as root. Please use 'sudo'."
exit 1
fi
# DANGER ZONE: ALWAYS_FRESH_INSTALL
if [ "$ALWAYS_FRESH_INSTALL" = true ]; then
echo "[INSTALL] ❗ ALERT /db and /config folders are cleared because the ALWAYS_FRESH_INSTALL is set to: $ALWAYS_FRESH_INSTALL"
# Delete content of "$INSTALL_DIR/config/"
rm -rf "$INSTALL_DIR/config/"*
rm -rf "$INSTALL_DIR_OLD/config/"*
# Delete content of "$INSTALL_DIR/db/"
rm -rf "$INSTALL_DIR/db/"*
rm -rf "$INSTALL_DIR_OLD/db/"*
fi
# OVERRIDE settings: Handling APP_CONF_OVERRIDE
# Check if APP_CONF_OVERRIDE is set
# remove old
rm "${INSTALL_DIR}/config/app_conf_override.json"
if [ -z "$APP_CONF_OVERRIDE" ]; then
echo "APP_CONF_OVERRIDE is not set. Skipping config file creation."
else
# Save the APP_CONF_OVERRIDE env variable as a JSON file
echo "$APP_CONF_OVERRIDE" > "${INSTALL_DIR}/config/app_conf_override.json"
echo "Config file saved to ${INSTALL_DIR}/config/app_conf_override.json"
fi
# 🔻 FOR BACKWARD COMPATIBILITY - REMOVE AFTER 12/12/2025
# Check if pialert.db exists, then create a symbolic link to app.db
if [ -f "${INSTALL_DIR_OLD}/db/${OLD_APP_NAME}.db" ]; then
ln -s "${INSTALL_DIR_OLD}/db/${OLD_APP_NAME}.db" "${FULL_FILEDB_PATH}"
fi
# Check if ${OLD_APP_NAME}.conf exists, then create a symbolic link to app.conf
if [ -f "${INSTALL_DIR_OLD}/config/${OLD_APP_NAME}.conf" ]; then
ln -s "${INSTALL_DIR_OLD}/config/${OLD_APP_NAME}.conf" "${INSTALL_DIR}/config/${CONF_FILE}"
fi
# 🔺 FOR BACKWARD COMPATIBILITY - REMOVE AFTER 12/12/2025
echo "[INSTALL] Copy starter ${DB_FILE} and ${CONF_FILE} if they don't exist"
# Copy starter app.db, app.conf if they don't exist
cp -na "${INSTALL_DIR}/back/${CONF_FILE}" "${INSTALL_DIR}/config/${CONF_FILE}"
cp -na "${INSTALL_DIR}/back/${DB_FILE}" "${FULL_FILEDB_PATH}"
# if custom variables not set we do not need to do anything
if [ -n "${TZ}" ]; then
FILECONF="${INSTALL_DIR}/config/${CONF_FILE}"
echo "[INSTALL] Setup timezone"
sed -i "\#^TIMEZONE=#c\TIMEZONE='${TZ}'" "${FILECONF}"
# set TimeZone in container
cp /usr/share/zoneinfo/$TZ /etc/localtime
echo $TZ > /etc/timezone
fi
# if custom variables not set we do not need to do anything
if [ -n "${LOADED_PLUGINS}" ]; then
FILECONF="${INSTALL_DIR}/config/${CONF_FILE}"
echo "[INSTALL] Setup custom LOADED_PLUGINS variable"
sed -i "\#^LOADED_PLUGINS=#c\LOADED_PLUGINS=${LOADED_PLUGINS}" "${FILECONF}"
fi
echo "[INSTALL] Setup NGINX"
echo "Setting webserver to address ($LISTEN_ADDR) and port ($PORT)"
envsubst '$INSTALL_DIR $LISTEN_ADDR $PORT' < "${INSTALL_DIR}/install/netalertx.template.conf" > "${NGINX_CONFIG_FILE}"
# Run the hardware vendors update at least once
echo "[INSTALL] Run the hardware vendors update"
# Check if ieee-oui.txt or ieee-iab.txt exist
if [ -f "${OUI_FILE}" ]; then
echo "The file ieee-oui.txt exists. Skipping update_vendors.sh..."
else
echo "The file ieee-oui.txt does not exist. Running update_vendors..."
# Run the update_vendors.sh script
if [ -f "${INSTALL_DIR}/back/update_vendors.sh" ]; then
"${INSTALL_DIR}/back/update_vendors.sh"
else
echo "update_vendors.sh script not found in ${INSTALL_DIR}."
fi
fi
# Create empty log files
# Create the execution_queue.log and app_front.log files if they don't exist
touch "${INSTALL_DIR}"/log/{app.log,execution_queue.log,app_front.log,app.php_errors.log,stderr.log,stdout.log,db_is_locked.log}
touch "${INSTALL_DIR}"/api/user_notifications.json
# Create plugins sub-directory if it doesn't exist in case a custom log folder is used
mkdir -p "${INSTALL_DIR}"/log/plugins
echo "[INSTALL] Fixing permissions after copied starter config & DB"
chown -R nginx:www-data "${INSTALL_DIR}"
chmod 750 "${INSTALL_DIR}"/{config,log,db}
find "${INSTALL_DIR}"/{config,log,db} -type f -exec chmod 640 {} \;
# Check if buildtimestamp.txt doesn't exist
if [ ! -f "${INSTALL_DIR}/front/buildtimestamp.txt" ]; then
# Create buildtimestamp.txt
date +%s > "${INSTALL_DIR}/front/buildtimestamp.txt"
chown nginx:www-data "${INSTALL_DIR}/front/buildtimestamp.txt"
fi
echo -e "
[ENV] PATH is ${PATH}
[ENV] PORT is ${PORT}
[ENV] TZ is ${TZ}
[ENV] LISTEN_ADDR is ${LISTEN_ADDR}
[ENV] ALWAYS_FRESH_INSTALL is ${ALWAYS_FRESH_INSTALL}
"

View File

@@ -1,49 +0,0 @@
#!/bin/bash
export INSTALL_DIR=/app
export APP_NAME=netalertx
# php-fpm setup
install -d -o nginx -g www-data /run/php/
sed -i "/^;pid/c\pid = /run/php/php8.3-fpm.pid" /etc/php83/php-fpm.conf
sed -i "/^listen/c\listen = /run/php/php8.3-fpm.sock" /etc/php83/php-fpm.d/www.conf
sed -i "/^;listen.owner/c\listen.owner = nginx" /etc/php83/php-fpm.d/www.conf
sed -i "/^;listen.group/c\listen.group = www-data" /etc/php83/php-fpm.d/www.conf
sed -i "/^user/c\user = nginx" /etc/php83/php-fpm.d/www.conf
sed -i "/^group/c\group = www-data" /etc/php83/php-fpm.d/www.conf
# s6 overlay setup
mkdir -p /etc/s6-overlay/s6-rc.d/{SetupOneshot,crond/dependencies.d,php-fpm/dependencies.d,nginx/dependencies.d,$APP_NAME/dependencies.d}
echo "oneshot" > /etc/s6-overlay/s6-rc.d/SetupOneshot/type
echo "longrun" > /etc/s6-overlay/s6-rc.d/crond/type
echo "longrun" > /etc/s6-overlay/s6-rc.d/php-fpm/type
echo "longrun" > /etc/s6-overlay/s6-rc.d/nginx/type
echo "longrun" > /etc/s6-overlay/s6-rc.d/$APP_NAME/type
echo -e "${INSTALL_DIR}/dockerfiles/init.sh" > /etc/s6-overlay/s6-rc.d/SetupOneshot/up
echo -e '#!/bin/execlineb -P
if { echo
"
[INSTALL] Starting crond service...
" }' > /etc/s6-overlay/s6-rc.d/crond/run
echo -e "/usr/sbin/crond -f" >> /etc/s6-overlay/s6-rc.d/crond/run
echo -e "#!/bin/execlineb -P\n/usr/sbin/php-fpm83 -F" > /etc/s6-overlay/s6-rc.d/php-fpm/run
echo -e '#!/bin/execlineb -P\nnginx -g "daemon off;"' > /etc/s6-overlay/s6-rc.d/nginx/run
echo -e '#!/bin/execlineb -P
with-contenv
importas -u PORT PORT
if { echo
"
[INSTALL] 🚀 Starting app (:${PORT})
" }' > /etc/s6-overlay/s6-rc.d/$APP_NAME/run
echo -e "python ${INSTALL_DIR}/server" >> /etc/s6-overlay/s6-rc.d/$APP_NAME/run
touch /etc/s6-overlay/s6-rc.d/user/contents.d/{SetupOneshot,crond,php-fpm,nginx,$APP_NAME} /etc/s6-overlay/s6-rc.d/{crond,php-fpm,nginx,$APP_NAME}/dependencies.d/SetupOneshot
touch /etc/s6-overlay/s6-rc.d/nginx/dependencies.d/php-fpm
touch /etc/s6-overlay/s6-rc.d/$APP_NAME/dependencies.d/nginx
# this removes the current file
rm -f $0

View File

@@ -1,4 +1,4 @@
# NetAlertX API Documentation
# API Documentation
This API provides programmatic access to **devices, events, sessions, metrics, network tools, and sync** in NetAlertX. It is implemented as a **REST and GraphQL server**. All requests require authentication via an **API Token** (`API_TOKEN` setting) unless explicitly noted. For example, to authorize a GraphQL request, you need to send an `Authorization: Bearer API_TOKEN` header, as in the example below:
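A minimal sketch of such an authorized request; `<server>`, `<GRAPHQL_PORT>`, and `API_TOKEN` are placeholders, and the query body simply reuses the LangStrings example from the [GraphQL](API_GRAPHQL.md) page:
```sh
curl 'http://<server>:<GRAPHQL_PORT>/graphql' \
  -X POST \
  -H 'Authorization: Bearer API_TOKEN' \
  -H 'Content-Type: application/json' \
  --data '{"query": "query { langStrings(langCode: \"en_us\", langStringKey: \"settings_other_scanners\") { count } }"}'
```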
@@ -59,11 +59,14 @@ http://<server>:<GRAPHQL_PORT>/
* [Events](API_EVENTS.md) Device event logging and management
* [Sessions](API_SESSIONS.md) Connection sessions and history
* [Settings](API_SETTINGS.md) Settings
* Messaging:
* [In-app messaging](API_MESSAGING_IN_APP.md) Write, retrieve, and delete in-app notifications
* [Metrics](API_METRICS.md) Prometheus metrics and per-device status
* [Network Tools](API_NETTOOLS.md) Utilities like Wake-on-LAN, traceroute, nslookup, nmap, and internet info
* [Online History](API_ONLINEHISTORY.md) Online/offline device records
* [GraphQL](API_GRAPHQL.md) Advanced queries and filtering
* [GraphQL](API_GRAPHQL.md) Advanced queries and filtering for Devices, Settings and Language Strings
* [Sync](API_SYNC.md) Synchronization between multiple NetAlertX instances
* [Logs](API_LOGS.md) Purging of logs and adding to the event execution queue for user triggered events
* [DB query](API_DBQUERY.md) (⚠ Internal) - Low level database access - use other endpoints if possible
See [Testing](API_TESTS.md) for example requests and usage.

View File

@@ -1,9 +1,10 @@
# GraphQL API Endpoint
GraphQL queries are **read-optimized for speed**. Data may be slightly out of date until the file system cache refreshes. The GraphQL endpoints allows you to access the following objects:
GraphQL queries are **read-optimized for speed**. Data may be slightly out of date until the file system cache refreshes. The GraphQL endpoints allow you to access the following objects:
- Devices
- Settings
* Devices
* Settings
* Language Strings (LangStrings)
## Endpoints
@@ -190,11 +191,74 @@ curl 'http://host:GRAPHQL_PORT/graphql' \
}
```
---
## LangStrings Query
The **LangStrings query** provides access to localized strings. It supports filtering by `langCode` and `langStringKey`. If the requested string is missing or empty, you can optionally fall back to `en_us`.
### Sample Query
```graphql
query GetLangStrings {
langStrings(langCode: "de_de", langStringKey: "settings_other_scanners") {
langStrings {
langCode
langStringKey
langStringText
}
count
}
}
```
### Query Parameters
| Parameter | Type | Description |
| ---------------- | ------- | ---------------------------------------------------------------------------------------- |
| `langCode` | String | Optional language code (e.g., `en_us`, `de_de`). If omitted, all languages are returned. |
| `langStringKey` | String | Optional string key to retrieve a specific entry. |
| `fallback_to_en` | Boolean | Optional (default `true`). If `true`, empty or missing strings fall back to `en_us`. |
### `curl` Example
```sh
curl 'http://host:GRAPHQL_PORT/graphql' \
-X POST \
-H 'Authorization: Bearer API_TOKEN' \
-H 'Content-Type: application/json' \
--data '{
"query": "query GetLangStrings { langStrings(langCode: \"de_de\", langStringKey: \"settings_other_scanners\") { langStrings { langCode langStringKey langStringText } count } }"
}'
```
### Sample Response
```json
{
"data": {
"langStrings": {
"count": 1,
"langStrings": [
{
"langCode": "de_de",
"langStringKey": "settings_other_scanners",
"langStringText": "Other, non-device scanner plugins that are currently enabled." // falls back to en_us if empty
}
]
}
}
}
```
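Since `fallback_to_en` defaults to `true`, disabling the English fallback has to be requested explicitly. A hedged sketch, assuming the argument is passed inline under the same name as in the parameter table above:
```sh
curl 'http://host:GRAPHQL_PORT/graphql' \
  -X POST \
  -H 'Authorization: Bearer API_TOKEN' \
  -H 'Content-Type: application/json' \
  --data '{
    "query": "query { langStrings(langCode: \"de_de\", langStringKey: \"settings_other_scanners\", fallback_to_en: false) { langStrings { langCode langStringText } count } }"
  }'
```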
---
## Notes
* Device and settings queries can be combined in one request since GraphQL supports batching.
* Device, settings, and LangStrings queries can be combined in **one request** since GraphQL supports batching.
* The `fallback_to_en` feature ensures the UI always has a value even if a translation is missing.
* Data is **cached in memory** per JSON file; changes to language or plugin files will only refresh after the cache detects a file modification.
* The `setOverriddenByEnv` flag helps identify setting values that are locked at container runtime.
* The schema is **read-only** — updates must be performed through other APIs or configuration management. See the other [API](API.md) endpoints for details.

179
docs/API_LOGS.md Normal file
View File

@@ -0,0 +1,179 @@
# Logs API Endpoints
Purge application log files stored under `/app/log` and manage the execution queue. These endpoints are primarily used for maintenance tasks such as clearing accumulated logs or queuing system actions without restarting the container.
Only specific, pre-approved log files can be purged for security and stability reasons.
---
## Delete (Purge) a Log File
* **DELETE** `/logs?file=<log_file>` → Purge the contents of an allowed log file.
**Query Parameter:**
* `file` → The name of the log file to purge (e.g., `app.log`, `stdout.log`)
**Allowed Files:**
```
app.log
app_front.log
IP_changes.log
stdout.log
stderr.log
app.php_errors.log
execution_queue.log
db_is_locked.log
```
**Authorization:**
Requires a valid API token in the `Authorization` header.
---
### `curl` Example (Success)
```sh
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=app.log' \
-H 'Authorization: Bearer <API_TOKEN>' \
-H 'Accept: application/json'
```
**Response:**
```json
{
"success": true,
"message": "[clean_log] File app.log purged successfully"
}
```
---
### `curl` Example (Not Allowed)
```sh
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=not_allowed.log' \
-H 'Authorization: Bearer <API_TOKEN>' \
-H 'Accept: application/json'
```
**Response:**
```json
{
"success": false,
"message": "[clean_log] File not_allowed.log is not allowed to be purged"
}
```
---
### `curl` Example (Unauthorized)
```sh
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=app.log' \
-H 'Accept: application/json'
```
**Response:**
```json
{
"error": "Forbidden"
}
```
---
## Add an Action to the Execution Queue
* **POST** `/logs/add-to-execution-queue` → Add a system action to the execution queue.
**Request Body (JSON):**
```json
{
"action": "update_api|devices"
}
```
**Authorization:**
Requires a valid API token in the `Authorization` header.
---
### `curl` Example (Success)
The example below updates the API cache for Devices:
```sh
curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \
-H 'Authorization: Bearer <API_TOKEN>' \
-H 'Content-Type: application/json' \
--data '{"action": "update_api|devices"}'
```
**Response:**
```json
{
"success": true,
"message": "[UserEventsQueueInstance] Action \"update_api|devices\" added to the execution queue."
}
```
---
### `curl` Example (Missing Parameter)
```sh
curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \
-H 'Authorization: Bearer <API_TOKEN>' \
-H 'Content-Type: application/json' \
--data '{}'
```
**Response:**
```json
{
"success": false,
"message": "Missing parameters",
"error": "Missing required 'action' field in JSON body"
}
```
---
### `curl` Example (Unauthorized)
```sh
curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \
-H 'Content-Type: application/json' \
--data '{"action": "update_api|devices"}'
```
**Response:**
```json
{
"error": "Forbidden"
}
```
---
## Notes
* Only predefined files in `/app/log` can be purged — arbitrary paths are **not permitted**.
* When a log file is purged:
* Its content is replaced with a short marker text: `"File manually purged"`.
* A backend log entry is created via `mylog()`.
* A frontend notification is generated via `write_notification()`.
* Execution queue actions are appended to `execution_queue.log` and can be processed asynchronously by background tasks or workflows.
* Unauthorized or invalid attempts are safely logged and rejected.
* For advanced log retrieval, analysis, or structured querying, use the frontend log viewer.
* Always ensure that sensitive or production logs are handled carefully — purging cannot be undone.
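For routine maintenance, several purge calls can be chained in a small loop. A minimal sketch, assuming `SERVER_IP`, `GRAPHQL_PORT`, and `API_TOKEN` are exported in the shell and only names from the allow-list above are used:
```sh
# Purge a handful of the allowed log files in one pass.
for f in app.log app_front.log stdout.log stderr.log; do
  curl -s -X DELETE "http://${SERVER_IP}:${GRAPHQL_PORT}/logs?file=${f}" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H 'Accept: application/json'
  echo
done
```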

173
docs/API_MESSAGING_IN_APP.md Executable file
View File

@@ -0,0 +1,173 @@
# In-app Notifications API
Manage in-app notifications for users. Notifications can be written, retrieved, marked as read, or deleted.
---
### Write Notification
* **POST** `/messaging/in-app/write` → Create a new in-app notification.
**Request Body:**
```json
{
"content": "This is a test notification",
"level": "alert" // optional, ["interrupt","info","alert"] default: "alert"
}
```
**Response:**
```json
{
"success": true
}
```
#### `curl` Example
```bash
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/write" \
-H "Authorization: Bearer <API_TOKEN>" \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-d '{
"content": "This is a test notification",
"level": "alert"
}'
```
---
### Get Unread Notifications
* **GET** `/messaging/in-app/unread` → Retrieve all unread notifications.
**Response:**
```json
[
{
"timestamp": "2025-10-10T12:34:56",
"guid": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
"read": 0,
"level": "alert",
"content": "This is a test notification"
}
]
```
#### `curl` Example
```bash
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/unread" \
-H "Authorization: Bearer <API_TOKEN>" \
-H "Accept: application/json"
```
---
### Mark All Notifications as Read
* **POST** `/messaging/in-app/read/all` → Mark all notifications as read.
**Response:**
```json
{
"success": true
}
```
#### `curl` Example
```bash
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/read/all" \
-H "Authorization: Bearer <API_TOKEN>" \
-H "Accept: application/json"
```
---
### Mark Single Notification as Read
* **POST** `/messaging/in-app/read/<guid>` → Mark a single notification as read using its GUID.
**Response (success):**
```json
{
"success": true
}
```
**Response (failure):**
```json
{
"success": false,
"error": "Notification not found"
}
```
#### `curl` Example
```bash
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/read/f47ac10b-58cc-4372-a567-0e02b2c3d479" \
-H "Authorization: Bearer <API_TOKEN>" \
-H "Accept: application/json"
```
---
### Delete All Notifications
* **DELETE** `/messaging/in-app/delete` → Remove all notifications from the system.
**Response:**
```json
{
"success": true
}
```
#### `curl` Example
```bash
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/delete" \
-H "Authorization: Bearer <API_TOKEN>" \
-H "Accept: application/json"
```
---
### Delete Single Notification
* **DELETE** `/messaging/in-app/delete/<guid>` → Remove a single notification by its GUID.
**Response (success):**
```json
{
"success": true
}
```
**Response (failure):**
```json
{
"success": false,
"error": "Notification not found"
}
```
#### `curl` Example
```bash
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/delete/f47ac10b-58cc-4372-a567-0e02b2c3d479" \
-H "Authorization: Bearer <API_TOKEN>" \
-H "Accept: application/json"
```
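The endpoints compose well in scripts. A minimal sketch that posts an `info` level notification and then clears the unread state, assuming `SERVER_IP`, `GRAPHQL_PORT`, and `API_TOKEN` are set in the environment:
```bash
# Post a notification, then mark all notifications as read.
curl -s -X POST "http://${SERVER_IP}:${GRAPHQL_PORT}/messaging/in-app/write" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"content": "Backup finished", "level": "info"}'

curl -s -X POST "http://${SERVER_IP}:${GRAPHQL_PORT}/messaging/in-app/read/all" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Accept: application/json"
```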

View File

@@ -52,7 +52,7 @@ query GetDevices($options: PageQueryOptionsInput) {
}
```
See also: [Debugging GraphQL issues](./DEBUG_GRAPHQL.md)
See also: [Debugging GraphQL issues](./DEBUG_API_SERVER.md)
### `curl` Command
@@ -141,7 +141,7 @@ The endpoints are updated when objects in the API endpoints are changed.
### Location of the endpoints
In the container, these files are located under the `/app/api/` folder. You can access them via the `/php/server/query_json.php?file=user_notifications.json` endpoint.
In the container, these files are located under the API directory (default: `/tmp/api/`, configurable via the `NETALERTX_API` environment variable). You can access them via the `/php/server/query_json.php?file=user_notifications.json` endpoint.
### Available endpoints
@@ -332,7 +332,7 @@ Grafana template sample: [Download json](./samples/API/Grafana_Dashboard.json)
## API Endpoint: /log files
This API endpoint retrieves files from the `/app/log` folder.
This API endpoint retrieves files from the `/tmp/log` folder.
- Endpoint URL: `php/server/query_logs.php?file=<file name>`
- Host: `same as front end (web ui)`
@@ -357,7 +357,7 @@ This API endpoint retrieves files from the `/app/log` folder.
## API Endpoint: /config files
To retrieve files from the `/app/config` folder.
To retrieve files from the `/data/config` folder.
- Endpoint URL: `php/server/query_config.php?file=<file name>`
- Host: `same as front end (web ui)`
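A minimal sketch for this endpoint, with `<server_ip>` and `<web_ui_port>` as placeholders for the web UI host and port:
```sh
# Fetch the main configuration file through the web UI's PHP endpoint.
curl 'http://<server_ip>:<web_ui_port>/php/server/query_config.php?file=app.conf'
```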

View File

@@ -1,90 +1,162 @@
# Backing things up
# Backing Things Up
> [!NOTE]
> To backup 99% of your configuration backup at least the `/app/config` folder. Please read the whole page (or at least "Scenario 2: Corrupted database") for details.
> Note that database definitions might change over time. The safest way is to restore your older backups into the **same version** of the app they were taken from and then gradually upgrade between releases to the latest version.
> To back up 99% of your configuration, back up at least the `/data/config` folder.
> Database definitions can change between releases, so the safest method is to restore backups using the **same app version** they were taken from, then upgrade incrementally.
There are 4 artifacts that can be used to backup the application:
---
| File | Description | Limitations |
|-----------------------|-------------------------------|-------------------------------|
| `/db/app.db` | Database file(s) | The database file might be in an uncommitted state or corrupted |
| `/config/app.conf` | Configuration file | Can be overridden with the [`APP_CONF_OVERRIDE` env variable](https://github.com/jokob-sk/NetAlertX/tree/main/dockerfiles#docker-environment-variables). |
| `/config/devices.csv` | CSV file containing device information | Doesn't contain historical data |
| `/config/workflows.json` | A JSON file containing your workflows | N/A |
## What to Back Up
There are four key artifacts you can use to back up your NetAlertX configuration:
## Backup strategies
| File | Description | Limitations |
| ------------------------ | ----------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
| `/db/app.db` | The application database | Might be in an uncommitted state or corrupted |
| `/config/app.conf` | Configuration file | Can be overridden using the [`APP_CONF_OVERRIDE`](https://github.com/jokob-sk/NetAlertX/tree/main/dockerfiles#docker-environment-variables) variable |
| `/config/devices.csv` | CSV file containing device data | Does not include historical data |
| `/config/workflows.json` | JSON file containing your workflows | N/A |
The safest approach to backups is to backup everything, by taking regular file system backups of the `/db` and `/config` folders (I use [Kopia](https://github.com/kopia/kopia)).
---
Arguably, the most time is spent setting up the device list, so if only one file is kept I'd recommend to have a latest backup of the `devices_<timestamp>.csv` or `devices.csv` file, followed by the `app.conf` and `workflows.json` files. You can also download `app.conf` and `devices.csv` file in the Maintenance section:
## Where the Data Lives
![Backup and Restore Section in Maintenance](./img/BACKUPS/Maintenance_Backup_Restore.png)
### Scenario 1: Full backup
End-result: Full restore
#### 💾 Source artifacts:
- `/app/db/app.db` (uncorrupted)
- `/app/config/app.conf`
- `/app/config/workflows.json`
#### 📥 Recovery:
To restore the application map the above files as described in the [Setup documentation](https://github.com/jokob-sk/NetAlertX/blob/main/dockerfiles/README.md#docker-paths).
### Scenario 2: Corrupted database
End-result: Partial restore (historical data and some plugin data will be missing)
#### 💾 Source artifacts:
- `/app/config/app.conf`
- `/app/config/devices_<timestamp>.csv` or `/app/config/devices.csv`
- `/app/config/workflows.json`
#### 📥 Recovery:
Even with a corrupted database you can recover what I would argue is 99% of the configuration.
- upload the `app.conf` and `workflows.json` files into the mounted `/app/config/` folder as described in the [Setup documentation](https://github.com/jokob-sk/NetAlertX/blob/main/dockerfiles/README.md#docker-paths).
- rename the `devices_<timestamp>.csv` to `devices.csv` and place it in the `/app/config` folder
- Restore the `devices.csv` backup via the [Maintenance section](./DEVICES_BULK_EDITING.md)
## Data and backup storage
To decide on a backup strategy, check where the data is stored:
Understanding where your data is stored helps you plan your backup strategy.
### Core Configuration
The core application configuration is in the `app.conf` file (See [Settings System](./SETTINGS_SYSTEM.md) for details), such as:
Stored in `/data/config/app.conf`.
This includes settings for:
- Notification settings
- Scanner settings
- Scheduled maintenance settings
- UI configuration
* Notifications
* Scanning
* Scheduled maintenance
* UI preferences
### Core Device Data
(See [Settings System](./SETTINGS_SYSTEM.md) for details.)
The core device data is backed up to the `devices_<timestamp>.csv` or `devices.csv` file via the [CSV Backup `CSVBCKP` Plugin](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/csv_backup). This file contains data, such as:
### Device Data
- Device names
- Device icons
- Device network configuration
- Device categorization
- Device custom properties data
Stored in `/data/config/devices_<timestamp>.csv` or `/data/config/devices.csv`, created by the [CSV Backup `CSVBCKP` Plugin](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/csv_backup).
Contains:
### Historical data
* Device names, icons, and categories
* Network configuration
* Custom properties
Historical data is stored in the `app.db` database (See [Database overview](./DATABASE.md) for details). This data includes:
### Historical Data
- Plugin objects
- Plugin historical entries
- History of Events, Notifications, Workflow Events
- Presence history
Stored in `/data/db/app.db` (see [Database Overview](./DATABASE.md)).
Contains:
* Plugin data and historical entries
* Event and notification history
* Device presence history
---
## Backup Strategies
The safest approach is to back up **both** the `/db` and `/config` folders regularly. Tools like [Kopia](https://github.com/kopia/kopia) make this simple and efficient.
If you can only keep a few files, prioritize:
1. The latest `devices_<timestamp>.csv` or `devices.csv`
2. `app.conf`
3. `workflows.json`
You can also download the `app.conf` and `devices.csv` files from the **Maintenance** section:
![Backup and Restore Section in Maintenance](./img/BACKUPS/Maintenance_Backup_Restore.png)
---
## Scenario 1: Full Backup and Restore
**Goal:** Full recovery of your configuration and data.
### 💾 What to Back Up
* `/data/db/app.db` (uncorrupted)
* `/data/config/app.conf`
* `/data/config/workflows.json`
### 📥 How to Restore
Map these files into your container as described in the [Setup documentation](./DOCKER_INSTALLATION.md).
---
## Scenario 2: Corrupted Database
**Goal:** Recover configuration and device data when the database is lost or corrupted.
### 💾 What to Back Up
* `/data/config/app.conf`
* `/data/config/workflows.json`
* `/data/config/devices_<timestamp>.csv` (rename to `devices.csv` during restore)
### 📥 How to Restore
1. Copy `app.conf` and `workflows.json` into `/data/config/`
2. Rename `devices_<timestamp>.csv` to `devices.csv` and place it in `/data/config/` (see the sketch below)
3. Restore via the **Maintenance** section under *Devices → Bulk Editing*
This recovers nearly all configuration, workflows, and device metadata.
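As a concrete sketch of the steps above, assuming the `/data` volume is bind-mounted at `/local_data_dir` and the backup files sit in `~/backup` (both hypothetical paths):
```bash
# Copy configuration and workflows back into the mounted config folder
cp ~/backup/app.conf ~/backup/workflows.json /local_data_dir/config/
# Replace <timestamp> with the actual file name from your backup
cp ~/backup/devices_<timestamp>.csv /local_data_dir/config/devices.csv
```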
---
## Docker-Based Backup and Restore
For users running NetAlertX via Docker, you can back up or restore directly from your host system — a convenient and scriptable option.
### Full Backup (File-Level)
1. **Stop the container:**
```bash
docker stop netalertx
```
2. **Create a compressed archive** of your configuration and database volumes:
```bash
docker run --rm -v local_path/config:/config -v local_path/db:/db alpine tar -cz /config /db > netalertx-backup.tar.gz
```
3. **Restart the container:**
```bash
docker start netalertx
```
### Restore from Backup
1. **Stop the container:**
```bash
docker stop netalertx
```
2. **Restore from your backup file:**
```bash
docker run --rm -i -v local_path/config:/config -v local_path/db:/db alpine tar -C / -xz < netalertx-backup.tar.gz
```
3. **Restart the container:**
```bash
docker start netalertx
```
> This approach uses a temporary, minimal `alpine` container to access Docker-managed volumes. The `tar` command creates or extracts an archive directly from your host's filesystem, making it fast, clean, and reliable for both automation and manual recovery.
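If you want to automate this, a hypothetical cron entry could chain the same three steps (the schedule, `/local_data_dir`, and `/backups` paths are assumptions; adapt them to your setup):
```bash
# m h dom mon dow  command
30 2 * * * docker stop netalertx && docker run --rm -v /local_data_dir/config:/config -v /local_data_dir/db:/db alpine tar -cz /config /db > /backups/netalertx-$(date +\%F).tar.gz && docker start netalertx
```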
---
## Summary
* Back up `/data/config` for configuration and devices; `/data/db` for history
* Keep regular backups, especially before upgrades
* For Docker setups, use the lightweight `alpine`-based backup method for consistency and portability

docs/BUILDS.md
@@ -0,0 +1,82 @@
# NetAlertX Builds: Choose Your Path
NetAlertX provides different installation methods for different needs. This guide helps you choose the right path for security, experimentation, or development.
## 1. Hardened Appliance (Default Production)
> [!NOTE]
> Use this image if: You want to use NetAlertX securely.
### Who is this for?
All users who want a stable, secure, "set-it-and-forget-it" appliance.
### Methodology
- Multi-stage Alpine build
- Aggressively "amputated"
- Locked down for max security
### Source
`Dockerfile (hardened target)`
## 2. "Tinkerer's" Image (Insecure VM-Style)
> [!NOTE]
> Use this image if: You want to experiment with NetAlertX.
### Who is this for?
Power users, developers, and "tinkerers" wanting a familiar "VM-like" experience.
### Methodology
- Traditional Debian build
- Includes full un-hardened OS
- Contains `apt`, `sudo`, `git`
### Source
`Dockerfile.debian`
## 3. Contributor's Devcontainer (Project Developers)
> [!NOTE]
> Use this image if: You want to develop NetAlertX itself.
### Who is this for?
Project contributors who are actively writing and debugging code for NetAlertX.
### Methodology
- Builds `FROM runner` stage
- Loaded by VS Code
- Full debug tools: `xdebug`, `pytest`
### Source
`Dockerfile (devcontainer target)`
# Visualizing the Trade-Offs
This chart compares the three builds across key attributes. A higher score means "more of" that attribute. Notice the clear trade-offs between security and development features.
![tradeoffs](./img/BUILDS/build_images_options_tradeoffs.png)
# Build Process & Origins
The final images originate from two different files and build paths. The main `Dockerfile` uses stages to create *both* the hardened and development container images.
## Official Build Path
Dockerfile -> builder (Stage 1) -> runner (Stage 2) -> hardened (Final Stage) (Production Image) + devcontainer (Final Stage) (Developer Image)
## Legacy Build Path
Dockerfile.debian -> "Tinkerer's" Image (Insecure VM-Style Image)
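As an illustration, and assuming the stage and target names listed above, the three images could be built roughly like this (a sketch, not necessarily the project's official build commands):
```bash
# Hardened production image (multi-stage Dockerfile, 'hardened' target)
docker build --target hardened -t netalertx:hardened .

# Contributor's devcontainer image ('devcontainer' target, normally built by VS Code)
docker build --target devcontainer -t netalertx:devcontainer .

# "Tinkerer's" Debian-based image (legacy build path)
docker build -f Dockerfile.debian -t netalertx:debian .
```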

@@ -1,57 +1,114 @@
# Troubleshooting Common Issues
> [!TIP]
> Before troubleshooting, ensure you have set the correct [Debugging and LOG_LEVEL](./DEBUG_TIPS.md).
---
## Docker Container Doesn't Start
Initial setup issues are often caused by **missing permissions** or **incorrectly mapped volumes**. Always double-check your `docker run` or `docker-compose.yml` against the [official setup guide](./DOCKER_INSTALLATION.md) before proceeding.
### Permissions
Make sure your [file permissions](./FILE_PERMISSIONS.md) are correctly set:
* If you encounter AJAX errors, cannot write to the database, or see an empty screen, check that permissions are correct and review the logs under `/tmp/log`.
* To fix permission issues with the database, update the owner and group of `app.db` as described in the [File Permissions guide](./FILE_PERMISSIONS.md), for example: `docker exec netalertx chown -R www-data:www-data /app/db/app.db`.
* If you still face issues, try mapping the `app.db` file (⚠ not the folder) to `:/app/db/app.db` (see the [docker-compose examples](https://github.com/jokob-sk/NetAlertX/blob/main/dockerfiles/README.md#-docker-composeyml-examples) for details).
### Container Restarts / Crashes
* Check the logs for details. Often a required setting (for example, for a notification method) is missing.
* For more detailed troubleshooting, see [Debug and Troubleshooting Tips](./DEBUG_TIPS.md).
* To observe errors directly, run the container in the foreground instead of with `-d`:
```bash
docker run --rm -it <your_image>
```
---
## Docker Container Starts, But the Application Misbehaves
If the container starts but the app shows unexpected behavior, the cause is often **data corruption**, **incorrect configuration**, or **unexpected input data**.
### Continuous "Loading..." Screen
A misconfigured application may display a persistent `Loading...` dialog. This is usually caused by the backend failing to start.
**Steps to troubleshoot:**
1. Check **Maintenance → Logs** for exceptions.
2. If no exception is visible, check the Portainer logs.
3. Start the container in the foreground to observe exceptions.
4. Enable `trace` or `debug` logging for detailed output (see [Debug Tips](./DEBUG_TIPS.md)).
5. Verify that `GRAPHQL_PORT` is correctly configured.
6. Check browser logs (press `F12`):
* **Console tab** → refresh the page
* **Network tab** → refresh the page
If you are unsure how to resolve errors, provide screenshots or log excerpts in your issue report or Discord discussion.
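A couple of quick host-side checks can also help narrow things down (assuming the default ports `20211` for the web UI and `20212` for the GraphQL server):
```bash
# Is the web UI answering at all?
curl -I http://localhost:20211
# Are the web UI and GraphQL ports actually bound?
ss -tlnp | grep -E '20211|20212'
```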
---
### Common Configuration Issues
#### Incorrect `SCAN_SUBNETS`
If `SCAN_SUBNETS` is misconfigured, you may see only a few devices in your device list after a scan. See the [Subnets Documentation](./SUBNETS.md) for proper configuration.
#### Duplicate Devices and Notifications
* Devices are identified by their **MAC address**.
* If a device's MAC changes, it will be treated as a new device, triggering notifications.
* Prevent this by adjusting your device configuration for Android, iOS, or Windows. See the [Random MACs Guide](./RANDOM_MAC.md).
#### Unable to Resolve Host
* Ensure `SCAN_SUBNETS` uses the correct mask and `--interface`.
* Refer to the [Subnets Documentation](./SUBNETS.md) for detailed guidance.
#### Invalid JSON Errors
* Follow the steps in [Invalid JSON Errors Debug Help](./DEBUG_INVALID_JSON.md).
#### Sudo Execution Fails (e.g., on arpscan on Raspberry Pi 4)
Error:
```
sudo: unexpected child termination condition: 0
```
**Resolution** (based on [this issue](https://github.com/linuxserver/docker-papermerge/issues/4#issuecomment-1003657581)):
```bash
wget ftp.us.debian.org/debian/pool/main/libs/libseccomp/libseccomp2_2.5.3-2_armhf.deb
sudo dpkg -i libseccomp2_2.5.3-2_armhf.deb
```
> ⚠️ The link may break over time. Check [Debian Packages](https://packages.debian.org/sid/armhf/libseccomp2/download) for the latest version.
#### Only Router and Own Device Show Up
* Verify the subnet and interface in `SCAN_SUBNETS`.
* On devices with multiple Ethernet ports, you may need to change `eth0` to the correct interface.
#### Losing Settings or Devices After Update
* Ensure `/data/db` and `/data/config` are mapped to persistent storage.
* Without persistent volumes, these folders are recreated on every update.
* See [Docker Volumes Setup](./DOCKER_COMPOSE.md) for proper configuration.
#### Application Performance Issues
Slowness can be caused by:
* Incorrect settings (causing app restarts) → check `app.log`.
* Too many background processes → disable unnecessary scanners.
* Long scans → limit the number of scanned devices.
* Excessive disk operations or failing maintenance plugins.
> See [Performance Tips](./PERFORMANCE.md) for detailed optimization steps.

docs/DEBUG_GRAPHQL.md → docs/DEBUG_API_SERVER.md
@@ -1,35 +1,34 @@
# Debugging GraphQL server issues
The GraphQL server is an API middle layer, running on its own port specified by `GRAPHQL_PORT`, to retrieve and show the data in the UI. It can also be used to retrieve data for custom third-party integrations. Check the [API documentation](./API.md) for details.
The most common issue is that the GraphQL server doesn't start properly, usually due to a **port conflict**. If you are running multiple NetAlertX instances, make sure to use **unique ports** by changing the `GRAPHQL_PORT` setting. The default is `20212`.
## How to update the `GRAPHQL_PORT` in case of issues
As a first troubleshooting step, try changing the default `GRAPHQL_PORT` setting. Please remember NetAlertX is running on the host, so any application using the same port will cause issues.
### Updating the setting via the Settings UI
Ideally use the Settings UI to update the setting under General -> Core -> GraphQL port:
![GraphQL settings](./img/DEBUG_API_SERVER/graphql_settings_port_token.png)
You might need to temporarily stop other applications or NetAlertX instances causing conflicts to update the setting. The `API_TOKEN` is used to authenticate any API calls, including GraphQL requests.
### Updating the `app.conf` file
If the UI is not accessible, you can directly edit the `app.conf` file in your `/config` folder:
![Editing app.conf](./img/DEBUG_API_SERVER/app_conf_graphql_port.png)
### Using a docker variable
All application settings can also be initialized via the `APP_CONF_OVERRIDE` docker env variable.
```yaml
...
environment:
- TZ=Europe/Berlin
- PORT=20213
- APP_CONF_OVERRIDE={"GRAPHQL_PORT":"20214"}
...
@@ -43,22 +42,22 @@ There are several ways to check if the GraphQL server is running.
You can navigate to Maintenance -> Init Check to see if `isGraphQLServerRunning` is ticked:
![Init Check](./img/DEBUG_API_SERVER/Init_check.png)
### Checking the Logs
You can navigate to Maintenance -> Logs and search for `graphql` to see if it started correctly and is serving requests:
![GraphQL Logs](./img/DEBUG_API_SERVER/graphql_running_logs.png)
### Inspecting the Browser console
In your browser open the dev console (usually F12) and navigate to the Network tab where you can filter GraphQL requests (e.g., reload the Devices page).
![Browser Network Tab](./img/DEBUG_API_SERVER/network_graphql.png)
You can then inspect any of the POST requests by opening them in a new tab.
![Browser GraphQL Json](./img/DEBUG_API_SERVER/dev_console_graphql_json.png)

@@ -3,13 +3,13 @@
Check the HTTP response of the failing backend call by following these steps:
- Open the developer console in your browser (usually the F12 key, e.g. in Chrome).
- Follow the steps in this screenshot:
![F12DeveloperConsole][F12DeveloperConsole]
- Copy the URL causing the error and enter it in the address bar of your browser directly and hit enter. The copied URLs could look something like this (notice the query strings at the end):
- `http://<server>:20211/api/table_devices.json?nocache=1704141103121`
- `http://<server>:20211/php/server/devices.php?action=getDevicesTotals`
- Post the error response in the existing issue thread on GitHub or create a new issue and include the redacted response of the failing query.

@@ -27,7 +27,7 @@ Sometimes, the UI might not be accessible. In that case, you can access the logs
3. **Check the PHP application error log:**
```bash
cat /tmp/log/app.php_errors.log
```
These logs will help identify syntax issues, fatal errors, or startup problems when the UI fails to load properly.

@@ -1,5 +1,8 @@
# Troubleshooting plugins
> [!TIP]
> Before troubleshooting, please ensure you have the right [Debugging and LOG_LEVEL set](./DEBUG_TIPS.md).
## High-level overview
If a plugin supplies data to the main app, it does so either via a SQL query or via a script that updates the `last_result.log` file in the plugin log folder (`app/log/plugins/`).
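For example, to see which plugins have written output recently, you can list that folder (exact file names vary by plugin and version, and newer builds may keep logs under `/tmp/log` instead):
```bash
ls -lt /app/log/plugins/
```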
@@ -9,7 +12,7 @@ For a more in-depth overview on how plugins work check the [Plugins development
### Prerequisites
- Make sure you read and followed the specific plugin setup instructions.
- Ensure you have [debug enabled (see More Logging)](./DEBUG_TIPS.md)
### Potential issues
@@ -47,9 +50,9 @@ Input data from the plugin might cause mapping issues in specific edge cases. Lo
17:31:05 [Plugins] history_to_insert count: 4
17:31:05 [Plugins] objects_to_insert count: 0
17:31:05 [Plugins] objects_to_update count: 4
17:31:05 [Plugin utils] In pluginEvents there are 2 events with the status "watched-not-changed"
17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status "missing-in-last-scan"
17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status "watched-not-changed"
17:31:05 [Plugins] Mapping objects to database table: CurrentScan
17:31:05 [Plugins] SQL query for mapping: INSERT into CurrentScan ( "cur_MAC", "cur_IP", "cur_LastQuery", "cur_Name", "cur_Vendor", "cur_ScanMethod") VALUES ( ?, ?, ?, ?, ?, ?)
17:31:05 [Plugins] SQL sqlParams for mapping: [('01:01:01:01:01:01', '172.30.0.1', 0, 'aaaa', 'vvvvvvvvv', 'PIHOLE'), ('02:42:ac:1e:00:02', '172.30.0.2', 0, 'dddd', 'vvvvv2222', 'PIHOLE')]
@@ -80,7 +83,7 @@ These values, if formatted correctly, will also show up in the UI:
### Sharing application state
Sometimes specific log sections are needed to debug issues. The Devices and CurrentScan table data is sometimes needed to figure out what's wrong.
1. Please set `LOG_LEVEL` to `trace` (Disable it once you have the info as this produces big log files).
2. Wait for the issue to occur.

@@ -1,30 +1,36 @@
# Debugging and troubleshooting
Please follow tips 1 - 4 to get a more detailed error.
## 1. More Logging
When debugging an issue always set the highest log level:
`LOG_LEVEL='trace'`
## 2. Surfacing errors when container restarts
Start the container via the **terminal** with a command similar to this one:
```bash
docker run \
--network=host \
--restart unless-stopped \
-v /local_data_dir:/data \
-v /etc/localtime:/etc/localtime:ro \
--tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
-e PORT=20211 \
-e APP_CONF_OVERRIDE='{"GRAPHQL_PORT":"20214"}' \
ghcr.io/jokob-sk/netalertx:latest
```
Note: Your `/local_data_dir` should contain a `config` and `db` folder.
> [!NOTE]
> ⚠ The most important part is NOT to use the `-d` parameter, so you see the error when the container crashes. Use this error in your issue description.
## 3. Check the _dev image and open issues
If possible, check if your issue got fixed in the `_dev` image before opening a new issue. The container is:
@@ -34,7 +40,7 @@ If possible, check if your issue got fixed in the `_dev` image before opening a
Please also search [open issues](https://github.com/jokob-sk/NetAlertX/issues).
## 4. Disable restart behavior
To prevent a Docker container from automatically restarting in a Docker Compose file, specify the restart policy as `no`:
@@ -48,9 +54,14 @@ services:
# Other service configurations...
```
## 5. Use tmp mount directories to rule out host permission issues
Try starting the container with all data in non-persistent volumes. If this works, the issue might be related to the permissions of your persistent data mount locations on your server. See the [Permissions guide](./FILE_PERMISSIONS.md) for details.
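A minimal sketch of such a throwaway test run (based on the run command above, with no persistent `/data` mount so nothing survives the container):
```bash
docker run --rm -it --network=host \
  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
  -v /etc/localtime:/etc/localtime:ro \
  -e PORT=20211 \
  ghcr.io/jokob-sk/netalertx:latest
```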
## 6. Sharing application state
Sometimes specific log sections are needed to debug issues. The Devices and CurrentScan table data is sometimes needed to figure out what's wrong.
1. Please set `LOG_LEVEL` to `trace` (Disable it once you have the info as this produces big log files).
2. Wait for the issue to occur.
@@ -61,4 +72,4 @@ Sometimes specific log sections are needed to debug issues. The Devices and Curr
## Common issues
See [Common issues](./COMMON_ISSUES.md) for additional troubleshooting tips.

@@ -4,8 +4,8 @@ NetAlertX allows you to mass-edit devices via a CSV export and import feature, o
## UI multi edit
> [!NOTE]
> Make sure you have your backups saved and restorable before doing any mass edits. Check [Backup strategies](./BACKUPS.md).
You can multi-edit devices in the _Devices_ view by selecting the devices to edit and then clicking the _Multi-edit_ button, or via the _Maintenance_ > _Multi-Edit_ section.
@@ -16,23 +16,23 @@ You can select devices in the _Devices_ view by selecting devices to edit and th
The database and device structure may change with new releases. When using the CSV import functionality, ensure the format matches what the application expects. To avoid issues, you can first export the devices and review the column formats before importing any custom data.
> [!NOTE]
> As always, backup everything, just in case.
1. In _Maintenance_ > _Backup / Restore_ click the _CSV Export_ button.
2. A `devices.csv` is generated in the `/config` folder
3. Edit the `devices.csv` file however you like.
![Maintenance > CSV Export](./img/DEVICES_BULK_EDITING/MAINTENANCE_CSV_EXPORT.png)
> [!NOTE]
> The file contains a list of devices, including the network relationships between network nodes and connected devices. You can also trigger this by accessing this URL: `<server>:20211/php/server/devices.php?action=ExportCSV` or via the `CSV Backup` plugin. (💡 You can schedule this)
![Settings > CSV Backup](./img/DEVICES_BULK_EDITING/CSV_BACKUP_SETTINGS.png)
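For example, a `curl` sketch using the URL from the note above (replace `<server>` with your NetAlertX host; depending on your setup, authentication or a session cookie may be required):
```bash
curl -o devices.csv "http://<server>:20211/php/server/devices.php?action=ExportCSV"
```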
### File encoding format
> [!NOTE]
> Keep Linux line endings (suggested editors: Nano, Notepad++)
![Notepad++ line endings](./img/DEVICES_BULK_EDITING/NOTEPAD++.png)

@@ -1,8 +1,8 @@
# Device Management
The Main Info section is where most of the device identifiable information is stored and edited. Some of the information is autodetected via various plugins. Initial values for most of the fields can be specified in the `NEWDEV` plugin.
> [!NOTE]
>
> You can multi-edit devices by selecting them in the main Devices view, from the Maintenance section, or via the CSV Export functionality under Maintenance. More info can be found in the [Devices Bulk-editing docs](./DEVICES_BULK_EDITING.md).
@@ -14,23 +14,23 @@ The Main Info section is where most of the device identifiable information is st
- **MAC**: MAC address of the device. Not editable, unless creating a new dummy device.
- **Last IP**: IP address of the device. Not editable, unless creating a new dummy device.
- **Name**: Friendly device name. Autodetected via various 🆎 Name discovery [plugins](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md). The app attaches `(IP match)` if the name was discovered via an IP match rather than a MAC match, which means the name could be incorrect as IPs might change.
- **Icon**: Partially autodetected. Select an existing or [add a custom icon](./ICONS.md). You can also auto-apply the same icon on all devices of the same type.
- **Owner**: Device owner (The list is self-populated with existing owners and you can add custom values).
- **Type**: Select a device type from the dropdown list (`Smartphone`, `Tablet`,
`Laptop`, `TV`, `router`, etc.) or add a new device type. If you want the device to act as a **Network device** (and be able to be a network node in the Network view), select a type under Network Devices or add a new Network Device type in Settings. More information can be found in the [Network Setup docs](./NETWORK_TREE.md).
- **Vendor**: The manufacturing vendor. Automatically updated by NetAlertX when empty or unknown, can be edited.
- **Group**: Select a group (`Always on`, `Personal`, `Friends`, etc.) or type
your own Group name.
- **Location**: Select the location, usually a room, where the device is located (`Kitchen`, `Attic`, `Living room`, etc.) or add a custom Location.
- **Comments**: Add any comments for the device, such as a serial number, or maintenance information.
> [!NOTE]
>
> Please note the above usage of the fields are only suggestions. You can use most of these fields for other purposes, such as storing the network interface, company owning a device, or similar.
## Dummy devices
You can create dummy devices from the Devices listing screen.
![Create Dummy Device](./img/DEVICE_MANAGEMENT/Devices_CreateDummyDevice.png)
@@ -39,12 +39,12 @@ The **MAC** field and the **Last IP** field will then become editable.
![Save Dummy Device](./img/DEVICE_MANAGEMENT/DeviceEdit_SaveDummyDevice.png)
> [!NOTE]
>
> You can couple this with the `ICMP` plugin which can be used to monitor the status of these devices, if they are actual devices reachable with the `ping` command. If not, you can use a loopback IP address so they appear online, such as `0.0.0.0` or `127.0.0.1`.
## Copying data from an existing device.
To speed up device population you can also copy data from an existing device. This can be done from the **Tools** tab on the Device details.

@@ -55,7 +55,6 @@ The file content should be following, with your custom values.
#--------------------------------
#NETALERTX
#--------------------------------
TZ=Europe/Berlin
PORT=22222 # make sure this port is unique on your whole network
DEV_LOCATION=/development/NetAlertX
APP_DATA_LOCATION=/volume/docker_appdata

docs/DEV_PORTS_HOST_MODE.md
@@ -0,0 +1,44 @@
# Dev Ports in Host Network Mode
When using `"--network=host"` in the devcontainer, VS Code's normal port forwarding model doesn't apply. All container ports are already on the host network namespace, so:
- Listing ports in `forwardPorts` can cause VS Code to pre-bind or reserve them (conflicts with startup scripts waiting for a free port).
- The PORTS panel will not auto-detect services reliably, because forwarding isn't occurring.
- Debugger ports (e.g. Xdebug `9003`, Python debugpy `5678`) can still be listed safely.
## Recommended Pattern
1. Only include debugger ports in `forwardPorts`:
```jsonc
"forwardPorts": [5678, 9003]
```
2. Do NOT list application service ports (e.g. 20211, 20212) there when in host mode.
3. Use the helper task to enumerate current bindings:
- Run task: `> Tasks: Run Task` → `[Dev Container] List NetAlertX Ports`
## Port Enumeration Script
Script: `scripts/list-ports.sh`
Outputs binding address, PID (if resolvable) and process name for key ports.
You can edit the PORTS variable inside that script to add/remove watched ports.
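For illustration, a minimal sketch of what such a helper might look like (the real `scripts/list-ports.sh` may differ):
```bash
#!/bin/sh
# Ports to watch - edit as needed
PORTS="20211 20212 5678 9003"
for p in $PORTS; do
  echo "--- port $p ---"
  # Show listening sockets, their bind address and owning process (if resolvable)
  ss -tlnp 2>/dev/null | grep ":$p " || echo "  not bound"
done
```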
## Xdebug Notes
Set in `99-xdebug.ini`:
```ini
xdebug.client_host=127.0.0.1
xdebug.client_port=9003
xdebug.discover_client_host=1
```
Ensure your IDE is listening on 9003.
## Troubleshooting
| Symptom | Cause | Fix |
|---------|-------|-----|
| `Waiting for port 20211 to free...` repeats | VS Code pre-bound the port via `forwardPorts` | Remove the port from `forwardPorts`, rebuild, retry |
| PHP request hangs at start | Xdebug trying to connect to unresolved host (`host.docker.internal`) | Use `127.0.0.1` or rely on discovery |
| PORTS panel empty | Expected in host mode | Use the port enumeration task |
## Future Improvements
- Optional: add a small web status endpoint summarizing runtime ports.
- Optional: detect host mode in `setup.sh` and skip the wait loop if the PID using port is the intended process.

@@ -1,203 +1,232 @@
# NetAlertX and Docker Compose
> [!WARNING]
> ⚠️ **Important:** The docker-compose setup has recently changed. Carefully read the [Migration guide](https://jokob-sk.github.io/NetAlertX/MIGRATION/?h=migrat#12-migration-from-netalertx-v25524) for detailed instructions.
Great care is taken to ensure NetAlertX meets the needs of everyone while being flexible enough for anyone. This document outlines how you can configure your docker-compose. There are many settings, so we recommend using the Baseline Docker Compose as-is, or modifying it for your system.
> [!NOTE]
> The container needs to run in `network_mode:"host"` to access Layer 2 networking such as arp, nmap and others. Due to lack of support for this feature, Windows host is not a supported operating system.
## Baseline Docker Compose
There is one baseline for NetAlertX. That's the default security-enabled official distribution.
```yaml
services:
netalertx:
#use an environmental variable to set host networking mode if needed
container_name: netalertx # The name shown when you run docker container ls
image: ghcr.io/jokob-sk/netalertx-dev:latest
network_mode: ${NETALERTX_NETWORK_MODE:-host} # Use host networking for ARP scanning and other services
read_only: true # Make the container filesystem read-only
cap_drop: # Drop all capabilities for enhanced security
- ALL
cap_add: # Add only the necessary capabilities
- NET_ADMIN # Required for ARP scanning
- NET_RAW # Required for raw socket operations
- NET_BIND_SERVICE # Required to bind to privileged ports (nbtscan)
volumes:
- type: volume # Persistent Docker-managed named volume for config + database
source: netalertx_data
target: /data # `/data/config` and `/data/db` live inside this mount
read_only: false
# Example custom local folder called /home/user/netalertx_data
# - type: bind
# source: /home/user/netalertx_data
# target: /data
# read_only: false
# ... or use the alternative format
# - /home/user/netalertx_data:/data:rw
- type: bind # Bind mount for timezone consistency
source: /etc/localtime
target: /etc/localtime
read_only: true
# Mount your DHCP server file into NetAlertX for a plugin to access
# - path/on/host/to/dhcp.file:/resources/dhcp.file
# tmpfs mount consolidates writable state for a read-only container and improves performance
# uid=20211 and gid=20211 is the netalertx user inside the container
# mode=1700 grants rwx------ permissions to the netalertx user only
tmpfs:
# Comment out to retain logs between container restarts - this has a server performance impact.
- "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
# Retain logs - comment out tmpfs /tmp if you want to retain logs between container restarts
# Please note if you remove the /tmp mount, you must create and maintain sub-folder mounts.
# - /path/on/host/log:/tmp/log
# - "/tmp/api:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
# - "/tmp/nginx:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
# - "/tmp/run:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
environment:
LISTEN_ADDR: ${LISTEN_ADDR:-0.0.0.0} # Listen for connections on all interfaces
PORT: ${PORT:-20211} # Application port
GRAPHQL_PORT: ${GRAPHQL_PORT:-20212} # GraphQL API port (passed into APP_CONF_OVERRIDE at runtime)
# NETALERTX_DEBUG: ${NETALERTX_DEBUG:-0} # 0=kill all services and restart if any dies. 1 keeps running dead services.
# Resource limits to prevent resource exhaustion
mem_limit: 2048m # Maximum memory usage
mem_reservation: 1024m # Soft memory limit
cpu_shares: 512 # Relative CPU weight for CPU contention scenarios
pids_limit: 512 # Limit the number of processes/threads to prevent fork bombs
logging:
driver: "json-file" # Use JSON file logging driver
options:
max-size: "10m" # Rotate log files after they reach 10MB
max-file: "3" # Keep a maximum of 3 log files
# Always restart the container unless explicitly stopped
restart: unless-stopped
volumes: # Persistent volume for configuration and database storage
netalertx_data:
```
Run or re-run it:
```sh
docker compose up --force-recreate
```
### Customize with Environmental Variables
You can override the default settings by passing environmental variables to the `docker compose up` command.
**Example using a single variable:**
This command runs NetAlertX on port 8080 instead of the default 20211.
```sh
PORT=8080 docker compose up
```
**Example using all available variables:**
This command demonstrates overriding all primary environmental variables: running with host networking, on port 20211, GraphQL on 20212, and listening on all IPs.
```sh
NETALERTX_NETWORK_MODE=host \
LISTEN_ADDR=0.0.0.0 \
PORT=20211 \
GRAPHQL_PORT=20212 \
NETALERTX_DEBUG=0 \
docker compose up
```
## `docker-compose.yaml` Modifications
### Modification 1: Use a Local Folder (Bind Mount)
By default, the baseline compose file uses a single named volume (netalertx_data) mounted at `/data`. This single-volume layout is preferred because NetAlertX manages both configuration and the database under `/data` (for example, `/data/config` and `/data/db`) via its web UI. Using one named volume simplifies permissions and portability: Docker manages the storage and NetAlertX manages the files inside `/data`.
A two-volume layout that mounts `/data/config` and `/data/db` separately (for example, `netalertx_config` and `netalertx_db`) is supported for backward compatibility and some advanced workflows, but it is an abnormal/legacy layout and not recommended for new deployments.
However, if you prefer to have direct, file-level access to your configuration for manual editing, a "bind mount" is a simple alternative. This tells Docker to use a specific folder from your computer (the "host") inside the container.
**How to make the change:**
1. Choose a location on your computer. For example, `/local_data_dir`.
2. Create the subfolders: `mkdir -p /local_data_dir/config` and `mkdir -p /local_data_dir/db`.
3. Edit your `docker-compose.yml` and find the `volumes:` section (the one *inside* the `netalertx:` service).
4. Comment out (add a `#` in front) or delete the `type: volume` blocks for `netalertx_config` and `netalertx_db`.
5. Add new lines pointing to your local folders.
**Before (Using Named Volumes - Preferred):**
```yaml
...
volumes:
- netalertx_config:/data/config:rw # short-form named volume (a source without a leading / is a named volume, not a host path)
- netalertx_db:/data/db:rw
...
```
**After (Using a Local Folder / Bind Mount):**
Make sure to replace `/local_data_dir` with your actual path. The format is `<path_on_your_computer>:<path_inside_container>:<options>`.
```yaml
...
volumes:
# - netalertx_config:/data/config:rw
# - netalertx_db:/data/db:rw
- /local_data_dir/config:/data/config:rw
- /local_data_dir/db:/data/db:rw
...
```
Now, any files created by NetAlertX in `/data/config` will appear in your `/local_data_dir/config` folder.
This same method works for mounting other things, like custom plugins or enterprise NGINX files, as shown in the commented-out examples in the baseline file.
## Example Configuration Summaries
Here are the essential modifications for common alternative setups.
### Example 2: External `.env` File for Paths
This method is useful for keeping your paths and other settings separate from your main compose file, making it more portable.
**`docker-compose.yml` changes:**
```yaml
...
services:
netalertx:
container_name: netalertx
# use the below line if you want to test the latest dev image
# image: "ghcr.io/jokob-sk/netalertx-dev:latest"
image: "ghcr.io/jokob-sk/netalertx:latest"
network_mode: "host"
restart: unless-stopped
volumes:
- ${APP_CONFIG_LOCATION}/netalertx/config:/app/config
- ${APP_DATA_LOCATION}/netalertx/db/:/app/db/
# (optional) useful for debugging if you have issues setting up the container
- ${LOGS_LOCATION}:/app/log
# (API: OPTION 1) use for performance
- type: tmpfs
target: /app/api
# (API: OPTION 2) use when debugging issues
# - local/path/api:/app/api
environment:
- TZ=${TZ}
- PORT=${PORT}
- GRAPHQL_PORT=${GRAPHQL_PORT}
...
```
**`.env` file contents:**
```yaml
#GLOBAL PATH VARIABLES
APP_DATA_LOCATION=/path/to/docker_appdata
APP_CONFIG_LOCATION=/path/to/docker_config
LOGS_LOCATION=/path/to/docker_logs
#ENVIRONMENT VARIABLES
TZ=Europe/Paris
PORT=20211
NETALERTX_NETWORK_MODE=host
LISTEN_ADDR=0.0.0.0
GRAPHQL_PORT=20212
#DEVELOPMENT VARIABLES
DEV_LOCATION=/path/to/local/source/code
```
Run with: `sudo docker-compose --env-file /path/to/.env up`
### Example 3: Docker Swarm
This is for deploying on a Docker Swarm cluster. The key differences from the baseline are the removal of `network_mode:` from the service, and the addition of `deploy:` and `networks:` blocks at both the service and top-level.
Here are the *only* changes you need to make to the baseline compose file to make it Swarm-compatible.
```yaml
services:
netalertx:
...
# network_mode: ${NETALERTX_NETWORK_MODE:-host} # <-- DELETE THIS LINE
...
# 2. ADD a 'networks:' block INSIDE the service to connect to the external host network.
networks:
- outside
# 3. ADD a 'deploy:' block to manage the service as a swarm replica.
deploy:
mode: replicated
replicas: 1
restart_policy:
condition: on-failure
# 4. ADD a new top-level 'networks:' block at the end of the file to define 'outside' as the external 'host' network.
networks:
outside:
external:
name: "host"
```
### Example 4: Same as Example 2, but with a top-level root directory; also works in Portainer as-is
`docker-compose.yml`
```yaml
services:
netalertx:
container_name: netalertx
# use the below line if you want to test the latest dev image instead of the stable release
# image: "ghcr.io/jokob-sk/netalertx-dev:latest"
image: "ghcr.io/jokob-sk/netalertx:latest"
network_mode: "host"
restart: unless-stopped
volumes:
- ${APP_FOLDER}/netalertx/config:/app/config
- ${APP_FOLDER}/netalertx/db:/app/db
# (optional) useful for debugging if you have issues setting up the container
- ${APP_FOLDER}/netalertx/log:/app/log
# (API: OPTION 1) default -> use for performance
- type: tmpfs
target: /app/api
# (API: OPTION 2) use when debugging issues
# - ${APP_FOLDER}/netalertx/api:/app/api
environment:
- TZ=${TZ}
- PORT=${PORT}
- PUID=${PUID}
- PGID=${PGID}
- LISTEN_ADDR=${LISTEN_ADDR}
```
`.env` file
```yaml
APP_FOLDER=/path/to/local/NetAlertX/location
#ENVIRONMENT VARIABLES
PUID=200
PGID=300
TZ=America/New_York
LISTEN_ADDR=0.0.0.0
PORT=20211
#GLOBAL PATH VARIABLE
# you may want to create a dedicated user and group to run the container with
# sudo groupadd -g 300 nax-g
# sudo useradd -u 200 -g 300 nax-u
# mkdir -p $APP_FOLDER/{db,config,log}
# chown -R 200:300 $APP_FOLDER
# chmod -R 775 $APP_FOLDER
# DEVELOPMENT VARIABLES
# you can create multiple env files called .env.dev1, .env.dev2 etc and use them by running:
# docker compose --env-file .env.dev1 up -d
# you can then clone multiple dev copies of NetAlertX just make sure to change the APP_FOLDER and PORT variables in each .env.devX file
```
To run the container execute: `sudo docker-compose --env-file /path/to/.env up`

dockerfiles/README.md → docs/DOCKER_INSTALLATION.md
@@ -1,103 +1,107 @@
[![Docker Size](https://img.shields.io/docker/image-size/jokobsk/netalertx?label=Size&logo=Docker&color=0aa8d2&logoColor=fff&style=for-the-badge)](https://hub.docker.com/r/jokobsk/netalertx)
[![Docker Pulls](https://img.shields.io/docker/pulls/jokobsk/netalertx?label=Pulls&logo=docker&color=0aa8d2&logoColor=fff&style=for-the-badge)](https://hub.docker.com/r/jokobsk/netalertx)
[![GitHub Release](https://img.shields.io/github/v/release/jokob-sk/NetAlertX?color=0aa8d2&logoColor=fff&logo=GitHub&style=for-the-badge)](https://github.com/jokob-sk/NetAlertX/releases)
[![Discord](https://img.shields.io/discord/1274490466481602755?color=0aa8d2&logoColor=fff&logo=Discord&style=for-the-badge)](https://discord.gg/NczTUTWyRr)
[![Home Assistant](https://img.shields.io/badge/Repo-blue?logo=home-assistant&style=for-the-badge&color=0aa8d2&logoColor=fff&label=Add)](https://my.home-assistant.io/redirect/supervisor_add_addon_repository/?repository_url=https%3A%2F%2Fgithub.com%2Falexbelgium%2Fhassio-addons)
# NetAlertX - Network scanner & notification framework
| [📑 Docker guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_INSTALLATION.md) | [🚀 Releases](https://github.com/jokob-sk/NetAlertX/releases) | [📚 Docs](https://jokob-sk.github.io/NetAlertX/) | [🔌 Plugins](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md) | [🤖 Ask AI](https://gurubase.io/g/netalertx)
|----------------------| ----------------------| ----------------------| ----------------------| ----------------------|
<a href="https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/docs/img/GENERAL/github_social_image.jpg" target="_blank">
<img src="https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/docs/img/GENERAL/github_social_image.jpg" width="1000px" />
</a>
Head to [https://netalertx.com/](https://netalertx.com/) for more gifs and screenshots 📷.
> [!NOTE]
> There is also an experimental 🧪 [bare-metal install](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HW_INSTALL.md) method available.
## 📕 Basic Usage
> [!WARNING]
> You will have to run the container on the `host` network and specify `SCAN_SUBNETS` unless you use other [plugin scanners](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md). The initial scan can take a few minutes, so please wait 5-10 minutes for the initial discovery to finish.
```bash
docker run -d --rm --network=host \
-v /local_data_dir:/data \
-v /etc/localtime:/etc/localtime \
--tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
-e PORT=20211 \
-e APP_CONF_OVERRIDE='{"GRAPHQL_PORT":"20214"}' \
ghcr.io/jokob-sk/netalertx:latest
```
See alternative [docker-compose examples](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_COMPOSE.md).
### Default ports
| Default | Description | How to override |
| :------------- |:-------------------------------| ----------------------------------------------------------------------------------:|
| `20211` |Port of the web interface | `-e PORT=20222` |
| `20212` |Port of the backend API server | `-e APP_CONF_OVERRIDE={"GRAPHQL_PORT":"20214"}` or via the `GRAPHQL_PORT` Setting |
### Docker environment variables
| Variable | Description | Example Value |
| :------------- |:------------------------| -----:|
| `PORT` |Port of the web interface | `20211` |
| `LISTEN_ADDR` |Set the specific IP Address for the listener address for the nginx webserver (web interface). This could be useful when using multiple subnets to hide the web interface from all untrusted networks. | `0.0.0.0` |
|`LOADED_PLUGINS` | Default [plugins](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md) to load. Plugins cannot be loaded with `APP_CONF_OVERRIDE`, you need to use this variable instead and then specify the plugins settings with `APP_CONF_OVERRIDE`. | `["PIHOLE","ASUSWRT"]` |
|`APP_CONF_OVERRIDE` | JSON override for settings (except `LOADED_PLUGINS`). | `{"SCAN_SUBNETS":"['192.168.1.0/24 --interface=eth1']","GRAPHQL_PORT":"20212"}` |
|`ALWAYS_FRESH_INSTALL` | ⚠ If `true` will delete the content of the `/db` & `/config` folders. For testing purposes. Can be coupled with [watchtower](https://github.com/containrrr/watchtower) to have an always freshly installed `netalertx`/`netalertx-dev` image. | `true` |
> You can override the default GraphQL port setting `GRAPHQL_PORT` (set to `20212`) by using the `APP_CONF_OVERRIDE` env variable. `LOADED_PLUGINS` and settings in `APP_CONF_OVERRIDE` can be specified via the UI as well.
### Docker paths
> [!NOTE]
> See also [Backup strategies](https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md).
| Required | Path | Description |
| :------------- | :------------- | :-------------|
| | `:/data/config` | Folder which will contain the `app.conf` & `devices.csv` ([read about devices.csv](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEVICES_BULK_EDITING.md)) files |
| | `:/data/db` | Folder which will contain the `app.db` database file |
| | `/etc/localtime:/etc/localtime:ro` | Ensures the timezone is the same as on the server. |
| | `:/tmp/log` | Logs folder useful for debugging if you have issues setting up the container |
| | `:/tmp/api` | The [API endpoint](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md) containing static (but regularly updated) json and other files. Path configurable via `NETALERTX_API` environment variable. |
| | `:/app/front/plugins/<plugin>/ignore_plugin` | Map a file `ignore_plugin` to ignore a plugin. Plugins can be soft-disabled via settings. More in the [Plugin docs](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md). |
| | `:/etc/resolv.conf` | Use a custom `resolv.conf` file for [better name resolution](https://github.com/jokob-sk/NetAlertX/blob/main/docs/REVERSE_DNS.md). |
> Use separate `db` and `config` directories, do not nest them.
### Initial setup
- If they are missing, the app generates a default `app.conf` and `app.db` file on the first run.
- The preferred way is to manage the configuration via the Settings section of the UI. If the UI is inaccessible, you can modify [app.conf](https://github.com/jokob-sk/NetAlertX/tree/main/back) in the `/data/config/` folder directly.
#### Setting up scanners
You have to specify which network(s) should be scanned. This is done by entering subnets that are accessible from the host. If you use the default `ARPSCAN` plugin, you have to specify at least one valid subnet and interface in the `SCAN_SUBNETS` setting. See the documentation on [How to set up multiple SUBNETS, VLANs and what are limitations](https://github.com/jokob-sk/NetAlertX/blob/main/docs/SUBNETS.md) for troubleshooting and more advanced scenarios.
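For example, a single-subnet value could look like the snippet below in `app.conf`. This is a sketch only — the interface and range are placeholders, and the SUBNETS guide linked above is the reference for the exact syntax and multi-subnet setups.

```env
# /data/config/app.conf — interface and range are placeholders
SCAN_SUBNETS=['192.168.1.0/24 --interface=eth0']
```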
If you are running PiHole you can synchronize devices directly. Check the [PiHole configuration guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PIHOLE_GUIDE.md) for details.
> [!NOTE]
> You can bulk-import devices via the [CSV import method](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEVICES_BULK_EDITING.md).
#### Community guides
You can read or watch several [community configuration guides](https://github.com/jokob-sk/NetAlertX/blob/main/docs/COMMUNITY_GUIDES.md) in Chinese, Korean, German, or French.
> Please note these might be outdated. Rely on official documentation first.
#### Common issues
- Before creating a new issue, please check if a similar issue was [already resolved](https://github.com/jokob-sk/NetAlertX/issues?q=is%3Aissue+is%3Aclosed).
- Check also common issues and [debugging tips](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEBUG_TIPS.md).
## 💙 Support me
| [![GitHub](https://i.imgur.com/emsRCPh.png)](https://github.com/sponsors/jokob-sk) | [![Buy Me A Coffee](https://i.imgur.com/pIM6YXL.png)](https://www.buymeacoffee.com/jokobsk) | [![Patreon](https://i.imgur.com/MuYsrq1.png)](https://www.patreon.com/user?u=84385063) |
| --- | --- | --- |
- Bitcoin: `1N8tupjeCK12qRVU2XrV17WvKK7LCawyZM`
- Ethereum: `0x6e2749Cb42F4411bc98501406BdcD82244e3f9C7`
> 📧 Email me at [netalertx@gmail.com](mailto:netalertx@gmail.com?subject=NetAlertX Donations) if you want to get in touch or if I should add other sponsorship platforms.

docs/DOCKER_MAINTENANCE.md

@@ -0,0 +1,203 @@
# The NetAlertX Container Operator's Guide
> [!WARNING]
> ⚠️ **Important:** The docker-compose setup has recently changed. Carefully read the [Migration guide](https://jokob-sk.github.io/NetAlertX/MIGRATION/?h=migrat#12-migration-from-netalertx-v25524) for detailed instructions.
This guide assumes you are starting with the official `docker-compose.yml` file provided with the project. We strongly recommend starting with, or migrating to, this file as your baseline and modifying it to suit your specific needs (e.g., changing file paths). While there are many ways to configure NetAlertX, the default file is designed to meet the mandatory security baseline and provide layer-2 networking capabilities while operating without startup warnings.
This guide provides direct, concise solutions for common NetAlertX administrative tasks. It is structured to help you identify a problem, implement the solution, and understand the details.
## Guide Contents
- Using a Local Folder for Configuration
- Migrating from a Local Folder to a Docker Volume
- Applying a Custom Nginx Configuration
- Mounting Additional Files for Plugins
> [!NOTE]
>
> Other relevant resources
> - [Fixing Permission Issues](./FILE_PERMISSIONS.md)
> - [Handling Backups](./BACKUPS.md)
> - [Accessing Application Logs](./LOGGING.md)
---
## Task: Using a Local Folder for Configuration
### Problem
You want to edit your `app.conf` and other configuration files directly from your host machine, instead of using a Docker-managed volume.
### Solution
1. Stop the container:
```bash
docker-compose down
```
2. (Optional but Recommended) Back up your data using the method in Part 1.
3. Create a local folder on your host machine (e.g., `/data/netalertx_config`).
4. Edit `docker-compose.yml`:
* **Comment out** the `netalertx_config` volume entry.
* **Uncomment** and **set the path** for the "Example custom local folder" bind mount entry.
```yaml
...
volumes:
# - type: volume
# source: netalertx_config
# target: /data/config
# read_only: false
...
# Example custom local folder called /data/netalertx_config
- type: bind
source: /data/netalertx_config
target: /data/config
read_only: false
...
```
5. (Optional) Restore your backup.
6. Restart the container:
```bash
docker-compose up -d
```
### About This Method
This replaces the Docker-managed volume with a "bind mount." This is a direct mapping between a folder on your host computer (`/data/netalertx_config`) and a folder inside the container (`/data/config`), allowing you to edit the files directly.
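To confirm the bind mount is active, listing the config folder from inside the container should show the same files you see on the host. The container name `netalertx` is an assumption — use yours.

```bash
docker exec netalertx ls -la /data/config
```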
---
## Task: Migrating from a Local Folder to a Docker Volume
### Problem
You are currently using a local folder (bind mount) for your configuration (e.g., `/data/netalertx_config`) and want to switch to the recommended Docker-managed volume (`netalertx_config`).
### Solution
1. Stop the container:
```bash
docker-compose down
```
2. Edit `docker-compose.yml`:
* **Comment out** the bind mount entry for your local folder.
* **Uncomment** the `netalertx_config` volume entry.
```yaml
...
volumes:
- type: volume
source: netalertx_config
target: /data/config
read_only: false
...
# Example custom local folder called /data/netalertx_config
# - type: bind
# source: /data/netalertx_config
# target: /data/config
# read_only: false
...
```
3. (Optional) Initialize the volume:
```bash
docker-compose up -d && docker-compose down
```
4. Run the migration command (**replace `/data/netalertx_config` with your actual path**):
```bash
docker run --rm -v netalertx_config:/config -v /data/netalertx_config:/local-config alpine \
sh -c "tar -C /local-config -c . | tar -C /config -x"
```
5. Restart the container:
```bash
docker-compose up -d
```
### About This Method
This uses a temporary `alpine` container that mounts *both* your source folder (`/local-config`) and destination volume (`/config`). The `tar ... | tar ...` command safely copies all files, including hidden ones, preserving structure.
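To verify the copy before starting NetAlertX again, you can list the volume's contents with another throw-away container (read-only inspection, nothing is modified):

```bash
docker run --rm -v netalertx_config:/config alpine ls -la /config
```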
---
## Task: Applying a Custom Nginx Configuration
### Problem
You need to override the default Nginx configuration to add features like LDAP, SSO, or custom SSL settings.
### Solution
1. Stop the container:
```bash
docker-compose down
```
2. Create your custom config file on your host (e.g., `/data/my-netalertx.conf`).
3. Edit `docker-compose.yml`:
```yaml
...
# Use a custom Enterprise-configured nginx config for ldap or other settings
- /data/my-netalertx.conf:/tmp/nginx/active-config/netalertx.conf:ro
...
```
4. Restart the container:
```bash
docker-compose up -d
```
### About This Method
Docker's bind mount overlays your host file (`my-netalertx.conf`) on top of the default file inside the container. The container remains read-only, but Nginx reads your file as if it were the default.
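To double-check that Nginx is seeing your file rather than the bundled default, you can print the active config from inside the running container (the container name `netalertx` is an assumption):

```bash
docker exec netalertx cat /tmp/nginx/active-config/netalertx.conf
```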
---
## Task: Mounting Additional Files for Plugins
### Problem
A plugin (like `DHCPLSS`) needs to read a file from your host machine (e.g., `/var/lib/dhcp/dhcpd.leases`).
### Solution
1. Stop the container:
```bash
docker-compose down
```
2. Edit `docker-compose.yml` and add a new line under the `volumes:` section:
```yaml
...
volumes:
...
# Mount for DHCPLSS plugin
- /var/lib/dhcp/dhcpd.leases:/mnt/dhcpd.leases:ro
...
```
3. Restart the container:
```bash
docker-compose up -d
```
4. In the NetAlertX web UI, configure the plugin to read from:
```
/mnt/dhcpd.leases
```
### About This Method
This maps your host file to a new, read-only (`:ro`) location inside the container. The plugin can then safely read this file without exposing anything else on your host filesystem.
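A quick sanity check that the file is actually readable from inside the container (container name assumed to be `netalertx`):

```bash
docker exec netalertx head -n 5 /mnt/dhcpd.leases
```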


@@ -8,12 +8,12 @@ This guide shows you how to set up **NetAlertX** using Portainer's **Stacks**
## 1. Prepare Your Host
Before deploying, make sure you have a folder on your Docker host for NetAlertX data. Replace `APP_FOLDER` with your preferred location, for example `/opt` here:
Before deploying, make sure you have a folder on your Docker host for NetAlertX data. Replace `APP_FOLDER` with your preferred location, for example `/local_data_dir` here:
```bash
mkdir -p /opt/netalertx/config
mkdir -p /opt/netalertx/db
mkdir -p /opt/netalertx/log
mkdir -p /local_data_dir/netalertx/config
mkdir -p /local_data_dir/netalertx/db
mkdir -p /local_data_dir/netalertx/log
```
---
@@ -34,32 +34,27 @@ Copy and paste the following YAML into the **Web editor**:
services:
netalertx:
container_name: netalertx
# Use this line for stable release
image: "ghcr.io/jokob-sk/netalertx:latest"
image: "ghcr.io/jokob-sk/netalertx:latest"
# Or, use this for the latest development build
# image: "ghcr.io/jokob-sk/netalertx-dev:latest"
# image: "ghcr.io/jokob-sk/netalertx-dev:latest"
network_mode: "host"
restart: unless-stopped
cap_drop: # Drop all capabilities for enhanced security
- ALL
cap_add: # Re-add necessary capabilities
- NET_RAW
- NET_ADMIN
- NET_BIND_SERVICE
volumes:
- ${APP_FOLDER}/netalertx/config:/app/config
- ${APP_FOLDER}/netalertx/db:/app/db
# Optional: logs (useful for debugging setup issues, comment out for performance)
- ${APP_FOLDER}/netalertx/log:/app/log
# API storage options:
# (Option 1) tmpfs (default, best performance)
- type: tmpfs
target: /app/api
# (Option 2) bind mount (useful for debugging)
# - ${APP_FOLDER}/netalertx/api:/app/api
- ${APP_FOLDER}/netalertx/config:/data/config
- ${APP_FOLDER}/netalertx/db:/data/db
# to sync with system time
- /etc/localtime:/etc/localtime:ro
tmpfs:
# All writable runtime state resides under /tmp; comment out to persist logs between restarts
- "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
environment:
- TZ=${TZ}
- PORT=${PORT}
- APP_CONF_OVERRIDE=${APP_CONF_OVERRIDE}
```
@@ -70,14 +65,26 @@ services:
In the **Environment variables** section of Portainer, add the following:
* `APP_FOLDER=/opt` (or wherever you created the directories in step 1)
* `TZ=Europe/Berlin` (replace with your timezone)
* `APP_FOLDER=/local_data_dir` (or wherever you created the directories in step 1)
* `PORT=22022` (or another port if needed)
* `APP_CONF_OVERRIDE={"GRAPHQL_PORT":"22023"}` (optional advanced settings)
* `APP_CONF_OVERRIDE={"GRAPHQL_PORT":"22023"}` (optional advanced settings, otherwise the backend API server PORT defaults to `20212`)
---
## 5. Deploy the Stack
## 5. Ensure permissions
> [!TIP]
> If you are facing permission issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the `/local_data_dir/db` and `/local_data_dir/config` folders (replace `local_data_dir` with the location where your `/db` and `/config` folders are located).
>
> `sudo chown -R 20211:20211 /local_data_dir`
>
> `sudo chmod -R a+rwx /local_data_dir`
>
---
## 6. Deploy the Stack
1. Scroll down and click **Deploy the stack**.
2. Portainer will pull the image and start NetAlertX.
@@ -89,9 +96,9 @@ http://<your-docker-host-ip>:22022
---
## 6. Verify and Troubleshoot
## 7. Verify and Troubleshoot
* Check logs via Portainer → **Containers** → `netalertx` → **Logs**.
* Logs are stored under `${APP_FOLDER}/netalertx/log` if you enabled that volume.
Once the application is running, configure it by reading the [initial setup](INITIAL_SETUP.md) guide, or [troubleshoot common issues](COMMON_ISSUES.md).


@@ -41,15 +41,7 @@ Use the following Compose snippet to deploy NetAlertX with a **static LAN IP** a
services:
netalertx:
image: ghcr.io/jokob-sk/netalertx:latest
ports:
- 20211:20211
volumes:
- /mnt/YOUR_SERVER/netalertx/config:/app/config:rw
- /mnt/YOUR_SERVER/netalertx/db:/netalertx/app/db:rw
- /mnt/YOUR_SERVER/netalertx/logs:/netalertx/app/log:rw
environment:
- TZ=Europe/London
- PORT=20211
...
networks:
swarm-ipvlan:
ipv4_address: 192.168.1.240 # ⚠️ Choose a free IP from your LAN


@@ -1,23 +1,96 @@
# Managing File Permissions for NetAlertX on Nginx with Docker
# Managing File Permissions for NetAlertX on a Read-Only Container
Sometimes, permission issues arise if your existing host directories were created by a previous container running as root or another UID. The container will fail to start with "Permission Denied" errors.
> [!TIP]
> If you are facing permission issues, try to start the container without mapping your volumes. If that works, then the issue is permission related. You can try e.g., the following command:
> ```
> docker run -d --rm --network=host \
> -e TZ=Europe/Berlin \
> -e PUID=200 -e PGID=200 \
> -e PORT=20211 \
> ghcr.io/jokob-sk/netalertx:latest
> ```
NetAlertX runs on an Nginx web server. On Alpine Linux, Nginx operates as the `nginx` user (if PUID and GID environment variables are not specified, nginx user UID will be set to 102, and its supplementary group `www-data` ID to 82). Consequently, files accessed or written by the NetAlertX application are owned by `nginx:www-data`.
> NetAlertX runs in a **secure, read-only Alpine-based container** under a dedicated `netalertx` user (UID 20211, GID 20211). All writable paths are either mounted as **persistent volumes** or **`tmpfs` filesystems**. This ensures consistent file ownership and prevents privilege escalation.
Upon starting, NetAlertX changes nginx user UID and www-data GID to specified values (or defaults), and the ownership of files on the host system mapped to `/app/config` and `/app/db` in the container to `nginx:www-data`. This ensures that Nginx can access and write to these files. Since the user in the Docker container is mapped to a user on the host system by ID:GID, the files in `/app/config` and `/app/db` on the host system are owned by a user with the same ID and GID (defaults are ID 102 and GID 82). On different systems, this ID:GID may belong to different users, or there may not be a group with ID 82 at all.
Try starting the container with all data in non-persistent volumes. If this works, the issue is likely related to the permissions of your persistent data mount locations on your server.
Option to set specific user UID and GID can be useful for host system users needing to access these files (e.g., backup scripts).
```bash
docker run --rm --network=host \
-v /etc/localtime:/etc/localtime:ro \
--tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
-e PORT=20211 \
ghcr.io/jokob-sk/netalertx:latest
```
> [!WARNING]
> The above should be only used as a test - once the container restarts, all data is lost.
---
## Writable Paths
NetAlertX requires certain paths to be writable at runtime. These paths should be mounted either as **host volumes** or **`tmpfs`** in your `docker-compose.yml` or `docker run` command:
| Path | Purpose | Notes |
| ------------------------------------ | ----------------------------------- | ------------------------------------------------------ |
| `/data/config` | Application configuration | Persistent volume recommended |
| `/data/db` | Database files | Persistent volume recommended |
| `/tmp/log` | Logs | Lives under `/tmp`; optional host bind to retain logs |
| `/tmp/api` | API cache | Subdirectory of `/tmp` |
| `/tmp/nginx/active-config` | Active nginx configuration override | Mount `/tmp` (or override specific file) |
| `/tmp/run` | Runtime directories for nginx & PHP | Subdirectory of `/tmp` |
| `/tmp` | PHP session save directory | Backed by `tmpfs` for runtime writes |
> Mounting `/tmp` as `tmpfs` automatically covers all of its subdirectories (`log`, `api`, `run`, `nginx/active-config`, etc.).
> All these paths will have **UID 20211 / GID 20211** inside the container. Files on the host will appear owned by `20211:20211`.
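A quick way to confirm this on the host is to list the mounted data directory with numeric IDs (the path is illustrative):

```bash
# config and db should show 20211 20211 once permissions are correct
ls -ln /local_data_dir
```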
---
### Solution
1. **Run the container once as root** (`--user "0"`) to allow it to correct permissions automatically:
```bash
docker run -it --rm --name netalertx --user "0" \
-v /local_data_dir:/data \
--tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
ghcr.io/jokob-sk/netalertx:latest
```
2. Wait for logs showing **permissions being fixed**. The container will then **hang intentionally**.
3. Press **Ctrl+C** to stop the container.
4. Start the container normally with your `docker-compose.yml` or `docker run` command.
> The container startup script detects `root` and runs `chown -R 20211:20211` on all volumes, fixing ownership for the secure `netalertx` user.
> [!TIP]
> If you are facing permission issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the `/local_data_dir/db` and `/local_data_dir/config` folders (replace `local_data_dir` with the location where your `/db` and `/config` folders are located).
>
> `sudo chown -R 20211:20211 /local_data_dir`
>
> `sudo chmod -R a+rwx /local_data_dir`
>
---
## Example: docker-compose.yml with `tmpfs`
```yaml
services:
netalertx:
container_name: netalertx
image: "ghcr.io/jokob-sk/netalertx"
network_mode: "host"
cap_drop: # Drop all capabilities for enhanced security
- ALL
cap_add: # Add only the necessary capabilities
- NET_ADMIN # Required for ARP scanning
- NET_RAW # Required for raw socket operations
- NET_BIND_SERVICE # Required to bind to privileged ports (nbtscan)
restart: unless-stopped
volumes:
- /local_data_dir:/data
- /etc/localtime:/etc/localtime
environment:
- PORT=20211
tmpfs:
- "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
```
> This setup ensures all writable paths are either in `tmpfs` or host-mounted, and the container never writes outside of controlled volumes.
### Permissions Table for Individual Folders
| Folder | User | User ID | Group | Group ID | Permissions | Notes |
|----------------|--------|---------|-----------|----------|-------------|---------------------------------------------------------------------|
| `/app/config` | nginx | PUID (default 102) | www-data | PGID (default 82) | rwxr-xr-x | Ensure `nginx` can read/write; other users can read if in `www-data` |
| `/app/db` | nginx | PUID (default 102) | www-data | PGID (default 82) | rwxr-xr-x | Same as above |


@@ -31,10 +31,11 @@ To improve presence accuracy and reduce false offline states:
### ✅ Increase ARP Scan Timeout
Extend the ARP scanner timeout to ensure full subnet coverage:
Extend the ARP scanner timeout and DURATION to ensure full subnet coverage:
```env
ARPSCAN_RUN_TIMEOUT=360
ARPSCAN_DURATION=30
```
> Adjust based on your network size and device count.


@@ -1,4 +1,4 @@
# NetAlertX Community Helper Scripts Overview
# Community Helper Scripts Overview
This page provides an overview of community-contributed scripts for NetAlertX. These scripts are not actively maintained and are provided as-is.
@@ -14,8 +14,8 @@ You can find all scripts in this [scripts GitHub folder](https://github.com/joko
## Important Notes
> [!NOTE]
> These scripts are community-supplied and not actively maintained. Use at your own discretion.
For detailed usage instructions, refer to each script's documentation in each [scripts GitHub folder](https://github.com/jokob-sk/NetAlertX/tree/main/scripts).


@@ -5,7 +5,7 @@ To download and install NetAlertX on the hardware/server directly use the `curl`
> [!NOTE]
> This is an Experimental feature 🧪 and it relies on community support.
>
> 🙏 Looking for maintainers for this installation method 🙂 Current community volunteers:
> - [slammingprogramming](https://github.com/slammingprogramming)
> - [ingoratsdorf](https://github.com/ingoratsdorf)
>
@@ -13,8 +13,7 @@ To download and install NetAlertX on the hardware/server directly use the `curl`
> Data loss is a possibility, **it is recommended to install NetAlertX using the supplied Docker image**.
> [!WARNING]
> A warning about the installation method below: Piping to bash is [controversial](https://pi-hole.net/2016/07/25/curling-and-piping-to-bash) and may be dangerous, as you cannot see the code that's about to be executed on your system.
If you trust this repo, you can download the install script via one of the methods (curl/wget) below and it will do its best to install NetAlertX on your system.
@@ -40,7 +39,7 @@ Some facts about what and where something will be changed/installed by the HW in
- Only tested to work on the system listed in the install directory.
- **EXPERIMENTAL** and not recommended way to install NetAlertX.
> [!TIP]
> If the below fails try grabbing and installing one of the [previous releases](https://github.com/jokob-sk/NetAlertX/releases) and run the installation from the zip package.
These commands will download the `install.debian12.sh` script from the GitHub repository, make it executable with `chmod`, and then run it using `./install.debian12.sh`.
@@ -63,13 +62,28 @@ wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/debian12/
## 📥 Ubuntu 24 (Noble Numbat)
> [!NOTE]
> Maintained by [ingoratsdorf](https://github.com/ingoratsdorf)
### Installation via curl
```bash
curl -o install.ubuntu24.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.ubuntu24.sh && sudo chmod +x install.ubuntu24.sh && sudo ./install.ubuntu24.sh
curl -o install.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.sh && sudo chmod +x install.sh && sudo ./install.sh
```
### Installation via wget
```bash
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.ubuntu24.sh -O install.ubuntu24.sh && sudo chmod +x install.ubuntu24.sh && sudo ./install.ubuntu24.sh
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.sh -O install.sh && sudo chmod +x install.sh && sudo ./install.sh
```
## 📥 Bare Metal - Proxmox
> [!NOTE]
> Use this on a clean LXC/VM for Debian 13 OR Ubuntu 24.
> The script will detect the OS and build accordingly.
> Maintained by [JVKeller](https://github.com/JVKeller)
### Installation via wget
```bash
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/proxmox/proxmox-install-netalertx.sh -O proxmox-install-netalertx.sh && chmod +x proxmox-install-netalertx.sh && ./proxmox-install-netalertx.sh
```


@@ -4,7 +4,7 @@
NetAlertX can be installed several ways. The best supported option is Docker, followed by a supervised Home Assistant instance, as an Unraid app, and lastly, on bare metal.
- [[Installation] Docker (recommended)](https://github.com/jokob-sk/NetAlertX/blob/main/dockerfiles/README.md)
- [[Installation] Docker (recommended)](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_INSTALLATION.md)
- [[Installation] Home Assistant](https://github.com/alexbelgium/hassio-addons/tree/master/netalertx)
- [[Installation] Unraid App](https://unraid.net/community/apps)
- [[Installation] Bare metal (experimental - looking for maintainers)](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HW_INSTALL.md)


@@ -1,17 +1,15 @@
# Logging
NetAlertX comes with several logs that help to identify application issues.
For plugin-specific log debugging, please read the [Debug Plugins](./DEBUG_PLUGINS.md) guide.
When debugging any issue, increase the `LOG_LEVEL` Setting as per the [Debug tips](./DEBUG_TIPS.md) documentation.
NetAlertX comes with several logs that help to identify application issues. These include nginx logs, app, or plugin logs. For plugin-specific log debugging, please read the [Debug Plugins](./DEBUG_PLUGINS.md) guide.
> [!NOTE]
> When debugging any issue, increase the `LOG_LEVEL` Setting as per the [Debug tips](./DEBUG_TIPS.md) documentation.
## Main logs
You can find most of the logs exposed in the UI under _Maintenance -> Logs_.
If the UI is inaccessible, you can access them under `/app/log`.
If the UI is inaccessible, you can access them under `/tmp/log`.
![Logs](./img/LOGGING/maintenance_logs.png)
@@ -24,3 +22,54 @@ If a Plugin supplies data to the main app it's done either vie a SQL query or vi
The data is in most of the cases then displayed in the application under _Integrations -> Plugins_ (or _Device -> Plugins_ if the plugin is supplying device-specific data).
![Plugin objects](./img/LOGGING/logging_integrations_plugins.png)
## Viewing Logs on the File System
By default, you will not find any log files on the filesystem. The container is `read-only` and writes logs to a temporary in-memory filesystem (`tmpfs`) for security and performance. The application follows container best practices by writing all logs to the standard output (`stdout`) and standard error (`stderr`) streams. Docker's logging driver (set in `docker-compose.yml`) captures this stream automatically, allowing you to access it with the `docker logs <container_name>` command.
* **To see all logs since the last restart:**
```bash
docker logs netalertx
```
* **To watch the logs live (live feed):**
```bash
docker logs -f netalertx
```
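To narrow the stream down to likely problems, the output can be piped through `grep` (the container name `netalertx` is assumed):

```bash
docker logs netalertx 2>&1 | grep -iE "error|warn"
```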
## Enabling Persistent File-Based Logs
The default logs are erased every time the container restarts because they are stored in temporary in-memory storage (`tmpfs`). If you need to keep a persistent, file-based log history, follow the steps below.
> [!NOTE]
> This might lead to performance degradation, so this approach is only suggested when actively debugging issues. See the [Performance optimization](./PERFORMANCE.md) documentation for details.
1. Stop the container:
```bash
docker-compose down
```
2. Edit your `docker-compose.yml` file:
* **Comment out** the `/tmp/log` line under the `tmpfs:` section.
* **Uncomment** the "Retain logs" line under the `volumes:` section and set your desired host path.
```yaml
...
tmpfs:
# - "/tmp/log:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
...
volumes:
...
# Retain logs - comment out tmpfs /tmp/log if you want to retain logs between container restarts
- /home/adam/netalertx_logs:/tmp/log
...
```
3. Restart the container:
```bash
docker-compose up -d
```
This change stops Docker from mounting a temporary in-memory volume at `/tmp/log`. Instead, it "bind mounts" a persistent folder from your host computer (e.g., `/home/adam/netalertx_logs` in the snippet above) to that *same location* inside the container.


@@ -1,138 +1,292 @@
# Migration form PiAlert to NetAlertX
# Migration
> [!WARNING]
> Follow this guide only after you have downloaded and started a version of NetAlertX prior to v25.6.7 (e.g. `docker pull ghcr.io/jokob-sk/netalertx:25.5.24`) at least once after previously using the PiAlert image. Later versions don't support migration, and devices and settings will have to be migrated manually, e.g. via [CSV import](./DEVICES_BULK_EDITING.md).
When upgrading from older versions of NetAlertX (or PiAlert by jokob-sk), follow the migration steps below to ensure your data and configuration are properly transferred.
## STEPS:
> [!TIP]
> It's always important to have a [backup strategy](./BACKUPS.md) in place.
> [!TIP]
> In short: The application will auto-migrate the database, config, and all device information. A ticker message on top will be displayed until you update your docker mount points. It's always good to have a [backup strategy](./BACKUPS.md) in place.
## Migration scenarios
1. Backup your current config and database (optional `devices.csv` to have a backup) (See bellow tip if facing issues)
2. Stop the container
2. Update the Docker file mount locations in your `docker-compose.yml` or docker run command (See bellow **New Docker mount locations**).
3. Rename the DB and conf files to `app.db` and `app.conf` and place them in the appropriate location.
4. Start the Container
- You are running PiAlert (by jokob-sk)
→ [Read the 1.1 Migration from PiAlert to NetAlertX `v25.5.24`](#11-migration-from-pialert-to-netalertx-v25524)
- You are running NetAlertX (by jokob-sk) `25.5.24` or older
→ [Read the 1.2 Migration from NetAlertX `v25.5.24`](#12-migration-from-netalertx-v25524)
- You are running NetAlertX (by jokob-sk) (`v25.6.7` to `v25.10.1`)
→ [Read the 1.3 Migration from NetAlertX `v25.10.1`](#13-migration-from-netalertx-v25101)
> [!TIP]
> If you have troubles accessing past backups, config or database files you can copy them into the newly mapped directories, for example by running this command in the container: `cp -r /app/config /home/pi/pialert/config/old_backup_files`. This should create a folder in the `config` directory called `old_backup_files` conatining all the files in that location. Another approach is to map the old location and the new one at the same time to copy things over.
### 1.0 Manual Migration
You can migrate data manually, for example by exporting and importing devices using the [CSV import](./DEVICES_BULK_EDITING.md) method.
### New Docker mount locations
### 1.1 Migration from PiAlert to NetAlertX `v25.5.24`
The application installation folder in the docker container has changed from `/home/pi/pialert` to `/app`. That means the new mount points are:
#### STEPS:
| Old mount point | New mount point |
|----------------------|---------------|
| `/home/pi/pialert/config` | `/app/config` |
| `/home/pi/pialert/db` | `/app/db` |
The application will automatically migrate the database, configuration, and all device information.
A banner message will appear at the top of the web UI reminding you to update your Docker mount points.
1. Stop the container
2. [Back up your setup](./BACKUPS.md)
3. Update the Docker file mount locations in your `docker-compose.yml` or docker run command (See below **New Docker mount locations**).
4. Rename the DB and conf files to `app.db` and `app.conf` and place them in the appropriate location.
5. Start the container
> [!TIP]
> If you have trouble accessing past backups, config or database files you can copy them into the newly mapped directories, for example by running this command in the container: `cp -r /data/config /home/pi/pialert/config/old_backup_files`. This should create a folder in the `config` directory called `old_backup_files` containing all the files in that location. Another approach is to map the old location and the new one at the same time to copy things over.
#### New Docker mount locations
The internal application path in the container has changed from `/home/pi/pialert` to `/app`. Update your volume mounts as follows:
| Old mount point | New mount point |
|----------------------|---------------|
| `/home/pi/pialert/config` | `/data/config` |
| `/home/pi/pialert/db` | `/data/db` |
If you were mounting files directly, please note the file names have changed:
| Old file name | New file name |
|----------------------|---------------|
| `pialert.conf` | `app.conf` |
| `pialert.db` | `app.db` |
> [!NOTE]
> The application uses symlinks linking the old db and config locations to the new ones, so data loss should not occur. [Backup strategies](./BACKUPS.md) are still recommended to backup your setup.
> [!NOTE]
> The application automatically creates symlinks from the old database and config locations to the new ones, so data loss should not occur. Read the [backup strategies](./BACKUPS.md) guide to backup your setup.
# Examples
#### Examples
Examples of docker files with the new mount points.
## Example 1: Mapping folders
##### Example 1: Mapping folders
### Old docker-compose.yml
###### Old docker-compose.yml
```yaml
services:
pialert:
container_name: pialert
# use the below line if you want to test the latest dev image
# image: "ghcr.io/jokob-sk/netalertx-dev:latest"
image: "jokobsk/pialert:latest"
network_mode: "host"
# image: "ghcr.io/jokob-sk/netalertx-dev:latest"
image: "jokobsk/pialert:latest"
network_mode: "host"
restart: unless-stopped
volumes:
- local/path/config:/home/pi/pialert/config
- local/path/db:/home/pi/pialert/db
- /local_data_dir/config:/home/pi/pialert/config
- /local_data_dir/db:/home/pi/pialert/db
# (optional) useful for debugging if you have issues setting up the container
- local/path/logs:/home/pi/pialert/front/log
- /local_data_dir/logs:/home/pi/pialert/front/log
environment:
- TZ=Europe/Berlin
- TZ=Europe/Berlin
- PORT=20211
```
### New docker-compose.yml
###### New docker-compose.yml
```yaml
services:
netalertx: # This has changed (🟡optional)
container_name: netalertx # This has changed (🟡optional)
# use the below line if you want to test the latest dev image
# image: "ghcr.io/jokob-sk/netalertx-dev:latest"
image: "ghcr.io/jokob-sk/netalertx:latest" # ⚠ This has changed (🟡optional/🔺required in future)
network_mode: "host"
netalertx: # 🆕 This has changed
container_name: netalertx # 🆕 This has changed
image: "ghcr.io/jokob-sk/netalertx:25.5.24" # 🆕 This has changed
network_mode: "host"
restart: unless-stopped
volumes:
- local/path/config:/app/config # This has changed (🔺required)
- local/path/db:/app/db # This has changed (🔺required)
- /local_data_dir/config:/data/config # 🆕 This has changed
- /local_data_dir/db:/data/db # 🆕 This has changed
# (optional) useful for debugging if you have issues setting up the container
- local/path/logs:/app/log # This has changed (🟡optional)
- /local_data_dir/logs:/tmp/log # 🆕 This has changed
environment:
- TZ=Europe/Berlin
- TZ=Europe/Berlin
- PORT=20211
```
## Example 2: Mapping files
##### Example 2: Mapping files
> [!NOTE]
> The recommendation is to map folders as in Example 1; map files directly only when needed.
### Old docker-compose.yml
###### Old docker-compose.yml
```yaml
services:
pialert:
container_name: pialert
# use the below line if you want to test the latest dev image
# image: "ghcr.io/jokob-sk/netalertx-dev:latest"
image: "jokobsk/pialert:latest"
network_mode: "host"
# image: "ghcr.io/jokob-sk/netalertx-dev:latest"
image: "jokobsk/pialert:latest"
network_mode: "host"
restart: unless-stopped
volumes:
- local/path/config/pialert.conf:/home/pi/pialert/config/pialert.conf
- local/path/db/pialert.db:/home/pi/pialert/db/pialert.db
- /local_data_dir/config/pialert.conf:/home/pi/pialert/config/pialert.conf
- /local_data_dir/db/pialert.db:/home/pi/pialert/db/pialert.db
# (optional) useful for debugging if you have issues setting up the container
- local/path/logs:/home/pi/pialert/front/log
- /local_data_dir/logs:/home/pi/pialert/front/log
environment:
- TZ=Europe/Berlin
- TZ=Europe/Berlin
- PORT=20211
```
### New docker-compose.yml
###### New docker-compose.yml
```yaml
services:
netalertx: # This has changed (🟡optional)
container_name: netalertx # This has changed (🟡optional)
# use the below line if you want to test the latest dev image
# image: "ghcr.io/jokob-sk/netalertx-dev:latest"
image: "ghcr.io/jokob-sk/netalertx:latest" # ⚠ This has changed (🟡optional/🔺required in future)
network_mode: "host"
netalertx: # 🆕 This has changed
container_name: netalertx # 🆕 This has changed
image: "ghcr.io/jokob-sk/netalertx:25.5.24" # 🆕 This has changed
network_mode: "host"
restart: unless-stopped
volumes:
- local/path/config/app.conf:/app/config/app.conf # This has changed (🔺required)
- local/path/db/app.db:/app/db/app.db # This has changed (🔺required)
- /local_data_dir/config/app.conf:/data/config/app.conf # 🆕 This has changed
- /local_data_dir/db/app.db:/data/db/app.db # 🆕 This has changed
# (optional) useful for debugging if you have issues setting up the container
- local/path/logs:/app/log # This has changed (🟡optional)
- /local_data_dir/logs:/tmp/log # 🆕 This has changed
environment:
- TZ=Europe/Berlin
- TZ=Europe/Berlin
- PORT=20211
```
### 1.2 Migration from NetAlertX `v25.5.24`
Versions before `v25.10.1` require an intermediate migration through `v25.5.24` to ensure database compatibility. Skipping this step may cause compatibility issues due to database schema changes introduced after `v25.5.24`.
#### STEPS:
1. Stop the container
2. [Back up your setup](./BACKUPS.md)
3. Upgrade to `v25.5.24` by pinning the release version (See Examples below)
4. Start the container and verify everything works as expected.
5. Stop the container
6. Upgrade to `v25.10.1` by pinning the release version (See Examples below)
7. Start the container and verify everything works as expected.
#### Examples
Examples of docker files with the tagged version.
##### Example 1: Mapping folders
###### docker-compose.yml changes
```yaml
services:
netalertx:
container_name: netalertx
image: "ghcr.io/jokob-sk/netalertx:25.5.24" # 🆕 This is important
network_mode: "host"
restart: unless-stopped
volumes:
- /local_data_dir/config:/data/config
- /local_data_dir/db:/data/db
# (optional) useful for debugging if you have issues setting up the container
- /local_data_dir/logs:/tmp/log
environment:
- TZ=Europe/Berlin
- PORT=20211
```
```yaml
services:
netalertx:
container_name: netalertx
image: "ghcr.io/jokob-sk/netalertx:25.10.1" # 🆕 This is important
network_mode: "host"
restart: unless-stopped
volumes:
- /local_data_dir/config:/data/config
- /local_data_dir/db:/data/db
# (optional) useful for debugging if you have issues setting up the container
- /local_data_dir/logs:/tmp/log
environment:
- TZ=Europe/Berlin
- PORT=20211
```
### 1.3 Migration from NetAlertX `v25.10.1`
Starting from v25.10.1, the container uses a [more secure, read-only runtime environment](./SECURITY_FEATURES.md), which requires all writable paths (e.g., logs, API cache, temporary data) to be mounted as `tmpfs` or permanent writable volumes, with sufficient access [permissions](./FILE_PERMISSIONS.md). The data location has also changed from `/app/db` and `/app/config` to `/data/db` and `/data/config`. See detailed steps below.
#### STEPS:
1. Stop the container
2. [Back up your setup](./BACKUPS.md)
3. Upgrade to `v25.10.1` by pinning the release version (See the example below)
```yaml
services:
netalertx:
container_name: netalertx
image: "ghcr.io/jokob-sk/netalertx:25.10.1" # 🆕 This is important
network_mode: "host"
restart: unless-stopped
volumes:
- /local_data_dir/config:/app/config
- /local_data_dir/db:/app/db
# (optional) useful for debugging if you have issues setting up the container
- /local_data_dir/logs:/tmp/log
environment:
- TZ=Europe/Berlin
- PORT=20211
```
4. Start the container and verify everything works as expected.
5. Stop the container.
6. Perform a one-off migration to the latest `netalertx` image and `20211` user:
> [!NOTE]
> The example below assumes your `/config` and `/db` folders are stored in `local_data_dir`.
> Replace this path with your actual configuration directory. `netalertx` is the container name, which might differ from your setup.
```sh
docker run -it --rm --name netalertx --user "0" \
-v /local_data_dir/config:/data/config \
-v /local_data_dir/db:/data/db \
--tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
ghcr.io/jokob-sk/netalertx:latest
```
...or alternatively, execute:
```bash
sudo chown -R 20211:20211 /local_data_dir
sudo chmod -R a+rwx /local_data_dir/
```
7. Stop the container
8. Update the `docker-compose.yml` as per example below.
```yaml
services:
netalertx:
container_name: netalertx
image: "ghcr.io/jokob-sk/netalertx" # 🆕 This has changed
network_mode: "host"
cap_drop: # 🆕 New line
- ALL # 🆕 New line
cap_add: # 🆕 New line
- NET_RAW # 🆕 New line
- NET_ADMIN # 🆕 New line
- NET_BIND_SERVICE # 🆕 New line
restart: unless-stopped
volumes:
- /local_data_dir:/data # 🆕 This folder contains your /db and /config directories and the parent changed from /app to /data
# Ensuring the timezone is the same as on the server - make sure also the TIMEZONE setting is configured
- /etc/localtime:/etc/localtime:ro # 🆕 New line
environment:
- PORT=20211
# 🆕 New "tmpfs" section START 🔽
tmpfs:
# All writable runtime state resides under /tmp; comment out to persist logs between restarts
- "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
# 🆕 New "tmpfs" section END 🔼
```
9. Start the container and verify everything works as expected.


@@ -1,6 +1,6 @@
## How to Set Up Your Network Page
The **Network** page lets you map how devices connect — visually and logically.
It's especially useful for planning infrastructure, assigning parent-child relationships, and spotting gaps.
![Network tree details](./img/NETWORK_TREE/Network_Sample.png)
@@ -9,11 +9,11 @@ To get started, you'll need to define at least one root node and mark certain
---
Start by creating a root device with the MAC address `Internet`, if the application didn't create one already.
This special MAC address (`Internet`) is required for the root network node — no other value is currently supported.
Set its **Type** to a valid network type — such as `Router` or `Gateway`.
> [!TIP]
> If you don't have one, use the [Create new device](./DEVICE_MANAGEMENT.md#dummy-devices) button on the **Devices** page to add a root device.
---
@@ -21,15 +21,15 @@ Set its **Type** to a valid network type — such as `Router` or `Gateway`.
## ⚡ Quick Setup
1. Open the device you want to use as a network node (e.g. a Switch).
2. Set its **Type** to one of the following:
`AP`, `Firewall`, `Gateway`, `PLC`, `Powerline`, `Router`, `Switch`, `USB LAN Adapter`, `USB WIFI Adapter`, `WLAN`
*(Or add custom types under **Settings → General → `NETWORK_DEVICE_TYPES`**.)*
3. Save the device.
4. Go to the **Network** page — supported device types will appear as tabs.
5. Use the **Assign** button to connect unassigned devices to a network node.
6. If the **Port** is `0` or empty, a Wi-Fi icon is shown. Otherwise, an Ethernet icon appears.
> [!NOTE]
> Use [bulk editing](./DEVICES_BULK_EDITING.md) with _CSV Export_ to fix `Internet` root assignments or update many devices at once.
---
@@ -42,20 +42,22 @@ Let's walk through setting up a device named `raspberrypi` to act as a network
### 1. Set Device Type and Parent
- Go to the **Devices** page
- Open the device detail view for `raspberrypi`
- In the **Type** dropdown, select `Switch`
![Device details](./img/NETWORK_TREE/Network_Device_Details.png)
- Optionally assign a **Parent Node** (where this device connects to) and the **Relationship type** of the connection.
The `nic` relationship type can affect parent notifications — see the setting description and [Notifications documentation](./NOTIFICATIONS.md) for more.
- A device's parent MAC will be overwritten by plugins if its current value is any of the following: "null", "(unknown)", "(Unknown)".
- If you want plugins to be able to overwrite the parent value (for example, when mixing plugins that do not provide parent MACs, like `ARPSCAN`, with those that do, like `UNIFIAPI`), you must set `NEWDEV_devParentMAC` to None.
![Device details](./img/NETWORK_TREE/Network_Device_Details_Parent.png)
> [!NOTE]
> Only certain device types can act as network nodes:
> `AP`, `Firewall`, `Gateway`, `Hypervisor`, `PLC`, `Powerline`, `Router`, `Switch`, `USB LAN Adapter`, `USB WIFI Adapter`, `WLAN`
> You can add custom types via the `NETWORK_DEVICE_TYPES` setting.
- Click **Save**
@@ -81,7 +83,7 @@ You can confirm that `raspberrypi` now acts as a network device in two places:
### 3. Assign Connected Devices
- Use the **Assign** button to link other devices (e.g. PCs) to `raspberrypi`.
- After assigning, connected devices will appear beneath the `raspberrypi` switch node.
![Assigned nodes](./img/NETWORK_TREE/Network_Assigned_Nodes.png)
@@ -92,9 +94,9 @@ You can confirm that `raspberrypi` now acts as a network device in two places:
> Hovering over devices in the tree reveals connection details and tooltips for quick inspection.
> [!NOTE]
> Selecting certain relationship types hides the device in the default device views.
> You can change this behavior by adjusting the `UI_hide_rel_types` setting, which by default is set to `["nic","virtual"]`.
> This means devices with `devParentRelType` set to `nic` or `virtual` will not be shown.
> All devices, regardless of relationship type, are always accessible in the **All devices** view.
---


@@ -44,14 +44,19 @@ In Notification Processing settings, you can specify blanket rules. These allow
1. Notify on (`NTFPRCS_INCLUDED_SECTIONS`) allows you to specify which events trigger notifications. Usual setups will have `new_devices`, `down_devices`, and possibly `down_reconnected` set. Including `plugin` (dependent on the Plugin `<plugin>_WATCH` and `<plugin>_REPORT_ON` settings) and `events` (dependent on the on-device **Alert Events** setting) might be too noisy for most setups. More info on what events these selections include is in the [NTFPRCS plugin](https://github.com/jokob-sk/NetAlertX/blob/main/front/plugins/notification_processing/README.md) documentation.
2. Alert down after (`NTFPRCS_alert_down_time`) is useful if you want to wait for some time before the system sends out a down notification for a device. This is related to the on-device **Alert down** setting and only devices with this checked will trigger a down notification.
3. A filter to allow you to set device-specific exceptions to New devices being added to the app.
4. A filter to allow you to set device-specific exceptions to generated Events.
## Ignoring devices 🔕
You can filter out unwanted notifications globally. This could be because of a misbehaving device that flips between IP addresses, such as a Google Nest/Google Hub (see also the [ARPSCAN docs and the `--exclude-broadcast` flag](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/arp_scan#ip-flipping-on-google-nest-devices)), or because you want to ignore new-device notifications matching a certain pattern.
1. Events Filter (`NTFPRCS_event_condition`) - Filter out Events from notifications.
2. New Devices Filter (`NTFPRCS_new_dev_condition`) - Filter out New Devices from notifications, but log and keep a new device in the system.
## Ignoring devices 💻
![Ignoring new devices](./img/NOTIFICATIONS/NEWDEV_ignores.png)
You can completely ignore detected devices globally. This could be because your instance detects Docker containers, because you want to ignore devices from a specific manufacturer via MAC rules, or because you want to ignore devices on a specific IP range.
1. Ignored MACs (`NEWDEV_ignored_MACs`) - List of MACs to ignore.
2. Ignored IPs (`NEWDEV_ignored_IPs`) - List of IPs to ignore.
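Both settings can also be pre-seeded in `app.conf` like other list settings. The sketch below is illustrative only — the MAC and IP values are placeholders, and the exact list syntax may differ in your version:

```env
# app.conf sketch — MAC and IP values are placeholders
NEWDEV_ignored_MACs=['aa:bb:cc:dd:ee:ff']
NEWDEV_ignored_IPs=['192.168.1.44']
```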


@@ -1,47 +1,50 @@
# Performance Optimization Guide
There are several ways to improve the application's performance. The application has been tested on a range of devices, from a Raspberry Pi 4 to NAS and NUC systems. If you are running the application on a lower-end device, carefully fine-tune the performance settings to ensure an optimal user experience.
There are several ways to improve the application's performance. The application has been tested on a range of devices, from Raspberry Pi 4 units to NAS and NUC systems. If you are running the application on a lower-end device, fine-tuning the performance settings can significantly improve the user experience.
## Common Causes of Slowness
Performance issues are usually caused by:
- **Incorrect settings** – The app may restart unexpectedly. Check `app.log` under **Maintenance → Logs** for details.
- **Too many background processes** – Disable unnecessary scanners.
- **Long scan durations** – Limit the number of scanned devices.
- **Excessive disk operations** – Optimize scanning and logging settings.
- **Failed maintenance plugins** – Ensure maintenance tasks are running properly.
* **Incorrect settings** – The app may restart unexpectedly. Check `app.log` under **Maintenance → Logs** for details.
* **Too many background processes** – Disable unnecessary scanners.
* **Long scan durations** – Limit the number of scanned devices.
* **Excessive disk operations** – Optimize scanning and logging settings.
* **Maintenance plugin failures** – If cleanup tasks fail, performance can degrade over time.
The application performs regular maintenance and database cleanup. If these tasks fail, performance may degrade.
The application performs regular maintenance and database cleanup. If these tasks are failing, you will see slowdowns.
### Database and Log File Size
A large database or oversized log files can slow down performance. You can check database and table sizes on the **Maintenance** page.
A large database or oversized log files can impact performance. You can check database and table sizes on the **Maintenance** page.
![DB size check](./img/PERFORMANCE/db_size_check.png)
> [!NOTE]
> - For **~100 devices**, the database should be around **50MB**.
> - No table should exceed **10,000 rows** in a healthy system.
> - These numbers vary based on network activity and settings.
>
> * For **~100 devices**, the database should be around **50 MB**.
> * No table should exceed **10,000 rows** in a healthy system.
> * Actual values vary based on network activity and plugin settings.
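If the database is bind-mounted on the host (e.g. under `/local_data_dir/db`, as in the examples elsewhere in these docs), you can also check its size directly from the host:

```bash
ls -lh /local_data_dir/db/app.db
```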
---
## Maintenance Plugins
Two plugins help maintain the application's performance:
Two plugins help maintain the system's performance:
### **1. Database Cleanup (DBCLNP)**
- Responsible for database maintenance.
- Check settings in the [DB Cleanup Plugin Docs](/front/plugins/db_cleanup/README.md).
- Ensure it's not failing by checking logs.
- Adjust the schedule (`DBCLNP_RUN_SCHD`) and timeout (`DBCLNP_RUN_TIMEOUT`) if needed.
* Handles database maintenance and cleanup.
* See the [DB Cleanup Plugin Docs](/front/plugins/db_cleanup/README.md).
* Ensure it's not failing by checking logs.
* Adjust the schedule (`DBCLNP_RUN_SCHD`) and timeout (`DBCLNP_RUN_TIMEOUT`) if necessary.
### **2. Maintenance (MAINT)**
- Handles log cleanup and other maintenance tasks.
- Check settings in the [Maintenance Plugin Docs](/front/plugins/maintenance/README.md).
- Ensure its running correctly by checking logs.
- Adjust the schedule (`MAINT_RUN_SCHD`) and timeout (`MAINT_RUN_TIMEOUT`) if needed.
* Cleans logs and performs general maintenance tasks.
* See the [Maintenance Plugin Docs](/front/plugins/maintenance/README.md).
* Verify proper operation via logs.
* Adjust the schedule (`MAINT_RUN_SCHD`) and timeout (`MAINT_RUN_TIMEOUT`) if needed.
---
@@ -50,47 +53,56 @@ Two plugins help maintain the application's performance:
Frequent scans increase resource usage, network traffic, and database read/write cycles.
### **Optimizations**
- **Increase scan intervals** (`<PLUGIN>_RUN_SCHD`) on busy networks or low-end hardware.
- **Extend scan timeouts** (`<PLUGIN>_RUN_TIMEOUT`) to prevent failures.
- **Reduce the subnet size** e.g., from `/16` to `/24` to lower scan loads.
Some plugins have additional options to limit the number of scanned devices. If certain plugins take too long to complete, check if you can optimize scan times by selecting a scan range.
* **Increase scan intervals** (`<PLUGIN>_RUN_SCHD`) on busy networks or low-end hardware.
* **Increase timeouts** (`<PLUGIN>_RUN_TIMEOUT`) to avoid plugin failures.
* **Reduce subnet size** e.g., use `/24` instead of `/16` to reduce scan load.
For example, the **ICMP plugin** allows you to specify a regular expression to scan only IPs that match a specific pattern.
Some plugins also include options to limit which devices are scanned. If certain plugins consistently run long, consider narrowing their scope.
For example, the **ICMP plugin** allows scanning only IPs that match a specific regular expression.
---
## Storing Temporary Files in Memory
On systems with slower I/O speeds, you can optimize performance by storing temporary files in memory. This primarily applies to the `/app/api` and `/app/log` folders.
On devices with slower I/O, you can improve performance by storing temporary files (and optionally the database) in memory using `tmpfs`.
Using `tmpfs` reduces disk writes and improves performance. However, it should be **disabled** if persistent logs or API data storage are required.
> [!WARNING]
> Storing the **database** in `tmpfs` is generally discouraged. Use this only if device data and historical records are not required to persist. If needed, you can pair this setup with the `SYNC` plugin to store important persistent data on another node. See the [Plugins docs](./PLUGINS.md) for details.
Below is an optimized `docker-compose.yml` snippet:
Using `tmpfs` reduces disk writes and speeds up I/O, but **all data stored in memory will be lost on restart**.
Below is an optimized `docker-compose.yml` snippet using non-persistent logs, API data, and DB:
```yaml
version: "3"
services:
netalertx:
container_name: netalertx
# Uncomment the line below to test the latest dev image
# Use this line for the stable release
image: "ghcr.io/jokob-sk/netalertx:latest"
# Or use this line for the latest development build
# image: "ghcr.io/jokob-sk/netalertx-dev:latest"
image: "ghcr.io/jokob-sk/netalertx:latest"
network_mode: "host"
network_mode: "host"
restart: unless-stopped
volumes:
- local/path/config:/app/config
- local/path/db:/app/db
# (Optional) Useful for debugging setup issues
- local/path/logs:/app/log
# (API: OPTION 1) Store temporary files in memory (recommended for performance)
- type: tmpfs # ◀ 🔺
target: /app/api # ◀ 🔺
# (API: OPTION 2) Store API data on disk (useful for debugging)
# - local/path/api:/app/api
environment:
- TZ=Europe/Berlin
- PORT=20211
cap_drop: # Drop all capabilities for enhanced security
- ALL
cap_add: # Re-add necessary capabilities
- NET_RAW
- NET_ADMIN
- NET_BIND_SERVICE
volumes:
- ${APP_FOLDER}/netalertx/config:/data/config
- /etc/localtime:/etc/localtime:ro
tmpfs:
# All writable runtime state resides under /tmp; comment out to persist logs between restarts
- "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
- "/data/db:uid=20211,gid=20211,mode=1700" # ⚠ You will lose historical data on restart
environment:
- PORT=${PORT}
- APP_CONF_OVERRIDE=${APP_CONF_OVERRIDE}
```


@@ -1,6 +1,6 @@
# Integration with PiHole
NetAlertX comes with 2 plugins suitable for integarting with your existing PiHole instace. One plugin is using a direct SQLite DB connection, the other leverages the DHCP.leases file generated by PiHole. You can combine both approaches and also supplement it with other [plugins](/docs/PLUGINS.md).
NetAlertX comes with 2 plugins suitable for integrating with your existing PiHole instance. One plugin is using a direct SQLite DB connection, the other leverages the DHCP.leases file generated by PiHole. You can combine both approaches and also supplement it with other [plugins](/docs/PLUGINS.md).
## Approach 1: `DHCPLSS` Plugin - Import devices from the PiHole DHCP leases file

View File

@@ -64,6 +64,7 @@ Device-detecting plugins insert values into the `CurrentScan` database table. T
| `LUCIRPC` | [luci_import](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/luci_import/) | 🔍 | Import connected devices from OpenWRT | | |
| `MAINT` | [maintenance](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/maintenance/) | ⚙ | Maintenance of logs, etc. | | |
| `MQTT` | [_publisher_mqtt](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/_publisher_mqtt/) | ▶️ | MQTT for synching to Home Assistant | | |
| `MTSCAN` | [mikrotik_scan](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/mikrotik_scan/) | 🔍 | Mikrotik device import & sync | | |
| `NBTSCAN` | [nbtscan_scan](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/nbtscan_scan/) | 🆎 | Nbtscan (NetBIOS-based) name resolution | | |
| `NEWDEV` | [newdev_template](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/newdev_template/) | ⚙ | New device template | | Yes |
| `NMAP` | [nmap_scan](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/nmap_scan/) | ♻ | Nmap port scanning & discovery | | |
@@ -74,6 +75,7 @@ Device-detecting plugins insert values into the `CurrentScan` database table. T
| `OMDSDN` | [omada_sdn_imp](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/omada_sdn_imp/) | 📥/🆎 ❌ | UNMAINTAINED, use `OMDSDNOPENAPI` | 🖧 🔄 | |
| `OMDSDNOPENAPI` | [omada_sdn_openapi](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/omada_sdn_openapi/) | 📥/🆎 | OMADA TP-Link import via OpenAPI | 🖧 | |
| `PIHOLE` | [pihole_scan](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/pihole_scan/) | 🔍/🆎/📥 | Pi-hole device import & sync | | |
| `PIHOLEAPI` | [pihole_api_scan](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/pihole_api_scan/) | 🔍/🆎/📥 | Pi-hole device import & sync via API v6+ | | |
| `PUSHSAFER` | [_publisher_pushsafer](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/_publisher_pushsafer/) | ▶️ | Pushsafer notifications | | |
| `PUSHOVER` | [_publisher_pushover](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/_publisher_pushover/) | ▶️ | Pushover notifications | | |
| `SETPWD` | [set_password](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/set_password/) | ⚙ | Set password | | Yes |

View File

@@ -1,146 +1,192 @@
# Plugins Implementation Details
Plugins provide data to the NetAlertX core, which processes it to detect changes, discover new devices, raise alerts, and apply heuristics.
> [!NOTE]
> For a deep-dive on the specific configuration options and sections of the `config.json` plugin manifest, consult the [Plugins Development Guide](PLUGINS_DEV.md).
---
## Overview: Plugin Data Flow
1. Each plugin runs on a defined schedule.
2. Aligning all plugin schedules is recommended so they execute in the same loop.
3. During execution, all plugins write their collected data into the **`CurrentScan`** table.
4. After all plugins complete, the `CurrentScan` table is evaluated to detect **new devices**, **changes**, and **triggers**.
Although plugins run independently, they contribute to the shared `CurrentScan` table.
To inspect its contents, set `LOG_LEVEL=trace` and check for the log section:
```
================ CurrentScan table content ================
```
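If you prefer not to change the setting in the UI, one option is the `APP_CONF_OVERRIDE` environment variable used elsewhere in these docs; the fragment below is a minimal sketch of that approach, with everything else left at your existing configuration.

```yaml
services:
  netalertx:
    environment:
      # Raise verbosity so the CurrentScan table content is written to the log
      - 'APP_CONF_OVERRIDE={"LOG_LEVEL":"trace"}'
```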
---
## `config.json` Lifecycle
This section outlines how each plugin's `config.json` manifest is read, validated, and used by the core and plugins.
It also describes plugin output expectations and the main plugin categories.
> [!TIP]
> For detailed schema and examples, see the [Plugin Development Guide](PLUGINS_DEV.md).
---
### 1. Loading
* On startup, the core loads `config.json` for each plugin.
* The file acts as a **plugin manifest**, defining metadata, runtime configuration, and database mappings.
---
### 2. Validation
* The core validates required keys (for example, `RUN`).
* Missing or invalid entries may be replaced with defaults or cause the plugin to be disabled.
---
### 3. Preparation
* Plugin parameters (paths, commands, and options) are prepared for execution.
* Database mappings (`mapped_to_table`, `database_column_definitions`) are parsed to define how data integrates with the main app.
---
### 4. Execution
* Plugins may run:
  * On a fixed schedule.
  * Once at startup.
  * After a notification or other trigger.
* The scheduler executes plugins according to their `interval`, running each plugin's command or script.
---
### 5. Parsing
* Plugin output must be **pipe-delimited (`|`)**.
* The core parses each output line following the **Plugin Interface Contract**, splitting and mapping fields accordingly.
---
### 6. Mapping
* Parsed fields are inserted into the plugin's `Plugins_*` tables.
* Data can be mapped into other tables (e.g., `Devices`, `CurrentScan`) as defined by:
  * `database_column_definitions`
  * `mapped_to_table`
**Example:** `Object_PrimaryID → devMAC`
---
### 6a. Plugin Output Contract
All plugins must follow the **Plugin Interface Contract** defined in [PLUGINS_DEV.md](PLUGINS_DEV.md): output values are pipe (`|`) delimited, in a fixed column order.
#### Identifiers
* `Object_PrimaryID` and `Object_SecondaryID` uniquely identify records (for example, `MAC|IP`).
#### Watched Values (`Watched_Value1–4`)
* Used by the core to detect changes between runs.
* Changes in these fields can trigger **notifications**.
#### Extra Field (`Extra`)
* Optional additional value.
* Stored in the database but **not used for alerts**.
#### Helper Values (`Helper_Value1–3`)
* Optional auxiliary data for cases where more than the IDs, watched values, and `Extra` are needed (for display or plugin logic).
* Can be made visible in the UI.
* Stored in the database but **not used for alerts**.
#### Mapping
* While the plugin output itself is free-form, the `database_column_definitions` and `mapped_to_table` settings in `config.json` determine the **target columns and data types** in NetAlertX.
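For illustration only, a single record groups the fields described above into one pipe-delimited line. The exact column set and order (including date and helper columns) are defined by the Plugin Interface Contract in [PLUGINS_DEV.md](PLUGINS_DEV.md), so treat this as a sketch rather than the authoritative format:

```
<Object_PrimaryID>|<Object_SecondaryID>|<Watched_Value1>|<Watched_Value2>|<Watched_Value3>|<Watched_Value4>|<Extra>
```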
---
### 7. Persistence
* Parsed data is **upserted** into the database.
* Conflicts are resolved using the combined key `Object_PrimaryID + Object_SecondaryID`.
---
## Plugin Categories
Beyond the `data_source` setting, plugins fall into several functional categories depending on their purpose. Each category has its own input requirements and output expectations.
### 1. Device Discovery Plugins
* **Inputs:** None, a subnet, or a discovery-service API.
* **Outputs:** At minimum `MAC` and `IP`, resulting in new or updated device records in the `Devices` table.
* **Mapping:** Required, usually into `CurrentScan` via `database_column_definitions` and `data_filters`.
* **Examples:** `ARPSCAN`, `NMAPDEV`.
---
### 2. Device Data Enrichment Plugins
* **Inputs:** Device identifiers (`MAC`, `IP`).
* **Outputs:** Additional metadata for that device (for example, open ports).
* **Mapping:** Controlled via `database_column_definitions` and `data_filters`.
* **Examples:** `NMAP`, `MQTT`.
---
### 3. Name Resolver Plugins
* **Inputs:** Device identifiers (`MAC`, `IP`, hostname).
* **Outputs:** Updated `devName` and `devFQDN` fields.
* **Mapping:** Typically none.
* **Note:** Adding new resolvers currently requires a core change; this is not yet fully driven by the plugin's `config.json`.
* **Examples:** `NBTSCAN`, `NSLOOKUP`.
---
### 4. Generic Plugins
* **Inputs:** Custom, based on the plugin logic or script.
* **Outputs:** Data displayed under **Integrations → Plugins** only.
* **Mapping:** Not required.
* **Examples:** `INTRSPD`, custom monitoring scripts.
---
### 5. Configuration-Only Plugins
* **Inputs/Outputs:** None at runtime.
* **Purpose:** Used for configuration or maintenance tasks.
* **Examples:** `MAINT`, `CSVBCKP`.
---
## Post-Processing
After persistence:
* The core generates notifications for any watched value changes.
* The UI updates with new or modified data.
* Plugins with UI-enabled data display under **Integrations → Plugins**.
---
## Summary
The lifecycle of a plugin configuration is:
**Load → Validate → Prepare → Execute → Parse → Map → Persist → Post-process**
Each plugin must:
* Follow the **output contract**.
* Declare its category (discovery, enrichment, resolver, generic, or configuration-only) and expected output structure.
* Define mappings and watched values clearly in `config.json`.

View File

@@ -13,7 +13,7 @@ There is also an in-app Help / FAQ section that should be answering frequently a
#### 🐳 Docker (Fully supported)
- The main installation method is as a [docker container - follow these instructions here](./DOCKER_INSTALLATION.md).
#### 💻 Bare-metal / On-server (Experimental/community supported 🧪)

View File

@@ -2,21 +2,21 @@
If you are running a DNS server, such as **AdGuard**, set up **Private reverse DNS servers** for better name resolution on your network. Enabling this setting will allow NetAlertX to execute dig and nslookup commands to automatically resolve device names based on their IP addresses.
> [!TIP]
> Before proceeding, ensure that [name resolution plugins](./NAME_RESOLUTION.md) are enabled.
> You can customize how names are cleaned using the `NEWDEV_NAME_CLEANUP_REGEX` setting.
> To auto-update Fully Qualified Domain Names (FQDN), enable the `REFRESH_FQDN` setting.
> Example 1: Reverse DNS `disabled`
>
> ```
> jokob@Synology-NAS:/$ nslookup 192.168.1.58
> ** server can't find 58.1.168.192.in-addr.arpa: NXDOMAIN
> ```
> Example 2: Reverse DNS `enabled`
>
> ```
> jokob@Synology-NAS:/$ nslookup 192.168.1.58
> 58.1.168.192.in-addr.arpa name = jokob-NUC.localdomain.
@@ -33,22 +33,14 @@ If you are running a DNS server, such as **AdGuard**, set up **Private reverse D
### Specifying the DNS in the container
You can specify the DNS server in the docker-compose to improve name resolution on your network.
```yaml
services:
  netalertx:
    container_name: netalertx
    image: "ghcr.io/jokob-sk/netalertx:latest"
    network_mode: host
    ...
    dns: # specifying the DNS servers used for the container
      - 10.8.0.1
      - 10.8.0.17
@@ -56,7 +48,7 @@ services:
### Using a custom resolv.conf file
You can configure a custom **/etc/resolv.conf** file in **docker-compose.yml** and set the nameserver to your LAN DNS server (e.g.: Pi-Hole). See the relevant [resolv.conf man](https://www.man7.org/linux/man-pages/man5/resolv.conf.5.html) entry for details.
#### docker-compose.yml:
@@ -65,22 +57,13 @@ version: "3"
services:
  netalertx:
    container_name: netalertx
    image: "ghcr.io/jokob-sk/netalertx:latest"
    restart: unless-stopped
    network_mode: host
    volumes:
      - /local_data_dir/config/resolv.conf:/etc/resolv.conf # ⚠ Mapping the /resolv.conf file for better name resolution
    ...
```
#### /local_data_dir/config/resolv.conf:
The most important entry below is the `nameserver` line (you can add multiple):

View File

@@ -2,9 +2,9 @@
> Submitted by amazing [cvc90](https://github.com/cvc90) 🙏
> [!NOTE]
> There are various NGINX config files for NetAlertX, some for the bare-metal install, currently Debian 12 and Ubuntu 24 (`netalertx.conf`), and one for the docker container (`netalertx.template.conf`).
>
> The first one you can find in the respective bare metal installer folder `/app/install/\<system\>/netalertx.conf`.
> The docker one can be found in the [install](https://github.com/jokob-sk/NetAlertX/tree/main/install) folder. Map, or use, the one appropriate for your setup.
@@ -17,14 +17,14 @@
2. In this file, paste the following code:
```
server {
    listen 80;
    server_name netalertx;
    proxy_preserve_host on;
    proxy_pass http://localhost:20211/;
    proxy_pass_reverse http://localhost:20211/;
}
```
3. Activate the new website by running the following command:
@@ -43,18 +43,18 @@
2. In this file, paste the following code:
```
server {
    listen 80;
    server_name netalertx;
    proxy_preserve_host on;
    location ^~ /netalertx/ {
        proxy_pass http://localhost:20211/;
        proxy_pass_reverse http://localhost:20211/;
        proxy_redirect ~^/(.*)$ /netalertx/$1;
        rewrite ^/netalertx/?(.*)$ /$1 break;
    }
}
```
3. Check your config with `nginx -t`. If there are any issues, it will tell you.
@@ -73,13 +73,13 @@
2. In this file, paste the following code:
```
server {
    listen 80;
    server_name netalertx;
    proxy_preserve_host on;
    location ^~ /netalertx/ {
        proxy_pass http://localhost:20211/;
        proxy_pass_reverse http://localhost:20211/;
        proxy_redirect ~^/(.*)$ /netalertx/$1;
        rewrite ^/netalertx/?(.*)$ /$1 break;
        sub_filter_once off;
@@ -89,13 +89,13 @@
        sub_filter '(?>$host)/js' '/netalertx/js';
        sub_filter '/img' '/netalertx/img';
        sub_filter '/lib' '/netalertx/lib';
        sub_filter '/php' '/netalertx/php';
    }
}
```
3. Check your config with `nginx -t`. If there are any issues, it will tell you.
4. Activate the new website by running the following command:
`nginx -s reload` or `systemctl restart nginx`
@@ -111,17 +111,17 @@
2. In this file, paste the following code:
```
server {
    listen 443;
    server_name netalertx;
    SSLEngine On;
    SSLCertificateFile /etc/ssl/certs/netalertx.pem;
    SSLCertificateKeyFile /etc/ssl/private/netalertx.key;
    proxy_preserve_host on;
    proxy_pass http://localhost:20211/;
    proxy_pass_reverse http://localhost:20211/;
}
```
3. Check your config with `nginx -t`. If there are any issues, it will tell you.
@@ -140,23 +140,23 @@
2. In this file, paste the following code:
```
server {
    listen 443;
    server_name netalertx;
    SSLEngine On;
    SSLCertificateFile /etc/ssl/certs/netalertx.pem;
    SSLCertificateKeyFile /etc/ssl/private/netalertx.key;
    location ^~ /netalertx/ {
        proxy_pass http://localhost:20211/;
        proxy_pass_reverse http://localhost:20211/;
        proxy_redirect ~^/(.*)$ /netalertx/$1;
        rewrite ^/netalertx/?(.*)$ /$1 break;
    }
}
```
3. Check your config with `nginx -t`. If there are any issues, it will tell you.
4. Activate the new website by running the following command:
`nginx -s reload` or `systemctl restart nginx`
@@ -172,15 +172,15 @@
2. In this file, paste the following code:
```
server {
    listen 443;
    server_name netalertx;
    SSLEngine On;
    SSLCertificateFile /etc/ssl/certs/netalertx.pem;
    SSLCertificateKeyFile /etc/ssl/private/netalertx.key;
    location ^~ /netalertx/ {
        proxy_pass http://localhost:20211/;
        proxy_pass_reverse http://localhost:20211/;
        proxy_redirect ~^/(.*)$ /netalertx/$1;
        rewrite ^/netalertx/?(.*)$ /$1 break;
        sub_filter_once off;
@@ -190,13 +190,13 @@
        sub_filter '(?>$host)/js' '/netalertx/js';
        sub_filter '/img' '/netalertx/img';
        sub_filter '/lib' '/netalertx/lib';
        sub_filter '/php' '/netalertx/php';
    }
}
```
3. Check your config with `nginx -t`. If there are any issues, it will tell you.
4. Activate the new website by running the following command:
`nginx -s reload` or `systemctl restart nginx`
@@ -218,10 +218,10 @@
ProxyPass / http://localhost:20211/
ProxyPassReverse / http://localhost:20211/
</VirtualHost>
```
3. Check your config with `httpd -t` (or `apache2ctl -t` on Debian/Ubuntu). If there are any issues, it will tell you.
4. Activate the new website by running the following command:
`a2ensite netalertx` or `service apache2 reload`
@@ -245,10 +245,10 @@
ProxyPassReverse / http://localhost:20211/
}
</VirtualHost>
```
3. Check your config with `httpd -t` (or `apache2ctl -t` on Debian/Ubuntu). If there are any issues, it will tell you.
4. Activate the new website by running the following command:
`a2ensite netalertx` or `service apache2 reload`
@@ -273,10 +273,10 @@
ProxyPass / http://localhost:20211/
ProxyPassReverse / http://localhost:20211/
</VirtualHost>
```
3. Check your config with `httpd -t` (or `apache2ctl -t` on Debian/Ubuntu). If there are any issues, it will tell you.
4. Activate the new website by running the following command:
`a2ensite netalertx` or `service apache2 reload`
@@ -290,11 +290,11 @@
1. On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.
2. In this file, paste the following code:
```
<VirtualHost *:443>
    ServerName netalertx
    SSLEngine On
    SSLCertificateFile /etc/ssl/certs/netalertx.pem
    SSLCertificateKeyFile /etc/ssl/private/netalertx.key
    location ^~ /netalertx/ {
@@ -303,10 +303,10 @@
ProxyPassReverse / http://localhost:20211/
}
</VirtualHost>
```
3. Check your config with `httpd -t` (or `apache2ctl -t` on Debian/Ubuntu). If there are any issues, it will tell you.
4. Activate the new website by running the following command:
`a2ensite netalertx` or `service apache2 reload`
@@ -381,7 +381,7 @@ location ^~ /netalertx/ {
> Submitted by [Isegrimm](https://github.com/Isegrimm) 🙏 (based on this [discussion](https://github.com/jokob-sk/NetAlertX/discussions/449#discussioncomment-7281442))
Assuming the user already has a working Traefik setup, this is what's needed to make NetAlertX work at a URL like www.domain.com/netalertx/.
Note: Everything in these configs assumes '**www.domain.com**' as your domain name and '**section31**' as an arbitrary name for your certificate setup. You will have to substitute these with your own.
@@ -496,14 +496,9 @@ server {
Mapping the updated file (on the local filesystem at `/appl/docker/netalertx/default`) into the docker container:
```yaml
...
volumes:
  - /appl/docker/netalertx/default:/etc/nginx/sites-available/default
...
```

docs/SECURITY_FEATURES.md Normal file
View File

@@ -0,0 +1,85 @@
# NetAlertX Security: A Layered Defense
Your network security monitor has the "keys to the kingdom," making it a prime target for attackers. If it gets compromised, the game is over.
NetAlertX is engineered from the ground up to prevent this. It's not just an app; it's a purpose-built **security appliance.** Its core design is built on a **zero-trust** philosophy, which is a modern way of saying we **assume a breach will happen** and plan for it. This isn't a single "lock on the door"; it's a **"defense-in-depth"** strategy, more like a medieval castle with a moat, high walls, and guards at every door.
Here's a breakdown of the defensive layers you get, right out of the box using the default configuration.
## Feature 1: The "Digital Concrete" Filesystem
**Methodology:** The core application and its system files are treated as immutable. Once built, the app's code is "set in concrete," preventing attackers from modifying it or planting malware.
* **Immutable Filesystem:** At runtime, the container's entire filesystem is set to `read_only: true`. The application code, system libraries, and all other files are literally frozen. This single control neutralizes a massive range of common attacks.
* **"Ownership-as-a-Lock" Pattern:** During the build, all system files are assigned to a special `readonly` user. This user has no login shell and no power to write to *any* files, even its own. It's a clever, defense-in-depth locking mechanism.
* **Data Segregation:** All user-specific data (like configurations and the device database) is stored completely outside the container in Docker volumes. The application is disposable; the data is persistent.
**What's this mean to you:** Even if an attacker gets in, they **cannot modify the application code or plant malware.** It's like the app is set in digital concrete.
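In Compose terms, this layer boils down to a couple of directives. The fragment below is a minimal sketch rather than the project's full default file, and the host paths are placeholders for your own data directory:

```yaml
services:
  netalertx:
    # Freeze the application filesystem; only explicitly mounted paths stay writable
    read_only: true
    volumes:
      # User data lives outside the container and survives rebuilds
      - /local_data_dir/config:/data/config
      - /local_data_dir/db:/data/db
```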
## Feature 2: Surgical, "Keycard-Only" Access
**Methodology:** The principle of least privilege is strictly enforced. Every process gets only the absolute minimum set of permissions needed for its specific job.
* **Non-Privileged Execution:** The entire NetAlertX stack runs as a dedicated, low-power, non-root user (`netalertx`). No "god mode" privileges are available to the application.
* **Kernel-Level Capability Revocation:** The container is launched with `cap_drop: - ALL`, which tells the Linux kernel to revoke *all* "root-like" special powers.
* **Binary-Specific Privileges (setcap):** This is the "keycard" metaphor in action. After revoking all powers, the system uses `setcap` to grant specific, necessary permissions *only* to the binaries that absolutely require them (like `nmap` and `arp-scan`). This means that even if an attacker compromises the web server, they can't start scanning the network. The web server's "keycard" doesn't open the "scanning" door.
**What's this mean to you:** A security breach is **firewalled.** An attacker who gets into the web UI **does not have the "keycard"** to start scanning your network or take over the system. The breach is contained.
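For illustration, granting a capability to a single binary looks roughly like the commands below. The exact binaries, paths, and capability sets used by the image are defined in the project's build, so treat this as a sketch of the pattern rather than the actual build step:

```bash
# Give only the scanning binary raw-socket access, instead of running anything as root
setcap cap_net_raw,cap_net_admin+eip /usr/bin/arp-scan
# Verify which capabilities a binary has been granted
getcap /usr/bin/arp-scan
```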
## Feature 3: Attack Surface "Amputation"
**Methodology:** The potential attack surface is aggressively minimized by removing every non-essential tool an attacker would want to use.
* **Package Manager Removal:** The `hardened` build stage explicitly deletes the Alpine package manager (`apk del apk-tools`). This makes it impossible for an attacker to simply `apk add` their malicious toolkit.
* **`sudo` Neutralization:** All `sudo` configurations are removed, and the `/usr/bin/sudo` command is replaced with a non-functional shim. Any attempt to escalate privileges this way will fail.
* **Build Toolchain Elimination:** The `Dockerfile` uses a multi-stage build. The initial "builder" stage, which contains all the powerful compilers (`gcc`) and development tools, is completely discarded. The final production image is lean and contains no build tools.
* **Minimal User & Group Files:** The `hardened` stage scrubs the system's `passwd` and `group` files, removing all default system users to minimize potential avenues for privilege escalation.
**What's this mean to you:** An attacker who breaks in finds themselves in an **empty room with no tools.** They have no `sudo` to get more power, no package manager to download weapons, and no compilers to build new ones.
## Feature 4: "Self-Cleaning" Writable Areas
**Methodology:** All writable locations are treated as untrusted, temporary, and non-executable by default.
* **In-Memory Volatile Storage:** The `docker-compose.yml` configuration maps all temporary directories (e.g., `/tmp/log`, `/tmp/api`, `/tmp`) to in-memory `tmpfs` filesystems. They do not exist on the host's disk.
* **Volatile Data:** Because these locations exist only in RAM, their contents are **instantly and irrevocably erased** when the container is stopped. This provides a "self-cleaning" mechanism that purges any attacker-dropped files or payloads on every single restart.
* **Secure Mount Flags:** These in-memory mounts are configured with the `noexec` flag. This is a critical security control: it **prohibits the execution of any binary or script** from a location that is writable.
**What's this mean to you:** Any malicious file an attacker *does* manage to drop is **written in invisible, non-permanent ink.** The file is written to RAM, not disk, so it **vaporizes the instant you restart** the container. Even worse for them, the `noexec` flag means they **can't even run the file** in the first place.
## Feature 5: Built-in Resource Guardrails
**Methodology:** The container is constrained by resource limits to function as a "good citizen" on the host system. This prevents a compromised or runaway process from consuming excessive resources, a common vector for Denial of Service (DoS) attacks.
* **Process Limiting:** The `docker-compose.yml` defines a `pids_limit: 512`. This directly mitigates "fork bomb" attacks, where a process attempts to crash the host by recursively spawning thousands of new processes.
* **Memory & CPU Limits:** The configuration file defines strict resource limits to prevent any single process from exhausting the host's available system resources.
**What's this mean to you:** NetAlertX is a "good neighbor" and **can't be used to crash your host machine.** Even if a process is compromised, it's in a digital straitjacket and **cannot** pull a "denial of service" attack by hogging all your CPU or memory.
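As a rough sketch of what such guardrails look like in Compose (the memory and CPU values below are illustrative, not the project's defaults):

```yaml
services:
  netalertx:
    # Cap the number of processes to blunt fork-bomb style attacks
    pids_limit: 512
    # Keep a runaway process from exhausting the host's memory or CPU
    mem_limit: 512m
    cpus: "1.0"
```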
## Feature 6: The "Pre-Flight" Self-Check
**Methodology:** Before any services start, NetAlertX runs a comprehensive "pre-flight" check to ensure its own security and configuration are sound. It's like a built-in auditor who verifies its own defenses.
* **Active Self-Diagnosis:** On every single boot, NetAlertX runs a series of startup pre-checks—and it's fast. The entire self-check process typically completes in less than a second, letting you get to the web UI in about three seconds from startup.
* **Validates Its Own Security:** These checks actively inspect the other security features. For example, `check-0-permissions.sh` validates that all the "Digital Concrete" files are locked down and all the "Self-Cleaning" areas are writable, just as they should be. It also checks that the correct `netalertx` user is running the show, not `root`.
* **Catches Misconfigurations:** This system acts as a "safety inspector" that catches misconfigurations *before* they can become security holes. If you've made a mistake in your configuration (like a bad folder permission or incorrect network mode), NetAlertX will tell you in the logs *why* it can't start, rather than just failing silently.
**What's this mean to you:** The system is **self-aware and checks its own work.** You get instant feedback if a setting is wrong, and you get peace of mind on every single boot knowing all these security layers are **active and verified,** all in about one second.
## Conclusion: Security by Default
No single security control is a silver bullet. The robust security posture of NetAlertX is achieved through **defense in depth**, layering these methodologies.
An adversary must not only gain initial access but must also find a way to write a payload to a non-executable, in-memory location, without access to any standard system tools, `sudo`, or a package manager. And they must do this while operating as an unprivileged user in a resource-limited environment where the application code is immutable and *actively checks its own integrity on every boot*.

View File

@@ -1,62 +1,64 @@
# Sessions Section in Device View
The **Sessions Section** shows a device's connection history. All data is automatically detected and **cannot be edited**.
![Session info](./img/SESSION_INFO/DeviceDetails_SessionInfo.png)
---
## Key Fields
| Field | Description | Editable? |
| -------------------- | ------------------------------------------------------- | --------------- |
| **First Connection** | The first time the device was detected on the network.  | ❌ Auto-detected |
| **Last Connection**  | The most recent time the device was online.              | ❌ Auto-detected |
Offline devices with missing or conflicting session data (e.g., missing start times) are flagged for review, and the system attempts to infer the missing details (see below).
---
## How Session Information Works
### 1. Detecting New Devices
* New devices are automatically detected when they first appear on the network: devices found in the current scan cycle (**CurrentScan**) are checked against the **Devices** table.
* If a device is new, a **New Device** event is logged, capturing the device's MAC, IP, vendor, and detection time:
`INSERT INTO Events (eve_MAC, eve_IP, eve_DateTime, eve_EventType, eve_AdditionalInfo, eve_PendingAlertEmail) SELECT cur_MAC, cur_IP, '{startTime}', 'New Device', cur_Vendor, 1 FROM CurrentScan WHERE NOT EXISTS (SELECT 1 FROM Devices WHERE devMac = cur_MAC)`
### 2. Logging Connection Sessions
* Every time a device connects, a session entry is created in the **Sessions** table (if no prior session exists), and the `Still Connected` flag is set to `1` (active connection).
* Captured details include:
  * Connection type (wired or wireless)
  * Connection time
  * Device details (MAC, IP, vendor)
`INSERT INTO Sessions (ses_MAC, ses_IP, ses_EventTypeConnection, ses_DateTimeConnection, ses_EventTypeDisconnection, ses_DateTimeDisconnection, ses_StillConnected, ses_AdditionalInfo) SELECT cur_MAC, cur_IP, 'Connected', '{startTime}', NULL, NULL, 1, cur_Vendor FROM CurrentScan WHERE NOT EXISTS (SELECT 1 FROM Sessions WHERE ses_MAC = cur_MAC)`
### 3. Handling Missing or Conflicting Data
* **Triggers:**
Devices are flagged when session data is incomplete, inconsistent, or conflicting. Examples include:
* Missing first or last connection timestamps
* Overlapping session records
* Sessions showing a device as connected and disconnected at the same time
* **System response:**
* Automatically highlights affected devices in the **Sessions Section**.
* Attempts to **infer missing information** from available data, such as:
* Estimating first or last connection times from nearby session events
* Correcting overlapping session periods
* Reconciling conflicting connection statuses
* **User impact:**
* Users do **not** need to manually fix session data.
* The system ensures the device's connection history remains as accurate as possible for monitoring and reporting.
### 4. Updating Sessions
* **Reconnect:** The session is updated with the new connection timestamp.
* **Disconnect:** The disconnection time is recorded and the `Still Connected` flag is set to `0`.
This session information feeds directly into **Monitoring → Presence**, providing a live view of which devices are currently online.
![Monitoring Device Presence](./img/SESSION_INFO/Monitoring_Presence.png)

View File

@@ -1,10 +1,10 @@
# Installation on a Synology NAS
There are different ways to install NetAlertX on a Synology, including SSH-ing into the machine and using the command line. For this guide, we will use the Project option in Container manager.
## Create the folder structure
The folders you are creating below will contain the configuration and the database. Back them up regularly.
1. Create a parent folder named `netalertx`
2. Create a `db` sub-folder
@@ -29,23 +29,31 @@ The folders you are creating below will contain the configuration and the databa
- Path: `/app_storage/netalertx` (will differ from yours)
- Paste in the following template:
```yaml
version: "3"
services:
  netalertx:
    container_name: netalertx
    # use the below line if you want to test the latest dev image
    # image: "ghcr.io/jokob-sk/netalertx-dev:latest"
    image: "ghcr.io/jokob-sk/netalertx:latest"
    network_mode: "host"
    restart: unless-stopped
    cap_drop: # Drop all capabilities for enhanced security
      - ALL
    cap_add: # Re-add necessary capabilities
      - NET_RAW
      - NET_ADMIN
      - NET_BIND_SERVICE
    volumes:
      - /app_storage/netalertx:/data
      # to sync with system time
      - /etc/localtime:/etc/localtime:ro
    tmpfs:
      # All writable runtime state resides under /tmp; comment out to persist logs between restarts
      - "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
    environment:
      - TZ=Europe/Berlin
      - PORT=20211
```
@@ -57,10 +65,7 @@ services:
```yaml
    volumes:
      - /volume1/app_storage/netalertx:/data
```
![Adjusting docker-compose](./img/SYNOLOGY/08_Adjust_docker_compose_volumes.png)
@@ -71,4 +76,13 @@ services:
![Build](./img/SYNOLOGY/09_Run_and_build.png)
10. Navigate to `<Synology URL>:20211` (or your custom port).
11. Read the [Subnets](./SUBNETS.md) and [Plugins](/docs/PLUGINS.md) docs to complete your setup.
> [!TIP]
> If you are facing permissions issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files that are stored in the `/local_data_dir/db` and `/local_data_dir/config` folders (replace `local_data_dir` with the location where your `/db` and `/config` folders are located).
>
> `sudo chown -R 20211:20211 /local_data_dir`
>
> `sudo chmod -R a+rwx /local_data_dir`
>

View File

@@ -1,7 +1,8 @@
# Docker Update Strategies to upgrade NetAlertX
> [!WARNING]
> For versions prior to `v25.6.7` upgrade to version `v25.5.24` first (`docker pull ghcr.io/jokob-sk/netalertx:25.5.24`) as later versions don't support a full upgrade. Alternatively, devices and settings can be migrated manually, e.g. via [CSV import](./DEVICES_BULK_EDITING.md).
> See the [Migration guide](./MIGRATION.md) for details.
This guide outlines approaches for updating Docker containers, usually when upgrading to a newer version of NetAlertX. Each method offers different benefits depending on the situation. Here are the methods:
@@ -15,7 +16,7 @@ You can choose any approach that fits your workflow.
> In the examples I assume that the container name is `netalertx` and the image name is `netalertx` as well.
> [!NOTE]
> See also [Backup strategies](./BACKUPS.md) to be on the safe side.
## 1. Manual Updates
@@ -48,7 +49,7 @@ sudo docker-compose up --pull always -d
## 2. Dockcheck for Bulk Container Updates
Always check the [Dockcheck](https://github.com/mag37/dockcheck) docs if encountering issues with the guide below.
Dockcheck is a useful tool if you have multiple containers to update and some flexibility for handling potential issues that might arise during mass updates. Dockcheck allows you to inspect each container and decide when to update.
@@ -74,7 +75,7 @@ sudo ./dockcheck.sh
## 3. Automated Updates with Watchtower
Always check the [watchtower](https://github.com/containrrr/watchtower) docs if encountering issues with the guide below.
Watchtower monitors your Docker containers and automatically updates them when new images are available. This is ideal for ongoing updates without manual intervention.
@@ -96,7 +97,7 @@ docker run -d \
--interval 300 # Check for updates every 5 minutes
```
#### 3. Run Watchtower to update only NetAlertX:
You can specify which containers to monitor by listing them. For example, to monitor netalertx only:

View File

@@ -2,7 +2,7 @@
The application uses the following default ports:
- **Web UI**: `20211`
- **GraphQL API**: `20212`
The **Web UI** is served by an **nginx** server, while the **API backend** runs on a **Flask (Python)** server.
@@ -15,7 +15,7 @@ The **Web UI** is served by an **nginx** server, while the **API backend** runs
APP_CONF_OVERRIDE={"GRAPHQL_PORT":"20212"}
```
For more information, check the [Docker installation guide](./DOCKER_INSTALLATION.md).
## Possible issues and troubleshooting
@@ -25,8 +25,8 @@ Follow all of the below in order to disqualify potential causes of issues and to
When opening an issue or debugging:
1. Include a screenshot of what you see when accessing `http://<your_server>:20211` (or your custom port)
1. [Follow steps 1, 2, 3, 4 on this page](./DEBUG_TIPS.md)
1. Execute the following in the container to see the processes and their ports and submit a screenshot of the result:
- `sudo apk add lsof`
- `sudo lsof -i`
@@ -36,21 +36,21 @@ When opening an issue or debugging:
![lsof ports](./img/WEB_UI_PORT_DEBUG/container_port.png)
### 2. JavaScript issues
Check for browser console (F12 browser dev console) errors + check different browsers.
### 3. Clear the app cache and cached JavaScript files
Refresh the browser cache (usually Shift + refresh), try a private window, or different browsers. Please also refresh the app cache by clicking the 🔃 (reload) button in the header of the application.
### 4. Disable proxies
If you have any reverse proxy or similar, try disabling it.
### 5. Disable your firewall
If you are using a firewall, try temporarily disabling it.
### 6. Post your docker start details
@@ -62,11 +62,11 @@ In the container execute and investigate:
`cat /var/log/nginx/error.log`
`cat /tmp/log/app.php_errors.log`
### 8. Make sure permissions are correct
> [!TIP]
> You can try to start the container without mapping the `/data/config` and `/data/db` dirs and if the UI shows up then the issue is most likely related to your file system permissions or file ownership.
Please read the [Permissions troubleshooting guide](./FILE_PERMISSIONS.md) and provide a screenshot of the permissions and ownership in the `/data/db` and `/data/config` directories.

View File

@@ -1,22 +1,22 @@
# Workflows debugging and troubleshooting
> [!TIP]
> Before troubleshooting, please ensure you have the right [Debugging and LOG_LEVEL set](./DEBUG_TIPS.md).
Workflows are triggered by various events. These events are captured and listed in the _Integrations -> App Events_ section of the application.
## Troubleshooting triggers
> [!NOTE]
> Workflow events are processed once every 5 seconds. However, if a scan or other background tasks are running, this can cause a delay of up to a few minutes.
If an event doesn't trigger a workflow as expected, check the _App Events_ section for the event. You can filter these by the ID of the device (`devMAC` or `devGUID`).
![App events search](./img/WORKFLOWS/workflows_app_events_search.png)
Once you find the _Event Guid_ and _Object GUID_, use them to find relevant debug entries.
Navigate to _Maintenance -> Logs_ where you can filter the logs based on the _Event or Object GUID_.
![Log events search](./img/WORKFLOWS/workflows_logs_search.png)
@@ -24,9 +24,9 @@ Below you can find some example `app.log` entries that will help you understand
```bash
16:27:03 [WF] Checking if '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggers the workflow 'Sample Device Update Workflow'
16:27:03 [WF] self.triggered 'False' for event '[[155], ['13f0ce26-1835-4c48-ae03-cdaf38f328fe'], [0], ['2025-04-02 05:26:56'], ['Devices'], ['050b6980-7af6-4409-950d-08e9786b7b33'], ['DEVICES'], ['00:11:32:ef:a5:6c'], ['192.168.1.82'], ['050b6980-7af6-4409-950d-08e9786b7b33'], [None], [0], [0], ['devPresentLastScan'], ['online'], ['update'], [None], [None], [None], [None]] and trigger {"object_type": "Devices", "event_type": "insert"}'
16:27:03 [WF] Checking if '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggers the workflow 'Location Change'
16:27:03 [WF] self.triggered 'True' for event '[[155], ['13f0ce26-1835-4c48-ae03-cdaf38f328fe'], [0], ['2025-04-02 05:26:56'], ['Devices'], ['050b6980-7af6-4409-950d-08e9786b7b33'], ['DEVICES'], ['00:11:32:ef:a5:6c'], ['192.168.1.82'], ['050b6980-7af6-4409-950d-08e9786b7b33'], [None], [0], [0], ['devPresentLastScan'], ['online'], ['update'], [None], [None], [None], [None]] and trigger {"object_type": "Devices", "event_type": "update"}'
16:27:03 [WF] Event with GUID '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggered the workflow 'Location Change'
```

View File

@@ -0,0 +1,32 @@
# Excessive Capabilities
## Issue Description
Excessive Linux capabilities are detected beyond the necessary NET_ADMIN, NET_BIND_SERVICE, and NET_RAW. This may indicate overly permissive container configuration.
## Security Ramifications
While the detected capabilities might not directly harm operation, running with more privileges than necessary increases the attack surface. If the container is compromised, additional capabilities could allow broader system access or privilege escalation.
## Why You're Seeing This Issue
This occurs when your Docker configuration grants more capabilities than required for network monitoring. The application only needs specific network-related capabilities for proper function.
## How to Correct the Issue
Limit capabilities to only those required:
- In docker-compose.yml, specify only needed caps:
```yaml
cap_add:
- NET_RAW
- NET_ADMIN
- NET_BIND_SERVICE
```
- Remove any unnecessary `--cap-add` or `--privileged` flags from docker run commands
## Additional Resources
Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.
For detailed Docker Compose configuration guidance, see: [DOCKER_COMPOSE.md](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_COMPOSE.md)

View File

@@ -0,0 +1,27 @@
# File Permission Issues
## Issue Description
NetAlertX cannot read from or write to critical configuration and database files. This prevents the application from saving data, logs, or configuration changes.
## Security Ramifications
Incorrect file permissions can expose sensitive configuration data or database contents to unauthorized access. Network monitoring tools handle sensitive information about devices on your network, and improper permissions could lead to information disclosure.
## Why You're Seeing This Issue
This occurs when the mounted volumes for configuration and database files don't have proper ownership or permissions set for the netalertx user (UID 20211). The container expects these files to be accessible by the service account, not root or other users.
## How to Correct the Issue
Fix permissions on the host system for the mounted directories:
- Ensure the config and database directories are owned by the netalertx user: `chown -R 20211:20211 /path/to/config /path/to/db`
- Set appropriate permissions: `chmod -R 755 /path/to/config /path/to/db` for directories, `chmod 644` for files
- Alternatively, restart the container with root privileges temporarily to allow automatic permission fixing, then switch back to the default user
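To confirm the change took effect, you can list the directories with numeric IDs; the paths below are placeholders for your own mounts.

```bash
# Ownership should show UID/GID 20211 after the chown above
ls -ln /path/to/config /path/to/db
```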
## Additional Resources
Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.
For detailed Docker Compose configuration guidance, see: [DOCKER_COMPOSE.md](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_COMPOSE.md)

View File

@@ -0,0 +1,28 @@
# Incorrect Container User
## Issue Description
NetAlertX is running as UID:GID other than the expected 20211:20211. This bypasses hardened permissions, file ownership, and runtime isolation safeguards.
## Security Ramifications
The application is designed with security hardening that depends on running under a dedicated, non-privileged service account. Using a different user account can silently fail future upgrades and removes crucial isolation between the container and host system.
## Why You're Seeing This Issue
This occurs when you override the container's default user with custom `user:` directives in docker-compose.yml or `--user` flags in docker run commands. The container expects to run as the netalertx user for proper security isolation.
## How to Correct the Issue
Restore the container to the default user:
- Remove any `user:` overrides from docker-compose.yml
- Avoid `--user` flags in docker run commands
- Allow the container to run with its default UID:GID 20211:20211
- Recreate the container so volume ownership is reset automatically
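A quick way to confirm the container is back on the expected service account (the container name `netalertx` is assumed here):

```bash
# Should report uid=20211 gid=20211
docker exec netalertx id
```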
## Additional Resources
Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.
For detailed Docker Compose configuration guidance, see: [DOCKER_COMPOSE.md](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_COMPOSE.md)

View File

@@ -0,0 +1,32 @@
# Missing Network Capabilities
## Issue Description
Raw network capabilities (NET_RAW, NET_ADMIN, NET_BIND_SERVICE) are missing. Tools that rely on these capabilities (e.g., nmap -sS, arp-scan, nbtscan) will not function.
## Security Ramifications
Network scanning and monitoring requires low-level network access that these capabilities provide. Without them, the application cannot perform essential functions like ARP scanning, port scanning, or passive network discovery, severely limiting its effectiveness.
## Why You're Seeing This Issue
This occurs when the container doesn't have the necessary Linux capabilities granted. Docker containers run with limited capabilities by default, and network monitoring tools need elevated network privileges.
## How to Correct the Issue
Add the required capabilities to your container:
- In docker-compose.yml:
```yaml
cap_add:
- NET_RAW
- NET_ADMIN
- NET_BIND_SERVICE
```
- For docker run: `--cap-add=NET_RAW --cap-add=NET_ADMIN --cap-add=NET_BIND_SERVICE`
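You can verify which capabilities were granted to a running container from the host; the container name `netalertx` is assumed here.

```bash
# Lists the capabilities added at container start (expect NET_RAW, NET_ADMIN, NET_BIND_SERVICE)
docker inspect --format '{{.HostConfig.CapAdd}}' netalertx
```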
## Additional Resources
Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.
For detailed Docker Compose configuration guidance, see: [DOCKER_COMPOSE.md](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_COMPOSE.md)

View File

@@ -0,0 +1,36 @@
# Mount Configuration Issues
## Issue Description
NetAlertX has detected configuration issues with your Docker volume mounts. These may include write permission problems, data loss risks, or performance concerns marked with ❌ in the table.
## Security Ramifications
Improper mount configurations can lead to data loss, performance degradation, or security vulnerabilities. For persistent data (database and configuration), using non-persistent storage like tmpfs can result in complete data loss on container restart. For temporary data, using persistent storage may unnecessarily expose sensitive logs or cache data.
## Why You're Seeing This Issue
This occurs when your Docker Compose or run configuration doesn't properly map host directories to container paths, or when the mounted volumes have incorrect permissions. The application requires specific paths to be writable for operation, and some paths should use persistent storage while others should be temporary.
## How to Correct the Issue
Review and correct your volume mounts in docker-compose.yml:
- Ensure `${NETALERTX_DB}` and `${NETALERTX_CONFIG}` use persistent host directories
- Ensure `${NETALERTX_API}` and `${NETALERTX_LOG}` have appropriate permissions
- Avoid backing critical data (database and configuration) with non-persistent filesystems such as tmpfs
- Use bind mounts with proper ownership (netalertx user: 20211:20211)
Example volume configuration:
```yaml
volumes:
- ./data/db:/data/db
- ./data/config:/data/config
- ./data/log:/tmp/log
```
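A sketch of preparing these host directories before the first start, assuming the relative paths from the example above:
```bash
# Create the persistent host directories referenced in the compose example
mkdir -p ./data/db ./data/config ./data/log
# Hand ownership to the container's netalertx user (UID:GID 20211:20211)
sudo chown -R 20211:20211 ./data
```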
## Additional Resources
Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.
For detailed Docker Compose configuration guidance, see: [DOCKER_COMPOSE.md](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_COMPOSE.md)


@@ -0,0 +1,27 @@
# Network Mode Configuration
## Issue Description
NetAlertX is not running with `--network=host`. Bridge networking blocks passive discovery (ARP, NBNS, mDNS) and reduces active scanning accuracy.
## Security Ramifications
Host networking is required for comprehensive network monitoring. Bridge mode isolates the container from raw network access needed for ARP scanning, passive discovery protocols, and accurate device detection. Without host networking, the application cannot fully monitor your network.
## Why You're Seeing This Issue
This occurs when your Docker configuration uses bridge networking instead of host networking. Network monitoring requires direct access to the host's network interfaces to perform passive discovery and active scanning.
## How to Correct the Issue
Enable host networking mode (a combined sketch follows this list):
- In docker-compose.yml, add: `network_mode: host`
- For docker run, use: `--network=host`
- Ensure the container has required capabilities: `--cap-add=NET_RAW --cap-add=NET_ADMIN --cap-add=NET_BIND_SERVICE`
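The combined sketch below pairs host networking with the required capabilities; the image reference is an assumption:
```yaml
services:
  netalertx:
    image: ghcr.io/jokob-sk/netalertx:latest
    network_mode: host   # share the host's network stack for ARP/NBNS/mDNS discovery
    cap_add:
      - NET_RAW
      - NET_ADMIN
      - NET_BIND_SERVICE
```
Note that with `network_mode: host` any `ports:` mappings in the same service are ignored, since the container binds directly to the host's interfaces.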
## Additional Resources
Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.
For detailed Docker Compose configuration guidance, see: [DOCKER_COMPOSE.md](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_COMPOSE.md)


@@ -0,0 +1,36 @@
# Nginx Configuration Mount Issues
## Issue Description
You've configured a custom port for NetAlertX, but the required nginx configuration mount is missing or not writable. Without this mount, the container cannot apply your port changes and will fall back to the default port 20211.
## Security Ramifications
Running in read-only mode (as recommended) prevents the container from modifying its own nginx configuration. Without a writable mount, custom port configurations cannot be applied, so the service may be exposed on an unintended port or fall back to the default.
## Why You're Seeing This Issue
This occurs when you set a custom PORT environment variable (other than 20211) but haven't provided a writable mount for nginx configuration. The container needs to write custom nginx config files when running in read-only mode.
## How to Correct the Issue
If you want to use a custom port, create a bind mount for the nginx configuration:
- Create a directory on your host: `mkdir -p /path/to/nginx-config`
- Add to your docker-compose.yml:
```yaml
volumes:
- /path/to/nginx-config:/tmp/nginx/active-config
environment:
- PORT=your_custom_port
```
- Ensure it's owned by the netalertx user: `chown -R 20211:20211 /path/to/nginx-config`
- Set permissions: `chmod -R 700 /path/to/nginx-config`
If you don't need a custom port, simply omit the PORT environment variable and the container will use 20211 by default.
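Once the container is recreated, a quick check can confirm the custom port is active; port 8080 below is a placeholder for whatever PORT value you chose:
```bash
# The web UI should respond on the custom port...
curl -I http://localhost:8080
# ...while the default port should no longer answer (unless the container fell back to it)
curl -I http://localhost:20211
```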
## Additional Resources
Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.
For detailed Docker Compose configuration guidance, see: [DOCKER_COMPOSE.md](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_COMPOSE.md)


@@ -0,0 +1,86 @@
# Port Conflicts
## Issue Description
The configured application port (default 20211) or GraphQL API port (default 20212) is already in use by another service. This commonly occurs when you already have another NetAlertX instance running.
## Security Ramifications
Port conflicts prevent the application from starting properly, leaving network monitoring services unavailable. Running multiple instances on the same ports can also create configuration confusion and potential security issues if services are inadvertently exposed.
## Why You're Seeing This Issue
This error typically occurs when:
- **You already have NetAlertX running** - Another Docker container or devcontainer instance is using the default ports 20211 and 20212
- **Port conflicts with other services** - Other applications on your system are using these ports
- **Configuration error** - Both PORT and GRAPHQL_PORT environment variables are set to the same value
## How to Correct the Issue
### Check for Existing NetAlertX Instances
First, check if you already have NetAlertX running:
```bash
# Check for running NetAlertX containers
docker ps | grep netalertx
# Check for devcontainer processes
ps aux | grep netalertx
# Check what services are using the ports
netstat -tlnp | grep :20211
netstat -tlnp | grep :20212
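# ss (from iproute2) is an alternative when netstat is not installed
ss -tlnp | grep -E ':2021[12]'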
```
### Stop Conflicting Instances
If you find another NetAlertX instance:
```bash
# Stop specific container
docker stop <container_name>
# Stop all NetAlertX containers
docker stop $(docker ps -q --filter ancestor=jokob-sk/netalertx)
# Stop devcontainer services
# Use VS Code command palette: "Dev Containers: Rebuild Container"
```
### Configure Different Ports
If you need multiple instances, configure unique ports:
```yaml
environment:
- PORT=20211 # Main application port
- GRAPHQL_PORT=20212 # GraphQL API port
```
For a second instance, use different ports:
```yaml
environment:
- PORT=20213 # Different main port
- GRAPHQL_PORT=20214 # Different API port
```
### Alternative: Use Different Container Names
When running multiple instances, use unique container names:
```yaml
services:
netalertx-primary:
# ... existing config
netalertx-secondary:
# ... config with different ports
```
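A sketch of how the two services above might differ, assuming both use host networking and therefore need distinct ports; the image reference and port values are illustrative:
```yaml
services:
  netalertx-primary:
    image: ghcr.io/jokob-sk/netalertx:latest
    network_mode: host
    environment:
      - PORT=20211
      - GRAPHQL_PORT=20212
    # ... plus its own config/db volumes
  netalertx-secondary:
    image: ghcr.io/jokob-sk/netalertx:latest
    network_mode: host
    environment:
      - PORT=20213
      - GRAPHQL_PORT=20214
    # ... plus its own, separate config/db volumes
```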
## Additional Resources
Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.
For detailed Docker Compose configuration guidance, see: [DOCKER_COMPOSE.md](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_COMPOSE.md)

Some files were not shown because too many files have changed in this diff.