Compare commits

...

87 Commits

Author SHA1 Message Date
jokob-sk
e90fbf17d3 DOCS: Network parent
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-25 08:16:39 +11:00
jokob-sk
139447b253 BE: mylog() better code readability
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-25 07:54:17 +11:00
Jokob @NetAlertX
fa9fc2c8e3 Merge pull request #1304 from adamoutler/hadolint-fixes
Fix Hadolint Linting Issues Across Dockerfiles
2025-11-24 17:28:50 +11:00
Adam Outler
30071c6848 Merge branch 'main' into hadolint-fixes 2025-11-23 19:25:45 -05:00
Adam Outler
b0bd3c8191 fix hadolint errors 2025-11-24 00:20:42 +00:00
Jokob @NetAlertX
c753da9e15 Merge pull request #1303 from adamoutler/shell-check-fixes
ShellCheck Lint: Fix All Reported Issues in Service Scripts
2025-11-24 10:24:54 +11:00
Adam Outler
4770ee5942 undo previous change for unwritable 2025-11-23 23:19:12 +00:00
Adam Outler
5cd53bc8f9 Storage permission fix 2025-11-23 22:58:45 +00:00
Adam Outler
5e47ccc9ef Shell Check fixes 2025-11-23 22:13:01 +00:00
Jokob @NetAlertX
f5d7c0f9a0 Merge pull request #1302 from adamoutler/supercronic
Replace crond with Supercronic, improve cron logging & backend restart behavior
2025-11-24 07:29:50 +11:00
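As a rough sketch of what the Supercronic switch means in practice: Supercronic runs a standard crontab in the foreground and writes each job's output to the container's stdout/stderr, which is what enables the improved cron logging. The crontab path below is illustrative only (the actual location is set via `SYSTEM_SERVICES_CONFIG_CRON` in the Dockerfile diff further down):

```bash
# Run Supercronic in the foreground; job output goes to the container's
# stdout/stderr instead of a crond spool. Path is an assumption.
supercronic /services/config/cron/crontab
```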
Adam Outler
35b7e80be4 Remove additional "tests" from instructions. 2025-11-23 16:43:28 +00:00
Adam Outler
07eeac0a0b remove redefined variable 2025-11-23 16:38:03 +00:00
Adam Outler
240d86bf1e docker tests 2025-11-23 16:31:04 +00:00
Adam Outler
274fd50a92 Adjust healthchecks and fix docker test scripts 2025-11-23 15:56:42 +00:00
Adam Outler
bbf49c3686 Don't kill container on backend restart commanded 2025-11-23 01:27:51 +00:00
Adam Outler
e3458630ba Convert from crond to supercronic 2025-11-23 01:14:21 +00:00
Jokob @NetAlertX
2f6f1e49e9 Merge pull request #1300 from jokob-sk/linting-fixes
BE: linting fixes
2025-11-22 21:55:54 +11:00
Jokob @NetAlertX
4f5a40ffce lint and test fixes 2025-11-22 10:52:12 +00:00
jokob-sk
f5aea55b29 BE: linting fixes 5
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-22 21:30:12 +11:00
jokob-sk
e3e7e2f52e BE: linting fixes 4
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-22 21:20:46 +11:00
jokob-sk
872ac1ce0f BE: linting fixes 3
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-22 21:06:03 +11:00
jokob-sk
ebeb7a07af BE: linting fixes 2
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-22 20:43:36 +11:00
jokob-sk
5c14b34a8b BE: linting fixes
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-22 13:14:06 +11:00
jokob-sk
f0abd500d9 BE: test fixes
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-21 05:54:19 +11:00
jokob-sk
8503cb86f1 BE: test fixes
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-21 05:43:30 +11:00
jokob-sk
5f0b670a82 LNG: weblate add Japanese
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-21 05:28:43 +11:00
jokob-sk
9df814e351 BE: github action better code check
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-20 17:47:25 +11:00
jokob-sk
88509ce8c2 PLG: NMAPDEV better FAKE_MAC description
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-20 17:47:00 +11:00
Jokob @NetAlertX
995c371f48 Merge pull request #1299 from adamoutler/main
feat: docker-based testing
2025-11-20 17:40:10 +11:00
Adam Outler
aee5e04b9f fix(ci): Correct quoting in code_checks workflow (again) 2025-11-20 05:01:08 +01:00
Adam Outler
e0c96052bb fix(ci): Correct quoting in code_checks workflow 2025-11-20 04:37:35 +01:00
Adam Outler
fd5235dd0a CI Checks
Uses the new run_docker_tests.sh script which is self-contained and handles all dependencies and test execution within a Docker container. This ensures that the CI environment is consistent with the local devcontainer environment.

Fixes an issue where the job name 'test' was considered invalid. Renamed to 'docker-tests'.
Ensures that tests marked as 'feature_complete' are also excluded from the test run.
2025-11-20 04:34:59 +01:00
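The 'feature_complete' exclusion mentioned above would, assuming a standard pytest marker, look something like this (the exact mechanism is not shown in this diff):

```bash
# Run the suite while skipping tests marked 'feature_complete'
# (assumed pytest marker; illustrative only):
pytest -m "not feature_complete" test/
```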
Adam Outler
f3de66a287 feat: Add run_docker_tests.sh for CI/CD and local testing
Introduces a comprehensive script to build, run, and test NetAlertX within a Dockerized devcontainer environment, replicating the setup defined in . This script ensures consistency for CI/CD pipelines and local development.

The script addresses several environmental challenges:
- Properly builds the  Docker image.
- Starts the container with necessary capabilities and host-gateway.
- Installs Python test dependencies (, , ) into the virtual environment.
- Executes the  script to initialize services.
- Implements a healthcheck loop to wait for services to become fully operational before running tests.
- Configures  to use a writable cache directory () to avoid permission issues.
- Includes a workaround to insert a dummy 'internet' device into the database, resolving flakiness in  caused by its reliance on unpredictable database state, without altering any project code.

This script ensures a green test suite, making it suitable for automated testing in environments like GitHub Actions.
2025-11-20 04:19:30 +01:00
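The healthcheck loop described above could be approximated as follows; the endpoint, retry count, and interval are assumptions, since the script body is not part of this diff:

```bash
# Poll the web UI (default port 20211) until it responds, then proceed.
# Endpoint and limits are illustrative, not taken from run_docker_tests.sh.
tries=0
until curl -fsS http://localhost:20211/ >/dev/null 2>&1; do
  tries=$((tries + 1))
  if [ "$tries" -ge 60 ]; then
    echo "Services did not become healthy in time" >&2
    exit 1
  fi
  sleep 2
done
echo "Services are up; running tests."
```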
Jokob @NetAlertX
9a4fb35ea5 Refine labels and descriptions in issue template
Updated labels and descriptions for issue template fields to improve clarity and formatting.
2025-11-18 13:59:34 +11:00
Jokob @NetAlertX
a1ad904042 Enhance issue template with Docker logs instructions
Added instructions for pasting Docker logs in the issue template.
2025-11-18 13:54:59 +11:00
Jokob @NetAlertX
81ff1da756 Merge pull request #1289 from adamoutler/more-code-checks
Improve CI code checks (URL path, Python syntax, linting, tests)
2025-11-18 13:43:07 +11:00
Jokob @NetAlertX
85c9b0b99b Merge pull request #1296 from adamoutler/patch-6
Update Docker Compose documentation for volume usage
2025-11-18 13:42:27 +11:00
Adam Outler
4ccac66a73 Update Docker Compose documentation for volume usage
Clarified the preferred volume layout for NetAlertX and explained the bind mount alternative.
2025-11-17 18:31:37 -05:00
Jokob @NetAlertX
c7b9fdaff2 Merge pull request #1291 from adamoutler/test-fixes
Test fixes
2025-11-18 09:47:21 +11:00
Jokob @NetAlertX
c7dcc20a1d Merge pull request #1295 from adamoutler/main
Add VERSION file creation
2025-11-18 09:46:39 +11:00
Adam Outler
bb365a5e81 UID 20212 for read only before definition. 2025-11-17 20:57:18 +00:00
Adam Outler
e2633d0251 Update from docker v3 to v6 2025-11-17 20:54:18 +00:00
Adam Outler
09c40e76b2 No git in Dockerfile generation. 2025-11-17 20:47:11 +00:00
Adam Outler
abc3e71440 Remove redundant chown; read only version. 2025-11-17 20:45:52 +00:00
Adam Outler
d13596c35c Coderabbit suggestion 2025-11-17 20:27:27 +00:00
Adam Outler
7d5dcf061c Add VERSION file creation 2025-11-17 15:18:41 -05:00
Adam Outler
6206e483a9 Remove files that shouldn't be in PR: db.php, cron files 2025-11-17 02:57:42 +00:00
Adam Outler
f1ecc61de3 Tests Passing 2025-11-17 02:45:42 +00:00
Jokob @NetAlertX
92a6a3a916 Merge pull request #1290 from adamoutler/get-version-out-of-commits
Add .VERSION to gitignore
2025-11-17 13:35:24 +11:00
Adam Outler
8a89f3b340 Remove VERSION file from repo and generate dynamic 2025-11-17 02:18:00 +00:00
Adam Outler
a93e87493f Update Python setup action to version 5 2025-11-16 20:33:53 -05:00
Adam Outler
c7032bceba Upgrade Python setup action and dependencies
Updated Python setup action to version 5 and specified Python version 3.11. Also modified dependencies installation to include pyyaml.
2025-11-16 20:32:08 -05:00
Adam Outler
0cd7528284 Fix cron restart 2025-11-17 00:20:08 +00:00
Adam Outler
2309b8eb3f Add linting and testing steps to workflow 2025-11-16 18:58:20 -05:00
jokob-sk
dbd1bdabc2 PLG: NMAP make param handling more robust #1288
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-16 10:16:23 +11:00
jokob-sk
093d595fc5 DOCS: path cleanup, TZ removal
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-16 09:26:18 +11:00
jokob-sk
c38758d61a PLG: PIHOLEAPI skipping invalid macs #1282
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-15 13:48:18 +11:00
jokob-sk
6034b12af6 FE: better isBase64 check
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-15 13:36:50 +11:00
jokob-sk
972654dc78 PLG: PIHOLEAPI #1282
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-15 13:36:22 +11:00
jokob-sk
ec417b0dac BE: REMOVAL dev workflow
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-14 22:33:42 +11:00
jokob-sk
2e9352dc12 BE: dev workflow
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-14 22:29:32 +11:00
Jokob @NetAlertX
566b263d0a Run Unit tests in GitHub workflows 2025-11-14 11:22:58 +00:00
Jokob @NetAlertX
61b42b4fea BE: Fixed or removed failing tests - can be re-added later 2025-11-14 11:18:56 +00:00
Jokob @NetAlertX
a45de018fb BE: Test fixes 2025-11-14 10:46:35 +00:00
Jokob @NetAlertX
bfe6987867 BE: before_name_updates change #1251 2025-11-14 10:07:47 +00:00
jokob-sk
b6567ab5fc BE: NEWDEV setting to disable IP match for names
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-13 20:22:34 +11:00
jokob-sk
f71c2fbe94 Merge branch 'main' of https://github.com/jokob-sk/NetAlertX 2025-11-13 18:29:22 +11:00
Jokob @NetAlertX
aeb03f50ba Merge pull request #1287 from adamoutler/main
Add missing .VERSION file
2025-11-13 13:26:49 +11:00
Adam Outler
734db423ee Add missing .VERSION file 2025-11-13 00:35:06 +00:00
Jokob @NetAlertX
4f47dbfe14 Merge pull request #1286 from adamoutler/port-fixes
Fix: Fix for ports
2025-11-13 08:23:46 +11:00
Adam Outler
d23bf45310 Merge branch 'jokob-sk:main' into port-fixes 2025-11-12 15:02:36 -05:00
Adam Outler
9c366881f1 Fix for ports 2025-11-12 12:02:31 +00:00
jokob-sk
9dd482618b DOCS: MTSCAN - mikrotik missing from docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-12 21:07:51 +11:00
HAMAD ABDULLA
84cc01566d Translated using Weblate (Arabic)
Currently translated at 88.0% (671 of 762 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/ar/
2025-11-11 20:51:21 +00:00
jokob-sk
ac7b912b45 BE: link to server in reports #1267, new /tmp/api path for SYNC plugin
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-11 23:33:57 +11:00
jokob-sk
62852f1b2f BE: link to server in reports #1267
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-11 23:18:20 +11:00
jokob-sk
b659a0f06d BE: link to server in reports #1267
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-11 23:09:28 +11:00
jokob-sk
fb3620a378 BE: Better upgrade message formatting
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-11 22:31:58 +11:00
jokob-sk
9d56e13818 FE: handling devName as number in network map #1281
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-11 08:16:36 +11:00
jokob-sk
43c5a11271 BE: dev workflow
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-11 07:53:19 +11:00
Jokob @NetAlertX
ac957ce599 Merge pull request #1271 from jokob-sk/next_release
Next release
2025-11-11 07:43:09 +11:00
jokob-sk
3567906fcd DOCS: migration docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-10 15:43:03 +11:00
jokob-sk
be6801d98f DOCS: migration docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-10 15:41:28 +11:00
jokob-sk
bb9b242d0a BE: fixing imports
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-10 13:20:11 +11:00
jokob-sk
5f27d3b9aa BE: fixing imports
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-10 12:47:21 +11:00
jokob-sk
93af0e9d19 BE: fixing imports
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-10 12:45:06 +11:00
Jokob @NetAlertX
398e2a896f Merge pull request #1280 from jokob-sk/pr-1279
Pr 1279
2025-11-10 10:15:46 +11:00
187 changed files with 7242 additions and 4834 deletions

View File

@@ -35,7 +35,7 @@ RUN apk add --no-cache bash shadow python3 python3-dev gcc musl-dev libffi-dev o
# Create virtual environment owned by root, but readable by everyone else. This makes it easy to copy
# into hardened stage without worrying about permissions and keeps image size small. Keeping the commands
# together makes for a slightly smaller image size.
RUN pip install -r /tmp/requirements.txt && \
RUN pip install --no-cache-dir -r /tmp/requirements.txt && \
chmod -R u-rwx,g-rwx /opt
# second stage is the main runtime stage with just the minimum required to run the application
@@ -71,7 +71,7 @@ ENV LOG_APP_PHP_ERRORS=${NETALERTX_LOG}/app.php_errors.log
ENV LOG_EXECUTION_QUEUE=${NETALERTX_LOG}/execution_queue.log
ENV LOG_REPORT_OUTPUT_JSON=${NETALERTX_LOG}/report_output.json
ENV LOG_STDOUT=${NETALERTX_LOG}/stdout.log
ENV LOG_CROND=${NETALERTX_LOG}/crond.log
ENV LOG_CRON=${NETALERTX_LOG}/cron.log
ENV LOG_NGINX_ERROR=${NETALERTX_LOG}/nginx-error.log
# System Services configuration files
@@ -80,11 +80,12 @@ ENV SYSTEM_SERVICES=/services
ENV SYSTEM_SERVICES_SCRIPTS=${SYSTEM_SERVICES}/scripts
ENV SYSTEM_SERVICES_CONFIG=${SYSTEM_SERVICES}/config
ENV SYSTEM_NGINX_CONFIG=${SYSTEM_SERVICES_CONFIG}/nginx
ENV SYSTEM_NGINX_CONFIG_FILE=${SYSTEM_NGINX_CONFIG}/nginx.conf
ENV SYSTEM_NGINX_CONFIG_TEMPLATE=${SYSTEM_NGINX_CONFIG}/netalertx.conf.template
ENV SYSTEM_SERVICES_CONFIG_CRON=${SYSTEM_SERVICES_CONFIG}/cron
ENV SYSTEM_SERVICES_ACTIVE_CONFIG=/tmp/nginx/active-config
ENV SYSTEM_SERVICES_ACTIVE_CONFIG_FILE=${SYSTEM_SERVICES_ACTIVE_CONFIG}/nginx.conf
ENV SYSTEM_SERVICES_PHP_FOLDER=${SYSTEM_SERVICES_CONFIG}/php
ENV SYSTEM_SERVICES_PHP_FPM_D=${SYSTEM_SERVICES_PHP_FOLDER}/php-fpm.d
ENV SYSTEM_SERVICES_CROND=${SYSTEM_SERVICES_CONFIG}/crond
ENV SYSTEM_SERVICES_RUN=/tmp/run
ENV SYSTEM_SERVICES_RUN_TMP=${SYSTEM_SERVICES_RUN}/tmp
ENV SYSTEM_SERVICES_RUN_LOG=${SYSTEM_SERVICES_RUN}/logs
@@ -118,7 +119,7 @@ ENV LANG=C.UTF-8
RUN apk add --no-cache bash mtr libbsd zip lsblk tzdata curl arp-scan iproute2 iproute2-ss nmap \
nmap-scripts traceroute nbtscan net-tools net-snmp-tools bind-tools awake ca-certificates \
sqlite php83 php83-fpm php83-cgi php83-curl php83-sqlite3 php83-session python3 envsubst \
nginx shadow && \
nginx supercronic shadow && \
rm -Rf /var/cache/apk/* && \
rm -Rf /etc/nginx && \
addgroup -g 20211 ${NETALERTX_GROUP} && \
@@ -138,6 +139,9 @@ RUN install -d -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} -m 700 ${READ_WRITE_FO
sh -c "find ${NETALERTX_APP} -type f \( -name '*.sh' -o -name 'speedtest-cli' \) \
-exec chmod 750 {} \;"
# Copy version information into the image
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} .[V]ERSION ${NETALERTX_APP}/.VERSION
# Copy the virtualenv from the builder stage
COPY --from=builder --chown=20212:20212 ${VIRTUAL_ENV} ${VIRTUAL_ENV}
@@ -146,20 +150,26 @@ COPY --from=builder --chown=20212:20212 ${VIRTUAL_ENV} ${VIRTUAL_ENV}
# This is done after the copy of the venv to ensure the venv is in place
# although it may be quicker to do it before the copy, it keeps the image
# layers smaller to do it after.
RUN apk add libcap && \
RUN if [ -f '.VERSION' ]; then \
cp '.VERSION' "${NETALERTX_APP}/.VERSION"; \
else \
echo "DEVELOPMENT 00000000" > "${NETALERTX_APP}/.VERSION"; \
fi && \
chown 20212:20212 "${NETALERTX_APP}/.VERSION" && \
apk add --no-cache libcap && \
setcap cap_net_raw+ep /bin/busybox && \
setcap cap_net_raw,cap_net_admin+eip /usr/bin/nmap && \
setcap cap_net_raw,cap_net_admin+eip /usr/bin/arp-scan && \
setcap cap_net_raw,cap_net_admin,cap_net_bind_service+eip /usr/bin/nbtscan && \
setcap cap_net_raw,cap_net_admin+eip /usr/bin/traceroute && \
setcap cap_net_raw,cap_net_admin+eip $(readlink -f ${VIRTUAL_ENV_BIN}/python) && \
setcap cap_net_raw,cap_net_admin+eip "$(readlink -f ${VIRTUAL_ENV_BIN}/python)" && \
/bin/sh /build/init-nginx.sh && \
/bin/sh /build/init-php-fpm.sh && \
/bin/sh /build/init-crond.sh && \
/bin/sh /build/init-cron.sh && \
/bin/sh /build/init-backend.sh && \
rm -rf /build && \
apk del libcap && \
date +%s > ${NETALERTX_FRONT}/buildtimestamp.txt
date +%s > "${NETALERTX_FRONT}/buildtimestamp.txt"
ENTRYPOINT ["/bin/sh","/entrypoint.sh"]
@@ -176,13 +186,15 @@ ENV UMASK=0077
# AI may claim this is stupid, but it's actually least possible permissions as
# read-only user cannot login, cannot sudo, has no write permission, and cannot even
# read the files it owns. The read-only user is ownership-as-a-lock hardening pattern.
RUN addgroup -g 20212 ${READ_ONLY_GROUP} && \
adduser -u 20212 -G ${READ_ONLY_GROUP} -D -h /app ${READ_ONLY_USER}
RUN addgroup -g 20212 "${READ_ONLY_GROUP}" && \
adduser -u 20212 -G "${READ_ONLY_GROUP}" -D -h /app "${READ_ONLY_USER}"
# reduce permissions to minimum necessary for all NetAlertX files and folders
# Permissions 005 and 004 are not typos, they enable read-only. Everyone can
# read the read-only files, and nobody can write to them, even the readonly user.
# hadolint ignore=SC2114
RUN chown -R ${READ_ONLY_USER}:${READ_ONLY_GROUP} ${READ_ONLY_FOLDERS} && \
chmod -R 004 ${READ_ONLY_FOLDERS} && \
find ${READ_ONLY_FOLDERS} -type d -exec chmod 005 {} + && \
@@ -201,7 +213,7 @@ RUN chown -R ${READ_ONLY_USER}:${READ_ONLY_GROUP} ${READ_ONLY_FOLDERS} && \
/srv /media && \
sed -i "/^\(${READ_ONLY_USER}\|${NETALERTX_USER}\):/!d" /etc/passwd && \
sed -i "/^\(${READ_ONLY_GROUP}\|${NETALERTX_GROUP}\):/!d" /etc/group && \
echo -ne '#!/bin/sh\n"$@"\n' > /usr/bin/sudo && chmod +x /usr/bin/sudo
printf '#!/bin/sh\n"$@"\n' > /usr/bin/sudo && chmod +x /usr/bin/sudo
USER netalertx
@@ -220,6 +232,7 @@ HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
# Open and wide to avoid permission issues during development allowing max
# flexibility.
# hadolint ignore=DL3006
FROM runner AS netalertx-devcontainer
ENV INSTALL_DIR=/app
@@ -233,9 +246,14 @@ ENV PYDEVD_DISABLE_FILE_VALIDATION=1
COPY .devcontainer/resources/devcontainer-overlay/ /
USER root
# Install common tools, create user, and set up sudo
RUN apk add --no-cache git nano vim jq php83-pecl-xdebug py3-pip nodejs sudo gpgconf pytest \
pytest-cov zsh alpine-zsh-config shfmt github-cli py3-yaml py3-docker-py docker-cli docker-cli-buildx \
docker-cli-compose
docker-cli-compose shellcheck
# Install hadolint (Dockerfile linter)
RUN curl -L https://github.com/hadolint/hadolint/releases/latest/download/hadolint-Linux-x86_64 -o /usr/local/bin/hadolint && \
chmod +x /usr/local/bin/hadolint
RUN install -d -o netalertx -g netalertx -m 755 /services/php/modules && \
cp -a /usr/lib/php83/modules/. /services/php/modules/ && \

View File

@@ -75,7 +75,9 @@
"alexcvzz.vscode-sqlite",
"mkhl.shfmt",
"charliermarsh.ruff",
"ms-python.flake8"
"ms-python.flake8",
"exiasr.hadolint",
"timonwong.shellcheck"
],
"settings": {
"terminal.integrated.cwd": "${containerWorkspaceFolder}",

View File

@@ -7,6 +7,7 @@
# Open and wide to avoid permission issues during development allowing max
# flexibility.
# hadolint ignore=DL3006
FROM runner AS netalertx-devcontainer
ENV INSTALL_DIR=/app
@@ -20,9 +21,14 @@ ENV PYDEVD_DISABLE_FILE_VALIDATION=1
COPY .devcontainer/resources/devcontainer-overlay/ /
USER root
# Install common tools, create user, and set up sudo
RUN apk add --no-cache git nano vim jq php83-pecl-xdebug py3-pip nodejs sudo gpgconf pytest \
pytest-cov zsh alpine-zsh-config shfmt github-cli py3-yaml py3-docker-py docker-cli docker-cli-buildx \
docker-cli-compose
docker-cli-compose shellcheck
# Install hadolint (Dockerfile linter)
RUN curl -L https://github.com/hadolint/hadolint/releases/latest/download/hadolint-Linux-x86_64 -o /usr/local/bin/hadolint && \
chmod +x /usr/local/bin/hadolint
RUN install -d -o netalertx -g netalertx -m 755 /services/php/modules && \
cp -a /usr/lib/php83/modules/. /services/php/modules/ && \

View File

@@ -1,118 +0,0 @@
# DO NOT MODIFY THIS FILE DIRECTLY. IT IS AUTO-GENERATED BY .devcontainer/scripts/generate-configs.sh
# Generated from: install/production-filesystem/services/config/nginx/netalertx.conf.template
# Set number of worker processes automatically based on number of CPU cores.
worker_processes auto;
# Enables the use of JIT for regular expressions to speed-up their processing.
pcre_jit on;
# Configures default error logger.
error_log /tmp/log/nginx-error.log warn;
pid /tmp/run/nginx.pid;
events {
# The maximum number of simultaneous connections that can be opened by
# a worker process.
worker_connections 1024;
}
http {
# Mapping of temp paths for various nginx modules.
client_body_temp_path /tmp/nginx/client_body;
proxy_temp_path /tmp/nginx/proxy;
fastcgi_temp_path /tmp/nginx/fastcgi;
uwsgi_temp_path /tmp/nginx/uwsgi;
scgi_temp_path /tmp/nginx/scgi;
# Includes mapping of file name extensions to MIME types of responses
# and defines the default type.
include /services/config/nginx/mime.types;
default_type application/octet-stream;
# Name servers used to resolve names of upstream servers into addresses.
# It's also needed when using tcpsocket and udpsocket in Lua modules.
#resolver 1.1.1.1 1.0.0.1 [2606:4700:4700::1111] [2606:4700:4700::1001];
# Don't tell nginx version to the clients. Default is 'on'.
server_tokens off;
# Specifies the maximum accepted body size of a client request, as
# indicated by the request header Content-Length. If the stated content
# length is greater than this size, then the client receives the HTTP
# error code 413. Set to 0 to disable. Default is '1m'.
client_max_body_size 1m;
# Sendfile copies data between one FD and other from within the kernel,
# which is more efficient than read() + write(). Default is off.
sendfile on;
# Causes nginx to attempt to send its HTTP response head in one packet,
# instead of using partial frames. Default is 'off'.
tcp_nopush on;
# Enables the specified protocols. Default is TLSv1 TLSv1.1 TLSv1.2.
# TIP: If you're not obligated to support ancient clients, remove TLSv1.1.
ssl_protocols TLSv1.2 TLSv1.3;
# Path of the file with Diffie-Hellman parameters for EDH ciphers.
# TIP: Generate with: `openssl dhparam -out /etc/ssl/nginx/dh2048.pem 2048`
#ssl_dhparam /etc/ssl/nginx/dh2048.pem;
# Specifies that our cipher suits should be preferred over client ciphers.
# Default is 'off'.
ssl_prefer_server_ciphers on;
# Enables a shared SSL cache with size that can hold around 8000 sessions.
# Default is 'none'.
ssl_session_cache shared:SSL:2m;
# Specifies a time during which a client may reuse the session parameters.
# Default is '5m'.
ssl_session_timeout 1h;
# Disable TLS session tickets (they are insecure). Default is 'on'.
ssl_session_tickets off;
# Enable gzipping of responses.
gzip on;
# Set the Vary HTTP header as defined in the RFC 2616. Default is 'off'.
gzip_vary on;
# Specifies the main log format.
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
# Sets the path, format, and configuration for a buffered log write.
access_log /tmp/log/nginx-access.log main;
# Virtual host config
server {
listen 0.0.0.0:20211 default_server;
large_client_header_buffers 4 16k;
root /app/front;
index index.php;
add_header X-Forwarded-Prefix "/app" always;
location ~* \.php$ {
# Set Cache-Control header to prevent caching on the first load
add_header Cache-Control "no-store";
fastcgi_pass unix:/tmp/run/php.sock;
include /services/config/nginx/fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param PHP_VALUE "xdebug.remote_enable=1";
fastcgi_connect_timeout 75;
fastcgi_send_timeout 600;
fastcgi_read_timeout 600;
}
}
}

View File

@@ -7,56 +7,28 @@
# the final .devcontainer/Dockerfile used by the devcontainer.
echo "Generating .devcontainer/Dockerfile"
SCRIPT_DIR="$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)"
SCRIPT_PATH=$(set -- "$0"; dirname -- "$1")
SCRIPT_DIR=$(cd "$SCRIPT_PATH" && pwd -P)
DEVCONTAINER_DIR="${SCRIPT_DIR%/scripts}"
ROOT_DIR="${DEVCONTAINER_DIR%/.devcontainer}"
OUT_FILE="${DEVCONTAINER_DIR}/Dockerfile"
echo "Adding base Dockerfile from $ROOT_DIR..."
echo "Adding base Dockerfile from $ROOT_DIR and merging to devcontainer-Dockerfile"
{
echo "# DO NOT MODIFY THIS FILE DIRECTLY. IT IS AUTO-GENERATED BY .devcontainer/scripts/generate-configs.sh" > "$OUT_FILE"
echo "" >> "$OUT_FILE"
echo "# ---/Dockerfile---" >> "$OUT_FILE"
echo "# DO NOT MODIFY THIS FILE DIRECTLY. IT IS AUTO-GENERATED BY .devcontainer/scripts/generate-configs.sh"
echo ""
echo "# ---/Dockerfile---"
cat "${ROOT_DIR}/Dockerfile" >> "$OUT_FILE"
cat "${ROOT_DIR}/Dockerfile"
echo "" >> "$OUT_FILE"
echo "# ---/resources/devcontainer-Dockerfile---" >> "$OUT_FILE"
echo "" >> "$OUT_FILE"
echo ""
echo "# ---/resources/devcontainer-Dockerfile---"
echo ""
cat "${DEVCONTAINER_DIR}/resources/devcontainer-Dockerfile"
} > "$OUT_FILE"
echo "Adding devcontainer-Dockerfile from $DEVCONTAINER_DIR/resources..."
cat "${DEVCONTAINER_DIR}/resources/devcontainer-Dockerfile" >> "$OUT_FILE"
echo "Generated $OUT_FILE using root dir $ROOT_DIR" >&2
# Generate devcontainer nginx config from production template
echo "Generating devcontainer nginx config"
NGINX_TEMPLATE="${ROOT_DIR}/install/production-filesystem/services/config/nginx/netalertx.conf.template"
NGINX_OUT="${DEVCONTAINER_DIR}/resources/devcontainer-overlay/services/config/nginx/netalertx.conf.template"
# Create output directory if it doesn't exist
mkdir -p "$(dirname "$NGINX_OUT")"
# Start with header comment
cat > "$NGINX_OUT" << 'EOF'
# DO NOT MODIFY THIS FILE DIRECTLY. IT IS AUTO-GENERATED BY .devcontainer/scripts/generate-configs.sh
# Generated from: install/production-filesystem/services/config/nginx/netalertx.conf.template
EOF
# Process the template: replace listen directive and inject Xdebug params
sed 's/${LISTEN_ADDR}:${PORT}/0.0.0.0:20211/g' "$NGINX_TEMPLATE" | \
awk '
/fastcgi_param SCRIPT_NAME \$fastcgi_script_name;/ {
print $0
print ""
print " fastcgi_param PHP_VALUE \"xdebug.remote_enable=1\";"
next
}
{ print }
' >> "$NGINX_OUT"
echo "Generated $NGINX_OUT from $NGINX_TEMPLATE" >&2
echo "Generated $OUT_FILE using root dir $ROOT_DIR"
echo "Done."

View File

@@ -16,7 +16,6 @@
SOURCE_DIR=${SOURCE_DIR:-/workspaces/NetAlertX}
PY_SITE_PACKAGES="${VIRTUAL_ENV:-/opt/venv}/lib/python3.12/site-packages"
SOURCE_SERVICES_DIR="${SOURCE_DIR}/install/production-filesystem/services"
LOG_FILES=(
LOG_APP
@@ -26,7 +25,7 @@ LOG_FILES=(
LOG_EXECUTION_QUEUE
LOG_APP_PHP_ERRORS
LOG_IP_CHANGES
LOG_CROND
LOG_CRON
LOG_REPORT_OUTPUT_TXT
LOG_REPORT_OUTPUT_HTML
LOG_REPORT_OUTPUT_JSON
@@ -50,9 +49,6 @@ sudo chmod 777 /tmp/log /tmp/api /tmp/run /tmp/nginx
sudo rm -rf "${SYSTEM_NGINX_CONFIG}/conf.active"
sudo ln -s "${SYSTEM_SERVICES_ACTIVE_CONFIG}" "${SYSTEM_NGINX_CONFIG}/conf.active"
sudo rm -rf /entrypoint.d
sudo ln -s "${SOURCE_DIR}/install/production-filesystem/entrypoint.d" /entrypoint.d
@@ -67,6 +63,7 @@ for dir in \
"${SYSTEM_SERVICES_RUN_LOG}" \
"${SYSTEM_SERVICES_ACTIVE_CONFIG}" \
"${NETALERTX_PLUGINS_LOG}" \
"${SYSTEM_SERVICES_RUN_TMP}" \
"/tmp/nginx/client_body" \
"/tmp/nginx/proxy" \
"/tmp/nginx/fastcgi" \
@@ -75,9 +72,6 @@ for dir in \
sudo install -d -m 777 "${dir}"
done
# Create nginx temp subdirs with permissions
sudo mkdir -p "${SYSTEM_SERVICES_RUN_TMP}/client_body" "${SYSTEM_SERVICES_RUN_TMP}/proxy" "${SYSTEM_SERVICES_RUN_TMP}/fastcgi" "${SYSTEM_SERVICES_RUN_TMP}/uwsgi" "${SYSTEM_SERVICES_RUN_TMP}/scgi"
sudo chmod -R 777 "${SYSTEM_SERVICES_RUN_TMP}"
for var in "${LOG_FILES[@]}"; do
path=${!var}

View File

@@ -44,7 +44,7 @@ body:
required: false
- type: textarea
attributes:
label: app.conf
label: Relevant `app.conf` settings
description: |
Paste relevant `app.conf` settings (remove sensitive info)
render: python
@@ -55,7 +55,7 @@ body:
label: docker-compose.yml
description: |
Paste your `docker-compose.yml`
render: python
render: yaml
validations:
required: false
- type: dropdown
@@ -79,7 +79,11 @@ body:
required: true
- type: textarea
attributes:
label: app.log
label: Relevant `app.log` section
value: |
```
PASTE LOG HERE. Using the triple backticks preserves format.
```
description: |
Logs with debug enabled (https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEBUG_TIPS.md) ⚠
***Generally speaking, all bug reports should have logs provided.***
@@ -93,6 +97,10 @@ body:
label: Docker Logs
description: |
You can retrieve the logs from Portainer -> Containers -> your NetAlertX container -> Logs or by running `sudo docker logs netalertx`.
value: |
```
PASTE DOCKER LOG HERE. Using the triple backticks preserves format.
```
validations:
required: true

View File

@@ -83,3 +83,9 @@ Backend loop phases (see `server/__main__.py` and `server/plugin.py`): `once`, `
- Be sure to offer choices when appropriate.
- Always understand the intent of the user's request and undo/redo as needed.
- Above all, use the simplest possible code that meets the need so it can be easily audited and maintained.
- Always leave logging enabled. If there is a possibility it will be difficult to debug with current logging, add more logging.
- Always run the testFailure tool before executing any tests to gather current failure information and avoid redundant runs.
- Always prioritize using the appropriate tools in the environment first. For example, if a test is failing, use `testFailure` then `runTests`. Never `runTests` first.
- Docker tests take an extremely long time to run. Avoid changes to docker or tests until you've examined the existing testFailure and runTests results.
- Environment tools are designed specifically for your use in this project and running them in this order will give you the best results.

View File

@@ -21,7 +21,8 @@ jobs:
run: |
echo "🔍 Checking for incorrect absolute '/php/' URLs (should be 'php/' or './php/')..."
MATCHES=$(grep -rE "['\"]\/php\/" --include=\*.{js,php,html} ./front | grep -E "\.get|\.post|\.ajax|fetch|url\s*:") || true
MATCHES=$(grep -rE "['\"]/php/" --include=\*.{js,php,html} ./front \
| grep -E "\.get|\.post|\.ajax|fetch|url\s*:") || true
if [ -n "$MATCHES" ]; then
echo "$MATCHES"
@@ -39,3 +40,60 @@ jobs:
echo "🔍 Checking Python syntax..."
find . -name "*.py" -print0 | xargs -0 -n1 python3 -m py_compile
lint:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Install linting tools
run: |
# Python linting
pip install flake8
# Docker linting
wget -O /tmp/hadolint https://github.com/hadolint/hadolint/releases/latest/download/hadolint-Linux-x86_64
chmod +x /tmp/hadolint
# PHP and shellcheck for syntax checking
sudo apt-get update && sudo apt-get install -y php-cli shellcheck
- name: Shell check
continue-on-error: true
run: |
echo "🔍 Checking shell scripts..."
find . -name "*.sh" -exec shellcheck {} \;
- name: Python lint
continue-on-error: true
run: |
echo "🔍 Linting Python code..."
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: PHP check
continue-on-error: true
run: |
echo "🔍 Checking PHP syntax..."
find . -name "*.php" -exec php -l {} \;
- name: Docker lint
continue-on-error: true
run: |
echo "🔍 Linting Dockerfiles..."
/tmp/hadolint --config .hadolint.yaml Dockerfile* || true
docker-tests:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run Docker-based tests
run: |
echo "🐳 Running Docker-based tests..."
chmod +x ./test/docker_tests/run_docker_tests.sh
./test/docker_tests/run_docker_tests.sh

View File

@@ -3,12 +3,12 @@ name: docker
on:
push:
branches:
- next_release
- main
tags:
- '*.*.*'
pull_request:
branches:
- next_release
- main
jobs:
docker_dev:
@@ -83,7 +83,7 @@ jobs:
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v3
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6

View File

@@ -72,7 +72,7 @@ jobs:
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v3
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6

.hadolint.yaml Normal file
View File

@@ -0,0 +1,2 @@
ignored:
- DL3018
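DL3018 is hadolint's "pin versions in apk add" rule; ignoring it matches the unpinned `apk add --no-cache` calls used throughout these Dockerfiles. With this config in place, the CI lint step boils down to:

```bash
# Lint every Dockerfile using the repository config (mirrors the
# 'Docker lint' step in the workflow above):
hadolint --config .hadolint.yaml Dockerfile*
```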

View File

@@ -32,7 +32,7 @@ RUN apk add --no-cache bash shadow python3 python3-dev gcc musl-dev libffi-dev o
# Create virtual environment owned by root, but readable by everyone else. This makes it easy to copy
# into hardened stage without worrying about permissions and keeps image size small. Keeping the commands
# together makes for a slightly smaller image size.
RUN pip install -r /tmp/requirements.txt && \
RUN pip install --no-cache-dir -r /tmp/requirements.txt && \
chmod -R u-rwx,g-rwx /opt
# second stage is the main runtime stage with just the minimum required to run the application
@@ -68,7 +68,7 @@ ENV LOG_APP_PHP_ERRORS=${NETALERTX_LOG}/app.php_errors.log
ENV LOG_EXECUTION_QUEUE=${NETALERTX_LOG}/execution_queue.log
ENV LOG_REPORT_OUTPUT_JSON=${NETALERTX_LOG}/report_output.json
ENV LOG_STDOUT=${NETALERTX_LOG}/stdout.log
ENV LOG_CROND=${NETALERTX_LOG}/crond.log
ENV LOG_CRON=${NETALERTX_LOG}/cron.log
ENV LOG_NGINX_ERROR=${NETALERTX_LOG}/nginx-error.log
# System Services configuration files
@@ -77,11 +77,12 @@ ENV SYSTEM_SERVICES=/services
ENV SYSTEM_SERVICES_SCRIPTS=${SYSTEM_SERVICES}/scripts
ENV SYSTEM_SERVICES_CONFIG=${SYSTEM_SERVICES}/config
ENV SYSTEM_NGINX_CONFIG=${SYSTEM_SERVICES_CONFIG}/nginx
ENV SYSTEM_NGINX_CONFIG_FILE=${SYSTEM_NGINX_CONFIG}/nginx.conf
ENV SYSTEM_NGINX_CONFIG_TEMPLATE=${SYSTEM_NGINX_CONFIG}/netalertx.conf.template
ENV SYSTEM_SERVICES_CONFIG_CRON=${SYSTEM_SERVICES_CONFIG}/cron
ENV SYSTEM_SERVICES_ACTIVE_CONFIG=/tmp/nginx/active-config
ENV SYSTEM_SERVICES_ACTIVE_CONFIG_FILE=${SYSTEM_SERVICES_ACTIVE_CONFIG}/nginx.conf
ENV SYSTEM_SERVICES_PHP_FOLDER=${SYSTEM_SERVICES_CONFIG}/php
ENV SYSTEM_SERVICES_PHP_FPM_D=${SYSTEM_SERVICES_PHP_FOLDER}/php-fpm.d
ENV SYSTEM_SERVICES_CROND=${SYSTEM_SERVICES_CONFIG}/crond
ENV SYSTEM_SERVICES_RUN=/tmp/run
ENV SYSTEM_SERVICES_RUN_TMP=${SYSTEM_SERVICES_RUN}/tmp
ENV SYSTEM_SERVICES_RUN_LOG=${SYSTEM_SERVICES_RUN}/logs
@@ -115,7 +116,7 @@ ENV LANG=C.UTF-8
RUN apk add --no-cache bash mtr libbsd zip lsblk tzdata curl arp-scan iproute2 iproute2-ss nmap \
nmap-scripts traceroute nbtscan net-tools net-snmp-tools bind-tools awake ca-certificates \
sqlite php83 php83-fpm php83-cgi php83-curl php83-sqlite3 php83-session python3 envsubst \
nginx shadow && \
nginx supercronic shadow && \
rm -Rf /var/cache/apk/* && \
rm -Rf /etc/nginx && \
addgroup -g 20211 ${NETALERTX_GROUP} && \
@@ -136,7 +137,7 @@ RUN install -d -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} -m 700 ${READ_WRITE_FO
-exec chmod 750 {} \;"
# Copy version information into the image
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} .VERSION ${NETALERTX_APP}/.VERSION
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} .[V]ERSION ${NETALERTX_APP}/.VERSION
# Copy the virtualenv from the builder stage
COPY --from=builder --chown=20212:20212 ${VIRTUAL_ENV} ${VIRTUAL_ENV}
@@ -146,20 +147,26 @@ COPY --from=builder --chown=20212:20212 ${VIRTUAL_ENV} ${VIRTUAL_ENV}
# This is done after the copy of the venv to ensure the venv is in place
# although it may be quicker to do it before the copy, it keeps the image
# layers smaller to do it after.
RUN apk add libcap && \
RUN if [ -f '.VERSION' ]; then \
cp '.VERSION' "${NETALERTX_APP}/.VERSION"; \
else \
echo "DEVELOPMENT 00000000" > "${NETALERTX_APP}/.VERSION"; \
fi && \
chown 20212:20212 "${NETALERTX_APP}/.VERSION" && \
apk add --no-cache libcap && \
setcap cap_net_raw+ep /bin/busybox && \
setcap cap_net_raw,cap_net_admin+eip /usr/bin/nmap && \
setcap cap_net_raw,cap_net_admin+eip /usr/bin/arp-scan && \
setcap cap_net_raw,cap_net_admin,cap_net_bind_service+eip /usr/bin/nbtscan && \
setcap cap_net_raw,cap_net_admin+eip /usr/bin/traceroute && \
setcap cap_net_raw,cap_net_admin+eip $(readlink -f ${VIRTUAL_ENV_BIN}/python) && \
setcap cap_net_raw,cap_net_admin+eip "$(readlink -f ${VIRTUAL_ENV_BIN}/python)" && \
/bin/sh /build/init-nginx.sh && \
/bin/sh /build/init-php-fpm.sh && \
/bin/sh /build/init-crond.sh && \
/bin/sh /build/init-cron.sh && \
/bin/sh /build/init-backend.sh && \
rm -rf /build && \
apk del libcap && \
date +%s > ${NETALERTX_FRONT}/buildtimestamp.txt
date +%s > "${NETALERTX_FRONT}/buildtimestamp.txt"
ENTRYPOINT ["/bin/sh","/entrypoint.sh"]
@@ -176,13 +183,15 @@ ENV UMASK=0077
# AI may claim this is stupid, but it's actually least possible permissions as
# read-only user cannot login, cannot sudo, has no write permission, and cannot even
# read the files it owns. The read-only user is ownership-as-a-lock hardening pattern.
RUN addgroup -g 20212 ${READ_ONLY_GROUP} && \
adduser -u 20212 -G ${READ_ONLY_GROUP} -D -h /app ${READ_ONLY_USER}
RUN addgroup -g 20212 "${READ_ONLY_GROUP}" && \
adduser -u 20212 -G "${READ_ONLY_GROUP}" -D -h /app "${READ_ONLY_USER}"
# reduce permissions to minimum necessary for all NetAlertX files and folders
# Permissions 005 and 004 are not typos, they enable read-only. Everyone can
# read the read-only files, and nobody can write to them, even the readonly user.
# hadolint ignore=SC2114
RUN chown -R ${READ_ONLY_USER}:${READ_ONLY_GROUP} ${READ_ONLY_FOLDERS} && \
chmod -R 004 ${READ_ONLY_FOLDERS} && \
find ${READ_ONLY_FOLDERS} -type d -exec chmod 005 {} + && \
@@ -201,7 +210,7 @@ RUN chown -R ${READ_ONLY_USER}:${READ_ONLY_GROUP} ${READ_ONLY_FOLDERS} && \
/srv /media && \
sed -i "/^\(${READ_ONLY_USER}\|${NETALERTX_USER}\):/!d" /etc/passwd && \
sed -i "/^\(${READ_ONLY_GROUP}\|${NETALERTX_GROUP}\):/!d" /etc/group && \
echo -ne '#!/bin/sh\n"$@"\n' > /usr/bin/sudo && chmod +x /usr/bin/sudo
printf '#!/bin/sh\n"$@"\n' > /usr/bin/sudo && chmod +x /usr/bin/sudo
USER netalertx

View File

@@ -72,7 +72,7 @@ ENV LOG_APP_PHP_ERRORS=${NETALERTX_LOG}/app.php_errors.log
ENV LOG_EXECUTION_QUEUE=${NETALERTX_LOG}/execution_queue.log
ENV LOG_REPORT_OUTPUT_JSON=${NETALERTX_LOG}/report_output.json
ENV LOG_STDOUT=${NETALERTX_LOG}/stdout.log
ENV LOG_CROND=${NETALERTX_LOG}/crond.log
ENV LOG_CRON=${NETALERTX_LOG}/cron.log
ENV LOG_NGINX_ERROR=${NETALERTX_LOG}/nginx-error.log
# System Services configuration files
@@ -132,25 +132,29 @@ COPY --chmod=775 --chown=${USER_ID}:${USER_GID} . ${INSTALL_DIR}/
# ❗ IMPORTANT - if you modify this file modify the /install/install_dependecies.debian.sh file as well ❗
RUN apt update && apt-get install -y \
# hadolint ignore=DL3008,DL3027
RUN apt-get update && apt-get install -y --no-install-recommends \
tini snmp ca-certificates curl libwww-perl arp-scan sudo gettext-base \
nginx-light php php-cgi php-fpm php-sqlite3 php-curl sqlite3 dnsutils net-tools \
python3 python3-dev iproute2 nmap python3-pip zip git systemctl usbutils traceroute nbtscan openrc \
busybox nginx nginx-core mtr python3-venv
busybox nginx nginx-core mtr python3-venv && \
rm -rf /var/lib/apt/lists/*
# While php8.3 is in debian bookworm repos, php-fpm is not included so we need to add sury.org repo
# (Ondřej Surý maintains php packages for debian. This is temp until debian includes php-fpm in their
# repos. Likely it will be in Debian Trixie.). This keeps the image up-to-date with the alpine version.
# hadolint ignore=DL3008
RUN apt-get install -y --no-install-recommends \
apt-transport-https \
ca-certificates \
lsb-release \
wget && \
wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg && \
wget -q -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg && \
echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list && \
apt-get update && \
apt-get install -y php8.3-fpm php8.3-cli php8.3-sqlite3 php8.3-common php8.3-curl php8.3-cgi && \
ln -s /usr/sbin/php-fpm8.3 /usr/sbin/php-fpm83 # make it compatible with alpine version
apt-get install -y --no-install-recommends php8.3-fpm php8.3-cli php8.3-sqlite3 php8.3-common php8.3-curl php8.3-cgi && \
ln -s /usr/sbin/php-fpm8.3 /usr/sbin/php-fpm83 && \
rm -rf /var/lib/apt/lists/* # make it compatible with alpine version
# Setup virtual python environment and use pip3 to install packages
RUN python3 -m venv ${VIRTUAL_ENV} && \

View File

@@ -33,16 +33,21 @@ Get visibility of what's going on on your WIFI/LAN network and enable presence d
## 🚀 Quick Start
> [!WARNING]
> ⚠️ **Important:** The documentation has been recently updated and some instructions may have changed.
> If you are using the currently live production image, please follow the instructions on [Docker Hub](https://hub.docker.com/r/jokobsk/netalertx) for building and running the container.
> These docs reflect the latest development version and may differ from the production image.
Start NetAlertX in seconds with Docker:
```bash
docker run -d --rm --network=host \
-v local_path/config:/data/config \
-v local_path/db:/data/db \
-v /local_data_dir/config:/data/config \
-v /local_data_dir/db:/data/db \
-v /etc/localtime:/etc/localtime \
--mount type=tmpfs,target=/tmp/api \
-e PUID=200 -e PGID=300 \
-e TZ=Europe/Berlin \
-e PORT=20211 \
-e APP_CONF_OVERRIDE={"GRAPHQL_PORT":"20214"} \
ghcr.io/jokob-sk/netalertx:latest
```

View File

@@ -1,14 +1,17 @@
#!/bin/bash
export INSTALL_DIR=/app
LOG_FILE="${INSTALL_DIR}/log/execution_queue.log"
# Check if there are any entries with cron_restart_backend
if grep -q "cron_restart_backend" "$LOG_FILE"; then
# Restart python application using s6
s6-svc -r /var/run/s6-rc/servicedirs/netalertx
echo 'done'
if [ -f "${LOG_EXECUTION_QUEUE}" ] && grep -q "cron_restart_backend" "${LOG_EXECUTION_QUEUE}"; then
echo "$(date): Restarting backend triggered by cron_restart_backend"
killall python3 || echo "killall python3 failed or no process found"
sleep 2
/services/start-backend.sh &
# Remove all lines containing cron_restart_backend from the log file
sed -i '/cron_restart_backend/d' "$LOG_FILE"
# Atomic replacement with temp file. grep returns 1 if no lines selected (file becomes empty), which is valid here.
grep -v "cron_restart_backend" "${LOG_EXECUTION_QUEUE}" > "${LOG_EXECUTION_QUEUE}.tmp"
RC=$?
if [ $RC -eq 0 ] || [ $RC -eq 1 ]; then
mv "${LOG_EXECUTION_QUEUE}.tmp" "${LOG_EXECUTION_QUEUE}"
fi
fi
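For reference, the consumer above only reacts to a marker line in the execution-queue log; the producer side is not shown in this diff, but presumably amounts to something like:

```bash
# Request a backend restart on the next cron pass (assumed producer;
# the script above removes the marker once it has restarted the backend):
echo "cron_restart_backend" >> "${LOG_EXECUTION_QUEUE}"
```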

View File

@@ -52,7 +52,7 @@ query GetDevices($options: PageQueryOptionsInput) {
}
```
See also: [Debugging GraphQL issues](./DEBUG_GRAPHQL.md)
See also: [Debugging GraphQL issues](./DEBUG_API_SERVER.md)
### `curl` Command

View File

@@ -2,6 +2,15 @@
Often, if the application is misconfigured, the `Loading...` dialog is displayed continuously. This is most likely caused by the backend failing to start. The **Maintenance -> Logs** section should give you more details on what's happening. If there is no exception, check the Portainer log, or start the container in the foreground (without the `-d` parameter) to observe any exceptions. It's advisable to enable `trace` or `debug`. Check the [Debug tips](./DEBUG_TIPS.md) for detailed instructions.
The issue might be related to the backend server, so please check [Debugging GraphQL issues](./DEBUG_API_SERVER.md).
Please also check the browser logs (usually accessible by pressing `F12`):
1. Switch to the Console tab and refresh the page
2. Switch to the Network tab and refresh the page
If you are not sure how to resolve the errors yourself, please post screenshots of the above into the issue or Discord discussion where your problem is being solved.
### Incorrect SCAN_SUBNETS
One of the most common issues is not configuring `SCAN_SUBNETS` correctly. If this setting is misconfigured you will only see one or two devices in your devices list after a scan. Please read the [subnets docs](./SUBNETS.md) carefully to resolve this.

docs/DEBUG_GRAPHQL.md → docs/DEBUG_API_SERVER.md Executable file → Normal file
View File

@@ -12,7 +12,7 @@ As a first troubleshooting step try changing the default `GRAPHQL_PORT` setting.
Ideally use the Settings UI to update the setting under General -> Core -> GraphQL port:
![GraphQL settings](./img/DEBUG_GRAPHQL/graphql_settings_port_token.png)
![GraphQL settings](./img/DEBUG_API_SERVER/graphql_settings_port_token.png)
You might need to temporarily stop other applications or NetAlertX instances causing conflicts to update the setting. The `API_TOKEN` is used to authenticate any API calls, including GraphQL requests.
@@ -20,7 +20,7 @@ You might need to temporarily stop other applications or NetAlertX instances cau
If the UI is not accessible, you can directly edit the `app.conf` file in your `/config` folder:
![Editing app.conf](./img/DEBUG_GRAPHQL/app_conf_graphql_port.png)
![Editing app.conf](./img/DEBUG_API_SERVER/app_conf_graphql_port.png)
### Using a docker variable
@@ -29,7 +29,6 @@ All application settings can also be initialized via the `APP_CONF_OVERRIDE` doc
```yaml
...
environment:
- TZ=Europe/Berlin
- PORT=20213
- APP_CONF_OVERRIDE={"GRAPHQL_PORT":"20214"}
...
@@ -43,22 +42,22 @@ There are several ways to check if the GraphQL server is running.
You can navigate to Maintenance -> Init Check to see if `isGraphQLServerRunning` is ticked:
![Init Check](./img/DEBUG_GRAPHQL/Init_check.png)
![Init Check](./img/DEBUG_API_SERVER/Init_check.png)
### Checking the Logs
You can navigate to Maintenance -> Logs and search for `graphql` to see if it started correctly and is serving requests:
![GraphQL Logs](./img/DEBUG_GRAPHQL/graphql_running_logs.png)
![GraphQL Logs](./img/DEBUG_API_SERVER/graphql_running_logs.png)
### Inspecting the Browser console
In your browser open the dev console (usually F12) and navigate to the Network tab where you can filter GraphQL requests (e.g., reload the Devices page).
![Browser Network Tab](./img/DEBUG_GRAPHQL/network_graphql.png)
![Browser Network Tab](./img/DEBUG_API_SERVER/network_graphql.png)
You can then inspect any of the POST requests by opening them in a new tab.
![Browser GraphQL Json](./img/DEBUG_GRAPHQL/dev_console_graphql_json.png)
![Browser GraphQL Json](./img/DEBUG_API_SERVER/dev_console_graphql_json.png)

View File

@@ -14,9 +14,9 @@ Start the container via the **terminal** with a command similar to this one:
```bash
docker run --rm --network=host \
-v local/path/netalertx/config:/data/config \
-v local/path/netalertx/db:/data/db \
-e TZ=Europe/Berlin \
-v /local_data_dir/netalertx/config:/data/config \
-v /local_data_dir/netalertx/db:/data/db \
-v /etc/localtime:/etc/localtime \
-e PORT=20211 \
ghcr.io/jokob-sk/netalertx:latest

View File

@@ -55,7 +55,6 @@ The file content should be following, with your custom values.
#--------------------------------
#NETALERTX
#--------------------------------
TZ=Europe/Berlin
PORT=22222 # make sure this port is unique on your whole network
DEV_LOCATION=/development/NetAlertX
APP_DATA_LOCATION=/volume/docker_appdata

View File

@@ -45,7 +45,7 @@ services:
# - /home/user/netalertx_data:/data:rw
- type: bind # Bind mount for timezone consistency
source: /etc/localtime # Alternatively add environment TZ: America/New York
source: /etc/localtime
target: /etc/localtime
read_only: true
@@ -125,15 +125,17 @@ docker compose up
### Modification 1: Use a Local Folder (Bind Mount)
By default, the baseline compose file uses "named volumes" (`netalertx_config`, `netalertx_db`). **This is the preferred method** because NetAlertX is designed to manage all configuration and database settings directly from its web UI. Named volumes let Docker handle this data cleanly without you needing to manage local file permissions or paths.
By default, the baseline compose file uses a single named volume (`netalertx_data`) mounted at `/data`. This single-volume layout is preferred because NetAlertX manages both configuration and the database under `/data` (for example, `/data/config` and `/data/db`) via its web UI. Using one named volume simplifies permissions and portability: Docker manages the storage and NetAlertX manages the files inside `/data`.
A two-volume layout that mounts `/data/config` and `/data/db` separately (for example, `netalertx_config` and `netalertx_db`) is supported for backward compatibility and some advanced workflows, but it is a legacy layout and not recommended for new deployments.
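The same single-volume idea, expressed as a plain `docker run` for anyone not using compose (a sketch; flags otherwise mirror the quick-start command):

```bash
# Docker owns the named volume; NetAlertX manages everything under /data.
docker run -d --network=host \
  -v netalertx_data:/data \
  ghcr.io/jokob-sk/netalertx:latest
```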
However, if you prefer to have direct, file-level access to your configuration for manual editing, a "bind mount" is a simple alternative. This tells Docker to use a specific folder from your computer (the "host") inside the container.
**How to make the change:**
1. Choose a location on your computer. For example, `/home/adam/netalertx-files`.
1. Choose a location on your computer. For example, `/local_data_dir`.
2. Create the subfolders: `mkdir -p /home/adam/netalertx-files/config` and `mkdir -p /home/adam/netalertx-files/db`.
2. Create the subfolders: `mkdir -p /local_data_dir/config` and `mkdir -p /local_data_dir/db`.
3. Edit your `docker-compose.yml` and find the `volumes:` section (the one *inside* the `netalertx:` service).
@@ -152,19 +154,19 @@ However, if you prefer to have direct, file-level access to your configuration f
```
**After (Using a Local Folder / Bind Mount):**
Make sure to replace `/home/adam/netalertx-files` with your actual path. The format is `<path_on_your_computer>:<path_inside_container>:<options>`.
Make sure to replace `/local_data_dir` with your actual path. The format is `<path_on_your_computer>:<path_inside_container>:<options>`.
```yaml
...
volumes:
# - netalertx_config:/data/config:rw
# - netalertx_db:/data/db:rw
- /home/adam/netalertx-files/config:/data/config:rw
- /home/adam/netalertx-files/db:/data/db:rw
- /local_data_dir/config:/data/config:rw
- /local_data_dir/db:/data/db:rw
...
```
Now, any files created by NetAlertX in `/data/config` will appear in your `/home/adam/netalertx-files/config` folder.
Now, any files created by NetAlertX in `/data/config` will appear in your `/local_data_dir/config` folder.
This same method works for mounting other things, like custom plugins or enterprise NGINX files, as shown in the commented-out examples in the baseline file.
@@ -183,8 +185,8 @@ This method is useful for keeping your paths and other settings separate from yo
services:
netalertx:
environment:
- TZ=${TZ}
- PORT=${PORT}
- GRAPHQL_PORT=${GRAPHQL_PORT}
...
```
@@ -192,11 +194,9 @@ services:
**`.env` file contents:**
```sh
TZ=Europe/Paris
PORT=20211
NETALERTX_NETWORK_MODE=host
LISTEN_ADDR=0.0.0.0
PORT=20211
GRAPHQL_PORT=20212
```

View File

@@ -23,28 +23,32 @@ Head to [https://netalertx.com/](https://netalertx.com/) for more gifs and scree
> [!WARNING]
> You will have to run the container on the `host` network and specify `SCAN_SUBNETS` unless you use other [plugin scanners](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md). The initial scan can take a few minutes, so please wait 5-10 minutes for the initial discovery to finish.
```yaml
```bash
docker run -d --rm --network=host \
-v local_path/config:/data/config \
-v local_path/db:/data/db \
-v /local_data_dir/config:/data/config \
-v /local_data_dir/db:/data/db \
-v /etc/localtime:/etc/localtime \
--mount type=tmpfs,target=/tmp/api \
-e PUID=200 -e PGID=300 \
-e TZ=Europe/Berlin \
-e PORT=20211 \
-e APP_CONF_OVERRIDE={"GRAPHQL_PORT":"20214"} \
ghcr.io/jokob-sk/netalertx:latest
```
See alternative [docker-compose examples](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_COMPOSE.md).
### Default ports
| Default | Description | How to override |
| :------------- |:-------------------------------| ----------------------------------------------------------------------------------:|
| `20211` |Port of the web interface | `-e PORT=20222` |
| `20212` |Port of the backend API server | `-e APP_CONF_OVERRIDE={"GRAPHQL_PORT":"20214"}` or via the `GRAPHQL_PORT` Setting |
### Docker environment variables
| Variable | Description | Example Value |
| :------------- |:------------------------| -----:|
| `PORT` |Port of the web interface | `20211` |
| `PUID` |Application User UID | `102` |
| `PGID` |Application User GID | `82` |
| `LISTEN_ADDR` |Set the specific IP Address for the listener address for the nginx webserver (web interface). This could be useful when using multiple subnets to hide the web interface from all untrusted networks. | `0.0.0.0` |
|`TZ` |Time zone to display stats correctly. Find your time zone [here](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) | `Europe/Berlin` |
|`LOADED_PLUGINS` | Default [plugins](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md) to load. Plugins cannot be loaded with `APP_CONF_OVERRIDE`, you need to use this variable instead and then specify the plugins settings with `APP_CONF_OVERRIDE`. | `["PIHOLE","ASUSWRT"]` |
|`APP_CONF_OVERRIDE` | JSON override for settings (except `LOADED_PLUGINS`). | `{"SCAN_SUBNETS":"['192.168.1.0/24 --interface=eth1']","GRAPHQL_PORT":"20212"}` |
|`ALWAYS_FRESH_INSTALL` | ⚠ If `true` will delete the content of the `/db` & `/config` folders. For testing purposes. Can be coupled with [watchtower](https://github.com/containrrr/watchtower) to have an always freshly installed `netalertx`/`netalertx-dev` image. | `true` |
@@ -60,8 +64,9 @@ See alternative [docked-compose examples](https://github.com/jokob-sk/NetAlertX/
| :------------- | :------------- | :-------------|
| ✅ | `:/data/config` | Folder which will contain the `app.conf` & `devices.csv` ([read about devices.csv](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEVICES_BULK_EDITING.md)) files |
| ✅ | `:/data/db` | Folder which will contain the `app.db` database file |
| ✅ | `/etc/localtime:/etc/localtime:ro` | Ensures the timezone is the same as on the server. |
| | `:/tmp/log` | Logs folder useful for debugging if you have issues setting up the container |
| | `:/tmp/api` | A simple [API endpoint](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md) containing static (but regularly updated) json and other files. Path configurable via `NETALERTX_API` environment variable. |
| | `:/tmp/api` | The [API endpoint](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md) containing static (but regularly updated) json and other files. Path configurable via `NETALERTX_API` environment variable. |
| | `:/app/front/plugins/<plugin>/ignore_plugin` | Map a file `ignore_plugin` to ignore a plugin. Plugins can be soft-disabled via settings. More in the [Plugin docs](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md). |
| | `:/etc/resolv.conf` | Use a custom `resolv.conf` file for [better name resolution](https://github.com/jokob-sk/NetAlertX/blob/main/docs/REVERSE_DNS.md). |
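As a sketch, hard-disabling a plugin via the `ignore_plugin` mapping could look like this; the plugin folder name `arp_scan` is an assumption here, so check the [Plugin docs](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md) for the actual folder name:
```bash
# create an empty marker file and map it into the plugin folder (folder name assumed)
touch /local_data_dir/ignore_plugin
docker run -d --rm --network=host \
  -v /local_data_dir/config:/data/config \
  -v /local_data_dir/db:/data/db \
  -v /local_data_dir/ignore_plugin:/app/front/plugins/arp_scan/ignore_plugin \
  ghcr.io/jokob-sk/netalertx:latest
```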

View File

@@ -8,12 +8,12 @@ This guide shows you how to set up **NetAlertX** using Portainer's **Stacks**
## 1. Prepare Your Host
Before deploying, make sure you have a folder on your Docker host for NetAlertX data. Replace `APP_FOLDER` with your preferred location, for example `/opt` here:
Before deploying, make sure you have a folder on your Docker host for NetAlertX data. Replace `APP_FOLDER` with your preferred location, for example `/local_data_dir` here:
```bash
mkdir -p /opt/netalertx/config
mkdir -p /opt/netalertx/db
mkdir -p /opt/netalertx/log
mkdir -p /local_data_dir/netalertx/config
mkdir -p /local_data_dir/netalertx/db
mkdir -p /local_data_dir/netalertx/log
```
---
@@ -59,7 +59,6 @@ services:
# - ${APP_FOLDER}/netalertx/api:/tmp/api
environment:
- TZ=${TZ}
- PORT=${PORT}
- APP_CONF_OVERRIDE=${APP_CONF_OVERRIDE}
```
@@ -70,14 +69,25 @@ services:
In the **Environment variables** section of Portainer, add the following:
* `APP_FOLDER=/opt` (or wherever you created the directories in step 1)
* `TZ=Europe/Berlin` (replace with your timezone)
* `APP_FOLDER=/local_data_dir` (or wherever you created the directories in step 1)
* `PORT=22022` (or another port if needed)
* `APP_CONF_OVERRIDE={"GRAPHQL_PORT":"22023"}` (optional advanced settings)
* `APP_CONF_OVERRIDE={"GRAPHQL_PORT":"22023"}` (optional advanced settings; otherwise the backend API server port defaults to `20212`)
---
## 5. Deploy the Stack
## 5. Ensure permissions
> [!TIP]
> If you are facing permission issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the `/local_data_dir/db` and `/local_data_dir/config` folders (replace `local_data_dir` with the location of your `/db` and `/config` folders).
> ```bash
> sudo chown -R 20211:20211 /local_data_dir
> sudo chmod -R a+rwx /local_data_dir
> ```
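To verify the change took effect, you can list the folders with numeric owner IDs (an optional sanity check):
```bash
ls -ln /local_data_dir
# the owner and group columns should now show 20211
```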
---
## 6. Deploy the Stack
1. Scroll down and click **Deploy the stack**.
2. Portainer will pull the image and start NetAlertX.
@@ -89,7 +99,7 @@ http://<your-docker-host-ip>:22022
---
## 6. Verify and Troubleshoot
## 7. Verify and Troubleshoot
* Check logs via Portainer → **Containers** → `netalertx` → **Logs**.
* Logs are stored under `${APP_FOLDER}/netalertx/log` if you enabled that volume.
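If you prefer the command line to the Portainer UI, the same logs can also be tailed directly on the Docker host (assuming the container is named `netalertx`):
```bash
docker logs --tail 100 -f netalertx
```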

View File

@@ -47,8 +47,8 @@ services:
- /mnt/YOUR_SERVER/netalertx/config:/data/config:rw
- /mnt/YOUR_SERVER/netalertx/db:/netalertx/data/db:rw
- /mnt/YOUR_SERVER/netalertx/logs:/netalertx/tmp/log:rw
- /etc/localtime:/etc/localtime:ro
environment:
- TZ=Europe/London
- PORT=20211
networks:
swarm-ipvlan:

View File

@@ -35,8 +35,8 @@ Sometimes, permission issues arise if your existing host directories were create
```bash
docker run -it --rm --name netalertx --user "0" \
-v local/path/config:/data/config \
-v local/path/db:/data/db \
-v /local_data_dir/config:/data/config \
-v /local_data_dir/db:/data/db \
ghcr.io/jokob-sk/netalertx:latest
```
@@ -46,6 +46,13 @@ docker run -it --rm --name netalertx --user "0" \
> The container startup script detects `root` and runs `chown -R 20211:20211` on all volumes, fixing ownership for the secure `netalertx` user.
> [!TIP]
> If you are facing permission issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the `/local_data_dir/db` and `/local_data_dir/config` folders (replace `local_data_dir` with the location of your `/db` and `/config` folders).
> ```bash
> sudo chown -R 20211:20211 /local_data_dir
> sudo chmod -R a+rwx /local_data_dir
> ```
---
## Example: docker-compose.yml with `tmpfs`
@@ -56,16 +63,18 @@ services:
container_name: netalertx
image: "ghcr.io/jokob-sk/netalertx"
network_mode: "host"
cap_add:
- NET_RAW
- NET_ADMIN
- NET_BIND_SERVICE
cap_drop: # Drop all capabilities for enhanced security
- ALL
cap_add: # Add only the necessary capabilities
- NET_ADMIN # Required for ARP scanning
- NET_RAW # Required for raw socket operations
- NET_BIND_SERVICE # Required to bind to privileged ports (nbtscan)
restart: unless-stopped
volumes:
- local/path/config:/data/config
- local/path/db:/data/db
- /local_data_dir/config:/data/config
- /local_data_dir/db:/data/db
- /etc/localtime:/etc/localtime
environment:
- TZ=Europe/Berlin
- PORT=20211
tmpfs:
- "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"

View File

@@ -85,10 +85,10 @@ services:
network_mode: "host"
restart: unless-stopped
volumes:
- local/path/config:/home/pi/pialert/config
- local/path/db:/home/pi/pialert/db
- /local_data_dir/config:/home/pi/pialert/config
- /local_data_dir/db:/home/pi/pialert/db
# (optional) useful for debugging if you have issues setting up the container
- local/path/logs:/home/pi/pialert/front/log
- /local_data_dir/logs:/home/pi/pialert/front/log
environment:
- TZ=Europe/Berlin
- PORT=20211
@@ -104,10 +104,10 @@ services:
network_mode: "host"
restart: unless-stopped
volumes:
- local/path/config:/data/config # 🆕 This has changed
- local/path/db:/data/db # 🆕 This has changed
- /local_data_dir/config:/data/config # 🆕 This has changed
- /local_data_dir/db:/data/db # 🆕 This has changed
# (optional) useful for debugging if you have issues setting up the container
- local/path/logs:/tmp/log # 🆕 This has changed
- /local_data_dir/logs:/tmp/log # 🆕 This has changed
environment:
- TZ=Europe/Berlin
- PORT=20211
@@ -131,10 +131,10 @@ services:
network_mode: "host"
restart: unless-stopped
volumes:
- local/path/config/pialert.conf:/home/pi/pialert/config/pialert.conf
- local/path/db/pialert.db:/home/pi/pialert/db/pialert.db
- /local_data_dir/config/pialert.conf:/home/pi/pialert/config/pialert.conf
- /local_data_dir/db/pialert.db:/home/pi/pialert/db/pialert.db
# (optional) useful for debugging if you have issues setting up the container
- local/path/logs:/home/pi/pialert/front/log
- /local_data_dir/logs:/home/pi/pialert/front/log
environment:
- TZ=Europe/Berlin
- PORT=20211
@@ -150,10 +150,10 @@ services:
network_mode: "host"
restart: unless-stopped
volumes:
- local/path/config/app.conf:/data/config/app.conf # 🆕 This has changed
- local/path/db/app.db:/data/db/app.db # 🆕 This has changed
- /local_data_dir/config/app.conf:/data/config/app.conf # 🆕 This has changed
- /local_data_dir/db/app.db:/data/db/app.db # 🆕 This has changed
# (optional) useful for debugging if you have issues setting up the container
- local/path/logs:/tmp/log # 🆕 This has changed
- /local_data_dir/logs:/tmp/log # 🆕 This has changed
environment:
- TZ=Europe/Berlin
- PORT=20211
@@ -190,10 +190,10 @@ services:
network_mode: "host"
restart: unless-stopped
volumes:
- local/path/config:/data/config
- local/path/db:/data/db
- /local_data_dir/config:/data/config
- /local_data_dir/db:/data/db
# (optional) useful for debugging if you have issues setting up the container
- local/path/logs:/tmp/log
- /local_data_dir/logs:/tmp/log
environment:
- TZ=Europe/Berlin
- PORT=20211
@@ -207,10 +207,10 @@ services:
network_mode: "host"
restart: unless-stopped
volumes:
- local/path/config:/data/config
- local/path/db:/data/db
- /local_data_dir/config:/data/config
- /local_data_dir/db:/data/db
# (optional) useful for debugging if you have issues setting up the container
- local/path/logs:/tmp/log
- /local_data_dir/logs:/tmp/log
environment:
- TZ=Europe/Berlin
- PORT=20211
@@ -234,10 +234,10 @@ services:
network_mode: "host"
restart: unless-stopped
volumes:
- local/path/config:/data/config
- local/path/db:/data/db
- /local_data_dir/config:/data/config
- /local_data_dir/db:/data/db
# (optional) useful for debugging if you have issues setting up the container
- local/path/logs:/tmp/log
- /local_data_dir/logs:/tmp/log
environment:
- TZ=Europe/Berlin
- PORT=20211
@@ -248,16 +248,24 @@ services:
6. Perform a one-off migration to the latest `netalertx` image and the `20211` user:
> [!NOTE]
> The example below assumes your `/config` and `/db` folders are stored in `local/path`.
> The example below assumes your `/config` and `/db` folders are stored in `local_data_dir`.
> Replace this path with your actual configuration directory. `netalertx` is the container name, which might differ in your setup.
```sh
docker run -it --rm --name netalertx --user "0" \
-v local/path/config:/data/config \
-v local/path/db:/data/db \
-v /local_data_dir/config:/data/config \
-v /local_data_dir/db:/data/db \
ghcr.io/jokob-sk/netalertx:latest
```
...or alternatively execute:
```bash
sudo chown -R 20211:20211 /local_data_dir/config
sudo chown -R 20211:20211 /local_data_dir/db
sudo chmod -R a+rwx /local_data_dir/
```
7. Stop the container
8. Update the `docker-compose.yml` as per the example below.
@@ -265,20 +273,23 @@ docker run -it --rm --name netalertx --user "0" \
services:
netalertx:
container_name: netalertx
image: "ghcr.io/jokob-sk/netalertx" # 🆕 This is important
image: "ghcr.io/jokob-sk/netalertx" # 🆕 This is important
network_mode: "host"
cap_add: # 🆕 New line
- NET_RAW # 🆕 New line
- NET_ADMIN # 🆕 New line
- NET_BIND_SERVICE # 🆕 New line
cap_drop: # 🆕 New line
- ALL # 🆕 New line
cap_add: # 🆕 New line
- NET_RAW # 🆕 New line
- NET_ADMIN # 🆕 New line
- NET_BIND_SERVICE # 🆕 New line
restart: unless-stopped
volumes:
- local/path/config:/data/config
- local/path/db:/data/db
- /local_data_dir/config:/data/config
- /local_data_dir/db:/data/db
# (optional) useful for debugging if you have issues setting up the container
#- local/path/logs:/tmp/log
#- /local_data_dir/logs:/tmp/log
# Ensuring the timezone is the same as on the server - also make sure the TIMEZONE setting is configured
- /etc/localtime:/etc/localtime:ro # 🆕 New line
environment:
- TZ=Europe/Berlin
- PORT=20211
# 🆕 New "tmpfs" section START 🔽
tmpfs:

View File

@@ -50,6 +50,8 @@ Let's walk through setting up a device named `raspberrypi` to act as a network
- Optionally assign a **Parent Node** (where this device connects to) and the **Relationship type** of the connection.
The `nic` relationship type can affect parent notifications — see the setting description and [Notifications documentation](./NOTIFICATIONS.md) for more.
- A device's parent MAC will be overwritten by plugins if its current value is any of the following: "null", "(unknown)", "(Unknown)".
- If you want plugins to be able to overwrite the parent value (for example, when mixing plugins that do not provide parent MACs, like `ARPSCAN`, with those that do, like `UNIFIAPI`), you must set `NEWDEV_devParentMAC` to None, as sketched below.
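A minimal sketch of applying this at container level, assuming the setting accepts the value as the literal string `None` like other `APP_CONF_OVERRIDE` entries:
```bash
# fragment to add to your docker run command; single quotes keep the JSON intact
-e 'APP_CONF_OVERRIDE={"NEWDEV_devParentMAC":"None"}'
```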
![Device details](./img/NETWORK_TREE/Network_Device_Details_Parent.png)

View File

@@ -80,17 +80,18 @@ services:
network_mode: "host"
restart: unless-stopped
volumes:
- local/path/config:/data/config
- local/path/db:/data/db
- /local_data_dir/config:/data/config
- /local_data_dir/db:/data/db
# (Optional) Useful for debugging setup issues
- local/path/logs:/tmp/log
- /local_data_dir/logs:/tmp/log
# (API: OPTION 1) Store temporary files in memory (recommended for performance)
- type: tmpfs # ◀ 🔺
target: /tmp/api # ◀ 🔺
# (API: OPTION 2) Store API data on disk (useful for debugging)
# - local/path/api:/tmp/api
# - /local_data_dir/api:/tmp/api
# Ensuring the timezone is the same as on the server - also make sure the TIMEZONE setting is configured
- /etc/localtime:/etc/localtime:ro
environment:
- TZ=Europe/Berlin
- PORT=20211
```

View File

@@ -64,6 +64,7 @@ Device-detecting plugins insert values into the `CurrentScan` database table. T
| `LUCIRPC` | [luci_import](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/luci_import/) | 🔍 | Import connected devices from OpenWRT | | |
| `MAINT` | [maintenance](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/maintenance/) | ⚙ | Maintenance of logs, etc. | | |
| `MQTT` | [_publisher_mqtt](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/_publisher_mqtt/) | ▶️ | MQTT for syncing to Home Assistant | | |
| `MTSCAN` | [mikrotik_scan](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/mikrotik_scan/) | 🔍 | Mikrotik device import & sync | | |
| `NBTSCAN` | [nbtscan_scan](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/nbtscan_scan/) | 🆎 | Nbtscan (NetBIOS-based) name resolution | | |
| `NEWDEV` | [newdev_template](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/newdev_template/) | ⚙ | New device template | | Yes |
| `NMAP` | [nmap_scan](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/nmap_scan/) | ♻ | Nmap port scanning & discovery | | |
@@ -74,6 +75,7 @@ Device-detecting plugins insert values into the `CurrentScan` database table. T
| `OMDSDN` | [omada_sdn_imp](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/omada_sdn_imp/) | 📥/🆎 ❌ | UNMAINTAINED, use `OMDSDNOPENAPI` | 🖧 🔄 | |
| `OMDSDNOPENAPI` | [omada_sdn_openapi](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/omada_sdn_openapi/) | 📥/🆎 | OMADA TP-Link import via OpenAPI | 🖧 | |
| `PIHOLE` | [pihole_scan](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/pihole_scan/) | 🔍/🆎/📥 | Pi-hole device import & sync | | |
| `PIHOLEAPI` | [pihole_api_scan](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/pihole_api_scan/) | 🔍/🆎/📥 | Pi-hole device import & sync via API v6+ | | |
| `PUSHSAFER` | [_publisher_pushsafer](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/_publisher_pushsafer/) | ▶️ | Pushsafer notifications | | |
| `PUSHOVER` | [_publisher_pushover](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/_publisher_pushover/) | ▶️ | Pushover notifications | | |
| `SETPWD` | [set_password](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/set_password/) | ⚙ | Set password | | Yes |

View File

@@ -3,7 +3,7 @@
If you are running a DNS server, such as **AdGuard**, set up **Private reverse DNS servers** for better name resolution on your network. Enabling this setting allows NetAlertX to execute `dig` and `nslookup` commands to automatically resolve device names based on their IP addresses.
> [!TIP]
> Before proceeding, ensure that [name resolution plugins](./NAME_RESOLUTION.md) are enabled.
> Before proceeding, ensure that [name resolution plugins](./NAME_RESOLUTION.md) are enabled.
> You can customize how names are cleaned using the `NEWDEV_NAME_CLEANUP_REGEX` setting.
> To auto-update Fully Qualified Domain Names (FQDN), enable the `REFRESH_FQDN` setting.
@@ -42,11 +42,12 @@ services:
image: "ghcr.io/jokob-sk/netalertx:latest"
restart: unless-stopped
volumes:
- /home/netalertx/config:/data/config
- /home/netalertx/db:/data/db
- /home/netalertx/log:/tmp/log
- /local_data_dir/config:/data/config
- /local_data_dir/db:/data/db
# - /local_data_dir/log:/tmp/log
# Ensuring the timezone is the same as on the server - also make sure the TIMEZONE setting is configured
- /etc/localtime:/etc/localtime:ro
environment:
- TZ=Europe/Berlin
- PORT=20211
network_mode: host
dns: # specifying the DNS servers used for the container
@@ -68,19 +69,18 @@ services:
image: "ghcr.io/jokob-sk/netalertx:latest"
restart: unless-stopped
volumes:
- ./config/app.conf:/data/config/app.conf
- ./db:/data/db
- ./log:/tmp/log
- ./config/resolv.conf:/etc/resolv.conf # Mapping the /resolv.conf file for better name resolution
- /local_data_dir/config/app.conf:/data/config/app.conf
- /local_data_dir/db:/data/db
- /local_data_dir/log:/tmp/log
- /local_data_dir/config/resolv.conf:/etc/resolv.conf # Mapping the /resolv.conf file for better name resolution
# Ensuring the timezone is the same as on the server - also make sure the TIMEZONE setting is configured
- /etc/localtime:/etc/localtime:ro
environment:
- TZ=Europe/Berlin
- PORT=20211
ports:
- "20211:20211"
network_mode: host
```
#### ./config/resolv.conf:
#### /local_data_dir/config/resolv.conf:
The most important entry below is `nameserver` (you can add multiple):
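A minimal example (the addresses are placeholders, point them at your own DNS servers, e.g. AdGuard or Pi-hole):
```
nameserver 192.168.1.2
nameserver 192.168.1.3
```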

View File

@@ -501,8 +501,8 @@ docker run -d --rm --network=host \
--name=netalertx \
-v /appl/docker/netalertx/config:/data/config \
-v /appl/docker/netalertx/db:/data/db \
-v /etc/localtime:/etc/localtime \
-v /appl/docker/netalertx/default:/etc/nginx/sites-available/default \
-e TZ=Europe/Amsterdam \
-e PORT=20211 \
ghcr.io/jokob-sk/netalertx:latest

View File

@@ -44,8 +44,9 @@ services:
- local/path/db:/data/db
# (optional) useful for debugging if you have issues setting up the container
- local/path/logs:/tmp/log
# Ensuring the timezone is the same as on the server - also make sure the TIMEZONE setting is configured
- /etc/localtime:/etc/localtime:ro
environment:
- TZ=Europe/Berlin
- PORT=20211
```

View File

@@ -12,7 +12,7 @@ var timerRefreshData = ''
var emptyArr = ['undefined', "", undefined, null, 'null'];
var UI_LANG = "English (en_us)";
const allLanguages = ["ar_ar","ca_ca","cs_cz","de_de","en_us","es_es","fa_fa","fr_fr","it_it","nb_no","pl_pl","pt_br","pt_pt","ru_ru","sv_sv","tr_tr","uk_ua","zh_cn"]; // needs to be same as in lang.php
const allLanguages = ["ar_ar","ca_ca","cs_cz","de_de","en_us","es_es","fa_fa","fr_fr","it_it","ja_jp","nb_no","pl_pl","pt_br","pt_pt","ru_ru","sv_sv","tr_tr","uk_ua","zh_cn"]; // needs to be same as in lang.php
var settingsJSON = {}
@@ -343,6 +343,9 @@ function getLangCode() {
case 'Italian (it_it)':
lang_code = 'it_it';
break;
case 'Japanese (ja_jp)':
lang_code = 'ja_jp';
break;
case 'Russian (ru_ru)':
lang_code = 'ru_ru';
break;
@@ -497,11 +500,39 @@ function isValidBase64(str) {
// -------------------------------------------------------------------
// Utility function to check if the value is already Base64
function isBase64(value) {
const base64Regex =
/^(?:[A-Za-z0-9+\/]{4})*?(?:[A-Za-z0-9+\/]{2}==|[A-Za-z0-9+\/]{3}=)?$/;
return base64Regex.test(value);
if (typeof value !== "string" || value.trim() === "") return false;
// Must have valid length
if (value.length % 4 !== 0) return false;
// Valid Base64 characters
const base64Regex = /^[A-Za-z0-9+/]+={0,2}$/;
if (!base64Regex.test(value)) return false;
try {
const decoded = atob(value);
// Re-encode
const reencoded = btoa(decoded);
if (reencoded !== value) return false;
  // Extra verification:
  // every decoded char code must fit in a single byte (0-255);
  // anything larger means the input was not plain Base64-encoded binary
for (let i = 0; i < decoded.length; i++) {
const code = decoded.charCodeAt(i);
if (code > 255) return false; // invalid binary byte
}
return true;
} catch (e) {
return false;
}
}
// ----------------------------------------------------
function isValidJSON(jsonString) {
try {

View File

@@ -462,10 +462,17 @@
switch (orderTopologyBy[0]) {
case "Name":
const nameCompare = a.devName.localeCompare(b.devName);
return nameCompare !== 0 ? nameCompare : parsePort(a.devParentPort) - parsePort(b.devParentPort);
// ensuring string
const nameA = (a.devName ?? "").toString();
const nameB = (b.devName ?? "").toString();
const nameCompare = nameA.localeCompare(nameB);
return nameCompare !== 0
? nameCompare
: parsePort(a.devParentPort) - parsePort(b.devParentPort);
case "Port":
return parsePort(a.devParentPort) - parsePort(b.devParentPort);
default:
return a.rowid - b.rowid;
}

View File

@@ -107,11 +107,11 @@
"buttons": [
{
"labelStringCode": "Maint_PurgeLog",
"event": "logManage('crond.log', 'cleanLog')"
"event": "logManage('cron.log', 'cleanLog')"
}
],
"fileName": "crond.log",
"filePath": "__NETALERTX_LOG__/crond.log",
"fileName": "cron.log",
"filePath": "__NETALERTX_LOG__/cron.log",
"textAreaCssClass": "logs logs-small"
}
]

View File

@@ -274,7 +274,7 @@ function cleanLog($logFile)
$path = "";
$allowedFiles = ['app.log', 'app_front.log', 'IP_changes.log', 'stdout.log', 'stderr.log', 'app.php_errors.log', 'execution_queue.log', 'db_is_locked.log', 'nginx-error.log', 'crond.log'];
$allowedFiles = ['app.log', 'app_front.log', 'IP_changes.log', 'stdout.log', 'stderr.log', 'app.php_errors.log', 'execution_queue.log', 'db_is_locked.log', 'nginx-error.log', 'cron.log'];
if(in_array($logFile, $allowedFiles))
{

front/php/templates/language/ar_ar.json Executable file → Normal file
View File

View File

@@ -0,0 +1,764 @@
{
"API_CUSTOM_SQL_description": "",
"API_CUSTOM_SQL_name": "",
"API_TOKEN_description": "",
"API_TOKEN_name": "",
"API_display_name": "",
"API_icon": "",
"About_Design": "",
"About_Exit": "",
"About_Title": "",
"AppEvents_AppEventProcessed": "",
"AppEvents_DateTimeCreated": "",
"AppEvents_Extra": "",
"AppEvents_GUID": "",
"AppEvents_Helper1": "",
"AppEvents_Helper2": "",
"AppEvents_Helper3": "",
"AppEvents_ObjectForeignKey": "",
"AppEvents_ObjectIndex": "",
"AppEvents_ObjectIsArchived": "",
"AppEvents_ObjectIsNew": "",
"AppEvents_ObjectPlugin": "",
"AppEvents_ObjectPrimaryID": "",
"AppEvents_ObjectSecondaryID": "",
"AppEvents_ObjectStatus": "",
"AppEvents_ObjectStatusColumn": "",
"AppEvents_ObjectType": "",
"AppEvents_Plugin": "",
"AppEvents_Type": "",
"BackDevDetail_Actions_Ask_Run": "",
"BackDevDetail_Actions_Not_Registered": "",
"BackDevDetail_Actions_Title_Run": "",
"BackDevDetail_Copy_Ask": "",
"BackDevDetail_Copy_Title": "",
"BackDevDetail_Tools_WOL_error": "",
"BackDevDetail_Tools_WOL_okay": "",
"BackDevices_Arpscan_disabled": "",
"BackDevices_Arpscan_enabled": "",
"BackDevices_Backup_CopError": "",
"BackDevices_Backup_Failed": "",
"BackDevices_Backup_okay": "",
"BackDevices_DBTools_DelDevError_a": "",
"BackDevices_DBTools_DelDevError_b": "",
"BackDevices_DBTools_DelDev_a": "",
"BackDevices_DBTools_DelDev_b": "",
"BackDevices_DBTools_DelEvents": "",
"BackDevices_DBTools_DelEventsError": "",
"BackDevices_DBTools_ImportCSV": "",
"BackDevices_DBTools_ImportCSVError": "",
"BackDevices_DBTools_ImportCSVMissing": "",
"BackDevices_DBTools_Purge": "",
"BackDevices_DBTools_UpdDev": "",
"BackDevices_DBTools_UpdDevError": "",
"BackDevices_DBTools_Upgrade": "",
"BackDevices_DBTools_UpgradeError": "",
"BackDevices_Device_UpdDevError": "",
"BackDevices_Restore_CopError": "",
"BackDevices_Restore_Failed": "",
"BackDevices_Restore_okay": "",
"BackDevices_darkmode_disabled": "",
"BackDevices_darkmode_enabled": "",
"CLEAR_NEW_FLAG_description": "",
"CLEAR_NEW_FLAG_name": "",
"CustProps_cant_remove": "",
"DAYS_TO_KEEP_EVENTS_description": "",
"DAYS_TO_KEEP_EVENTS_name": "",
"DISCOVER_PLUGINS_description": "",
"DISCOVER_PLUGINS_name": "",
"DevDetail_Children_Title": "",
"DevDetail_Copy_Device_Title": "",
"DevDetail_Copy_Device_Tooltip": "",
"DevDetail_CustomProperties_Title": "",
"DevDetail_CustomProps_reset_info": "",
"DevDetail_DisplayFields_Title": "",
"DevDetail_EveandAl_AlertAllEvents": "",
"DevDetail_EveandAl_AlertDown": "",
"DevDetail_EveandAl_Archived": "",
"DevDetail_EveandAl_NewDevice": "",
"DevDetail_EveandAl_NewDevice_Tooltip": "",
"DevDetail_EveandAl_RandomMAC": "",
"DevDetail_EveandAl_ScanCycle": "",
"DevDetail_EveandAl_ScanCycle_a": "",
"DevDetail_EveandAl_ScanCycle_z": "",
"DevDetail_EveandAl_Skip": "",
"DevDetail_EveandAl_Title": "",
"DevDetail_Events_CheckBox": "",
"DevDetail_GoToNetworkNode": "",
"DevDetail_Icon": "",
"DevDetail_Icon_Descr": "",
"DevDetail_Loading": "",
"DevDetail_MainInfo_Comments": "",
"DevDetail_MainInfo_Favorite": "",
"DevDetail_MainInfo_Group": "",
"DevDetail_MainInfo_Location": "",
"DevDetail_MainInfo_Name": "",
"DevDetail_MainInfo_Network": "",
"DevDetail_MainInfo_Network_Port": "",
"DevDetail_MainInfo_Network_Site": "",
"DevDetail_MainInfo_Network_Title": "",
"DevDetail_MainInfo_Owner": "",
"DevDetail_MainInfo_SSID": "",
"DevDetail_MainInfo_Title": "",
"DevDetail_MainInfo_Type": "",
"DevDetail_MainInfo_Vendor": "",
"DevDetail_MainInfo_mac": "",
"DevDetail_NavToChildNode": "",
"DevDetail_Network_Node_hover": "",
"DevDetail_Network_Port_hover": "",
"DevDetail_Nmap_Scans": "",
"DevDetail_Nmap_Scans_desc": "",
"DevDetail_Nmap_buttonDefault": "",
"DevDetail_Nmap_buttonDefault_text": "",
"DevDetail_Nmap_buttonDetail": "",
"DevDetail_Nmap_buttonDetail_text": "",
"DevDetail_Nmap_buttonFast": "",
"DevDetail_Nmap_buttonFast_text": "",
"DevDetail_Nmap_buttonSkipDiscovery": "",
"DevDetail_Nmap_buttonSkipDiscovery_text": "",
"DevDetail_Nmap_resultsLink": "",
"DevDetail_Owner_hover": "",
"DevDetail_Periodselect_All": "",
"DevDetail_Periodselect_LastMonth": "",
"DevDetail_Periodselect_LastWeek": "",
"DevDetail_Periodselect_LastYear": "",
"DevDetail_Periodselect_today": "",
"DevDetail_Run_Actions_Title": "",
"DevDetail_Run_Actions_Tooltip": "",
"DevDetail_SessionInfo_FirstSession": "",
"DevDetail_SessionInfo_LastIP": "",
"DevDetail_SessionInfo_LastSession": "",
"DevDetail_SessionInfo_StaticIP": "",
"DevDetail_SessionInfo_Status": "",
"DevDetail_SessionInfo_Title": "",
"DevDetail_SessionTable_Additionalinfo": "",
"DevDetail_SessionTable_Connection": "",
"DevDetail_SessionTable_Disconnection": "",
"DevDetail_SessionTable_Duration": "",
"DevDetail_SessionTable_IP": "",
"DevDetail_SessionTable_Order": "",
"DevDetail_Shortcut_CurrentStatus": "",
"DevDetail_Shortcut_DownAlerts": "",
"DevDetail_Shortcut_Presence": "",
"DevDetail_Shortcut_Sessions": "",
"DevDetail_Tab_Details": "",
"DevDetail_Tab_Events": "",
"DevDetail_Tab_EventsTableDate": "",
"DevDetail_Tab_EventsTableEvent": "",
"DevDetail_Tab_EventsTableIP": "",
"DevDetail_Tab_EventsTableInfo": "",
"DevDetail_Tab_Nmap": "",
"DevDetail_Tab_NmapEmpty": "",
"DevDetail_Tab_NmapTableExtra": "",
"DevDetail_Tab_NmapTableHeader": "",
"DevDetail_Tab_NmapTableIndex": "",
"DevDetail_Tab_NmapTablePort": "",
"DevDetail_Tab_NmapTableService": "",
"DevDetail_Tab_NmapTableState": "",
"DevDetail_Tab_NmapTableText": "",
"DevDetail_Tab_NmapTableTime": "",
"DevDetail_Tab_Plugins": "",
"DevDetail_Tab_Presence": "",
"DevDetail_Tab_Sessions": "",
"DevDetail_Tab_Tools": "",
"DevDetail_Tab_Tools_Internet_Info_Description": "",
"DevDetail_Tab_Tools_Internet_Info_Error": "",
"DevDetail_Tab_Tools_Internet_Info_Start": "",
"DevDetail_Tab_Tools_Internet_Info_Title": "",
"DevDetail_Tab_Tools_Nslookup_Description": "",
"DevDetail_Tab_Tools_Nslookup_Error": "",
"DevDetail_Tab_Tools_Nslookup_Start": "",
"DevDetail_Tab_Tools_Nslookup_Title": "",
"DevDetail_Tab_Tools_Speedtest_Description": "",
"DevDetail_Tab_Tools_Speedtest_Start": "",
"DevDetail_Tab_Tools_Speedtest_Title": "",
"DevDetail_Tab_Tools_Traceroute_Description": "",
"DevDetail_Tab_Tools_Traceroute_Error": "",
"DevDetail_Tab_Tools_Traceroute_Start": "",
"DevDetail_Tab_Tools_Traceroute_Title": "",
"DevDetail_Tools_WOL": "",
"DevDetail_Tools_WOL_noti": "",
"DevDetail_Tools_WOL_noti_text": "",
"DevDetail_Type_hover": "",
"DevDetail_Vendor_hover": "",
"DevDetail_WOL_Title": "",
"DevDetail_button_AddIcon": "",
"DevDetail_button_AddIcon_Help": "",
"DevDetail_button_AddIcon_Tooltip": "",
"DevDetail_button_Delete": "",
"DevDetail_button_DeleteEvents": "",
"DevDetail_button_DeleteEvents_Warning": "",
"DevDetail_button_Delete_ask": "",
"DevDetail_button_OverwriteIcons": "",
"DevDetail_button_OverwriteIcons_Tooltip": "",
"DevDetail_button_OverwriteIcons_Warning": "",
"DevDetail_button_Reset": "",
"DevDetail_button_Save": "",
"DeviceEdit_ValidMacIp": "",
"Device_MultiEdit": "",
"Device_MultiEdit_Backup": "",
"Device_MultiEdit_Fields": "",
"Device_MultiEdit_MassActions": "",
"Device_MultiEdit_No_Devices": "",
"Device_MultiEdit_Tooltip": "",
"Device_Searchbox": "",
"Device_Shortcut_AllDevices": "",
"Device_Shortcut_AllNodes": "",
"Device_Shortcut_Archived": "",
"Device_Shortcut_Connected": "",
"Device_Shortcut_Devices": "",
"Device_Shortcut_DownAlerts": "",
"Device_Shortcut_DownOnly": "",
"Device_Shortcut_Favorites": "",
"Device_Shortcut_NewDevices": "",
"Device_Shortcut_OnlineChart": "",
"Device_TableHead_AlertDown": "",
"Device_TableHead_Connected_Devices": "",
"Device_TableHead_CustomProps": "",
"Device_TableHead_FQDN": "",
"Device_TableHead_Favorite": "",
"Device_TableHead_FirstSession": "",
"Device_TableHead_GUID": "",
"Device_TableHead_Group": "",
"Device_TableHead_Icon": "",
"Device_TableHead_LastIP": "",
"Device_TableHead_LastIPOrder": "",
"Device_TableHead_LastSession": "",
"Device_TableHead_Location": "",
"Device_TableHead_MAC": "",
"Device_TableHead_MAC_full": "",
"Device_TableHead_Name": "",
"Device_TableHead_NetworkSite": "",
"Device_TableHead_Owner": "",
"Device_TableHead_ParentRelType": "",
"Device_TableHead_Parent_MAC": "",
"Device_TableHead_Port": "",
"Device_TableHead_PresentLastScan": "",
"Device_TableHead_ReqNicsOnline": "",
"Device_TableHead_RowID": "",
"Device_TableHead_Rowid": "",
"Device_TableHead_SSID": "",
"Device_TableHead_SourcePlugin": "",
"Device_TableHead_Status": "",
"Device_TableHead_SyncHubNodeName": "",
"Device_TableHead_Type": "",
"Device_TableHead_Vendor": "",
"Device_Table_Not_Network_Device": "",
"Device_Table_info": "",
"Device_Table_nav_next": "",
"Device_Table_nav_prev": "",
"Device_Tablelenght": "",
"Device_Tablelenght_all": "",
"Device_Title": "",
"Devices_Filters": "",
"ENABLE_PLUGINS_description": "",
"ENABLE_PLUGINS_name": "",
"ENCRYPTION_KEY_description": "",
"ENCRYPTION_KEY_name": "",
"Email_display_name": "",
"Email_icon": "",
"Events_Loading": "",
"Events_Periodselect_All": "",
"Events_Periodselect_LastMonth": "",
"Events_Periodselect_LastWeek": "",
"Events_Periodselect_LastYear": "",
"Events_Periodselect_today": "",
"Events_Searchbox": "",
"Events_Shortcut_AllEvents": "",
"Events_Shortcut_DownAlerts": "",
"Events_Shortcut_Events": "",
"Events_Shortcut_MissSessions": "",
"Events_Shortcut_NewDevices": "",
"Events_Shortcut_Sessions": "",
"Events_Shortcut_VoidSessions": "",
"Events_TableHead_AdditionalInfo": "",
"Events_TableHead_Connection": "",
"Events_TableHead_Date": "",
"Events_TableHead_Device": "",
"Events_TableHead_Disconnection": "",
"Events_TableHead_Duration": "",
"Events_TableHead_DurationOrder": "",
"Events_TableHead_EventType": "",
"Events_TableHead_IP": "",
"Events_TableHead_IPOrder": "",
"Events_TableHead_Order": "",
"Events_TableHead_Owner": "",
"Events_TableHead_PendingAlert": "",
"Events_Table_info": "",
"Events_Table_nav_next": "",
"Events_Table_nav_prev": "",
"Events_Tablelenght": "",
"Events_Tablelenght_all": "",
"Events_Title": "",
"GRAPHQL_PORT_description": "",
"GRAPHQL_PORT_name": "",
"Gen_Action": "",
"Gen_Add": "",
"Gen_AddDevice": "",
"Gen_Add_All": "",
"Gen_All_Devices": "",
"Gen_AreYouSure": "",
"Gen_Backup": "",
"Gen_Cancel": "",
"Gen_Change": "",
"Gen_Copy": "",
"Gen_CopyToClipboard": "",
"Gen_DataUpdatedUITakesTime": "",
"Gen_Delete": "",
"Gen_DeleteAll": "",
"Gen_Description": "",
"Gen_Error": "",
"Gen_Filter": "",
"Gen_Generate": "",
"Gen_InvalidMac": "",
"Gen_LockedDB": "",
"Gen_NetworkMask": "",
"Gen_Offline": "",
"Gen_Okay": "",
"Gen_Online": "",
"Gen_Purge": "",
"Gen_ReadDocs": "",
"Gen_Remove_All": "",
"Gen_Remove_Last": "",
"Gen_Reset": "",
"Gen_Restore": "",
"Gen_Run": "",
"Gen_Save": "",
"Gen_Saved": "",
"Gen_Search": "",
"Gen_Select": "",
"Gen_SelectIcon": "",
"Gen_SelectToPreview": "",
"Gen_Selected_Devices": "",
"Gen_Subnet": "",
"Gen_Switch": "",
"Gen_Upd": "",
"Gen_Upd_Fail": "",
"Gen_Update": "",
"Gen_Update_Value": "",
"Gen_ValidIcon": "",
"Gen_Warning": "",
"Gen_Work_In_Progress": "",
"Gen_create_new_device": "",
"Gen_create_new_device_info": "",
"General_display_name": "",
"General_icon": "",
"HRS_TO_KEEP_NEWDEV_description": "",
"HRS_TO_KEEP_NEWDEV_name": "",
"HRS_TO_KEEP_OFFDEV_description": "",
"HRS_TO_KEEP_OFFDEV_name": "",
"LOADED_PLUGINS_description": "",
"LOADED_PLUGINS_name": "",
"LOG_LEVEL_description": "",
"LOG_LEVEL_name": "",
"Loading": "",
"Login_Box": "",
"Login_Default_PWD": "",
"Login_Info": "",
"Login_Psw-box": "",
"Login_Psw_alert": "",
"Login_Psw_folder": "",
"Login_Psw_new": "",
"Login_Psw_run": "",
"Login_Remember": "",
"Login_Remember_small": "",
"Login_Submit": "",
"Login_Toggle_Alert_headline": "",
"Login_Toggle_Info": "",
"Login_Toggle_Info_headline": "",
"Maint_PurgeLog": "",
"Maint_RestartServer": "",
"Maint_Restart_Server_noti_text": "",
"Maintenance_InitCheck": "",
"Maintenance_InitCheck_Checking": "",
"Maintenance_InitCheck_QuickSetupGuide": "",
"Maintenance_InitCheck_Success": "",
"Maintenance_ReCheck": "",
"Maintenance_Running_Version": "",
"Maintenance_Status": "",
"Maintenance_Title": "",
"Maintenance_Tool_DownloadConfig": "",
"Maintenance_Tool_DownloadConfig_text": "",
"Maintenance_Tool_DownloadWorkflows": "",
"Maintenance_Tool_DownloadWorkflows_text": "",
"Maintenance_Tool_ExportCSV": "",
"Maintenance_Tool_ExportCSV_noti": "",
"Maintenance_Tool_ExportCSV_noti_text": "",
"Maintenance_Tool_ExportCSV_text": "",
"Maintenance_Tool_ImportCSV": "",
"Maintenance_Tool_ImportCSV_noti": "",
"Maintenance_Tool_ImportCSV_noti_text": "",
"Maintenance_Tool_ImportCSV_text": "",
"Maintenance_Tool_ImportConfig_noti": "",
"Maintenance_Tool_ImportPastedCSV": "",
"Maintenance_Tool_ImportPastedCSV_noti_text": "",
"Maintenance_Tool_ImportPastedCSV_text": "",
"Maintenance_Tool_ImportPastedConfig": "",
"Maintenance_Tool_ImportPastedConfig_noti_text": "",
"Maintenance_Tool_ImportPastedConfig_text": "",
"Maintenance_Tool_arpscansw": "",
"Maintenance_Tool_arpscansw_noti": "",
"Maintenance_Tool_arpscansw_noti_text": "",
"Maintenance_Tool_arpscansw_text": "",
"Maintenance_Tool_backup": "",
"Maintenance_Tool_backup_noti": "",
"Maintenance_Tool_backup_noti_text": "",
"Maintenance_Tool_backup_text": "",
"Maintenance_Tool_check_visible": "",
"Maintenance_Tool_darkmode": "",
"Maintenance_Tool_darkmode_noti": "",
"Maintenance_Tool_darkmode_noti_text": "",
"Maintenance_Tool_darkmode_text": "",
"Maintenance_Tool_del_ActHistory": "",
"Maintenance_Tool_del_ActHistory_noti": "",
"Maintenance_Tool_del_ActHistory_noti_text": "",
"Maintenance_Tool_del_ActHistory_text": "",
"Maintenance_Tool_del_alldev": "",
"Maintenance_Tool_del_alldev_noti": "",
"Maintenance_Tool_del_alldev_noti_text": "",
"Maintenance_Tool_del_alldev_text": "",
"Maintenance_Tool_del_allevents": "",
"Maintenance_Tool_del_allevents30": "",
"Maintenance_Tool_del_allevents30_noti": "",
"Maintenance_Tool_del_allevents30_noti_text": "",
"Maintenance_Tool_del_allevents30_text": "",
"Maintenance_Tool_del_allevents_noti": "",
"Maintenance_Tool_del_allevents_noti_text": "",
"Maintenance_Tool_del_allevents_text": "",
"Maintenance_Tool_del_empty_macs": "",
"Maintenance_Tool_del_empty_macs_noti": "",
"Maintenance_Tool_del_empty_macs_noti_text": "",
"Maintenance_Tool_del_empty_macs_text": "",
"Maintenance_Tool_del_selecteddev": "",
"Maintenance_Tool_del_selecteddev_text": "",
"Maintenance_Tool_del_unknowndev": "",
"Maintenance_Tool_del_unknowndev_noti": "",
"Maintenance_Tool_del_unknowndev_noti_text": "",
"Maintenance_Tool_del_unknowndev_text": "",
"Maintenance_Tool_displayed_columns_text": "",
"Maintenance_Tool_drag_me": "",
"Maintenance_Tool_order_columns_text": "",
"Maintenance_Tool_purgebackup": "",
"Maintenance_Tool_purgebackup_noti": "",
"Maintenance_Tool_purgebackup_noti_text": "",
"Maintenance_Tool_purgebackup_text": "",
"Maintenance_Tool_restore": "",
"Maintenance_Tool_restore_noti": "",
"Maintenance_Tool_restore_noti_text": "",
"Maintenance_Tool_restore_text": "",
"Maintenance_Tool_upgrade_database_noti": "",
"Maintenance_Tool_upgrade_database_noti_text": "",
"Maintenance_Tool_upgrade_database_text": "",
"Maintenance_Tools_Tab_BackupRestore": "",
"Maintenance_Tools_Tab_Logging": "",
"Maintenance_Tools_Tab_Settings": "",
"Maintenance_Tools_Tab_Tools": "",
"Maintenance_Tools_Tab_UISettings": "",
"Maintenance_arp_status": "",
"Maintenance_arp_status_off": "",
"Maintenance_arp_status_on": "",
"Maintenance_built_on": "",
"Maintenance_current_version": "",
"Maintenance_database_backup": "",
"Maintenance_database_backup_found": "",
"Maintenance_database_backup_total": "",
"Maintenance_database_lastmod": "",
"Maintenance_database_path": "",
"Maintenance_database_rows": "",
"Maintenance_database_size": "",
"Maintenance_lang_selector_apply": "",
"Maintenance_lang_selector_empty": "",
"Maintenance_lang_selector_lable": "",
"Maintenance_lang_selector_text": "",
"Maintenance_new_version": "",
"Maintenance_themeselector_apply": "",
"Maintenance_themeselector_empty": "",
"Maintenance_themeselector_lable": "",
"Maintenance_themeselector_text": "",
"Maintenance_version": "",
"NETWORK_DEVICE_TYPES_description": "",
"NETWORK_DEVICE_TYPES_name": "",
"Navigation_About": "",
"Navigation_AppEvents": "",
"Navigation_Devices": "",
"Navigation_Donations": "",
"Navigation_Events": "",
"Navigation_Integrations": "",
"Navigation_Maintenance": "",
"Navigation_Monitoring": "",
"Navigation_Network": "",
"Navigation_Notifications": "",
"Navigation_Plugins": "",
"Navigation_Presence": "",
"Navigation_Report": "",
"Navigation_Settings": "",
"Navigation_SystemInfo": "",
"Navigation_Workflows": "",
"Network_Assign": "",
"Network_Cant_Assign": "",
"Network_Cant_Assign_No_Node_Selected": "",
"Network_Configuration_Error": "",
"Network_Connected": "",
"Network_Devices": "",
"Network_ManageAdd": "",
"Network_ManageAdd_Name": "",
"Network_ManageAdd_Name_text": "",
"Network_ManageAdd_Port": "",
"Network_ManageAdd_Port_text": "",
"Network_ManageAdd_Submit": "",
"Network_ManageAdd_Type": "",
"Network_ManageAdd_Type_text": "",
"Network_ManageAssign": "",
"Network_ManageDel": "",
"Network_ManageDel_Name": "",
"Network_ManageDel_Name_text": "",
"Network_ManageDel_Submit": "",
"Network_ManageDevices": "",
"Network_ManageEdit": "",
"Network_ManageEdit_ID": "",
"Network_ManageEdit_ID_text": "",
"Network_ManageEdit_Name": "",
"Network_ManageEdit_Name_text": "",
"Network_ManageEdit_Port": "",
"Network_ManageEdit_Port_text": "",
"Network_ManageEdit_Submit": "",
"Network_ManageEdit_Type": "",
"Network_ManageEdit_Type_text": "",
"Network_ManageLeaf": "",
"Network_ManageUnassign": "",
"Network_NoAssignedDevices": "",
"Network_NoDevices": "",
"Network_Node": "",
"Network_Node_Name": "",
"Network_Parent": "",
"Network_Root": "",
"Network_Root_Not_Configured": "",
"Network_Root_Unconfigurable": "",
"Network_ShowArchived": "",
"Network_ShowOffline": "",
"Network_Table_Hostname": "",
"Network_Table_IP": "",
"Network_Table_State": "",
"Network_Title": "",
"Network_UnassignedDevices": "",
"Notifications_All": "",
"Notifications_Mark_All_Read": "",
"PIALERT_WEB_PASSWORD_description": "",
"PIALERT_WEB_PASSWORD_name": "",
"PIALERT_WEB_PROTECTION_description": "",
"PIALERT_WEB_PROTECTION_name": "",
"PLUGINS_KEEP_HIST_description": "",
"PLUGINS_KEEP_HIST_name": "",
"Plugins_DeleteAll": "",
"Plugins_Filters_Mac": "",
"Plugins_History": "",
"Plugins_Obj_DeleteListed": "",
"Plugins_Objects": "",
"Plugins_Out_of": "",
"Plugins_Unprocessed_Events": "",
"Plugins_no_control": "",
"Presence_CalHead_day": "",
"Presence_CalHead_lang": "",
"Presence_CalHead_month": "",
"Presence_CalHead_quarter": "",
"Presence_CalHead_week": "",
"Presence_CalHead_year": "",
"Presence_CallHead_Devices": "",
"Presence_Key_OnlineNow": "",
"Presence_Key_OnlineNow_desc": "",
"Presence_Key_OnlinePast": "",
"Presence_Key_OnlinePastMiss": "",
"Presence_Key_OnlinePastMiss_desc": "",
"Presence_Key_OnlinePast_desc": "",
"Presence_Loading": "",
"Presence_Shortcut_AllDevices": "",
"Presence_Shortcut_Archived": "",
"Presence_Shortcut_Connected": "",
"Presence_Shortcut_Devices": "",
"Presence_Shortcut_DownAlerts": "",
"Presence_Shortcut_Favorites": "",
"Presence_Shortcut_NewDevices": "",
"Presence_Title": "",
"REFRESH_FQDN_description": "",
"REFRESH_FQDN_name": "",
"REPORT_DASHBOARD_URL_description": "",
"REPORT_DASHBOARD_URL_name": "",
"REPORT_ERROR": "",
"REPORT_MAIL_description": "",
"REPORT_MAIL_name": "",
"REPORT_TITLE": "",
"RandomMAC_hover": "",
"Reports_Sent_Log": "",
"SCAN_SUBNETS_description": "",
"SCAN_SUBNETS_name": "",
"SYSTEM_TITLE": "",
"Setting_Override": "",
"Setting_Override_Description": "",
"Settings_Metadata_Toggle": "",
"Settings_Show_Description": "",
"Settings_device_Scanners_desync": "",
"Settings_device_Scanners_desync_popup": "",
"Speedtest_Results": "",
"Systeminfo_AvailableIps": "",
"Systeminfo_CPU": "",
"Systeminfo_CPU_Cores": "",
"Systeminfo_CPU_Name": "",
"Systeminfo_CPU_Speed": "",
"Systeminfo_CPU_Temp": "",
"Systeminfo_CPU_Vendor": "",
"Systeminfo_Client_Resolution": "",
"Systeminfo_Client_User_Agent": "",
"Systeminfo_General": "",
"Systeminfo_General_Date": "",
"Systeminfo_General_Date2": "",
"Systeminfo_General_Full_Date": "",
"Systeminfo_General_TimeZone": "",
"Systeminfo_Memory": "",
"Systeminfo_Memory_Total_Memory": "",
"Systeminfo_Memory_Usage": "",
"Systeminfo_Memory_Usage_Percent": "",
"Systeminfo_Motherboard": "",
"Systeminfo_Motherboard_BIOS": "",
"Systeminfo_Motherboard_BIOS_Date": "",
"Systeminfo_Motherboard_BIOS_Vendor": "",
"Systeminfo_Motherboard_Manufactured": "",
"Systeminfo_Motherboard_Name": "",
"Systeminfo_Motherboard_Revision": "",
"Systeminfo_Network": "",
"Systeminfo_Network_Accept_Encoding": "",
"Systeminfo_Network_Accept_Language": "",
"Systeminfo_Network_Connection_Port": "",
"Systeminfo_Network_HTTP_Host": "",
"Systeminfo_Network_HTTP_Referer": "",
"Systeminfo_Network_HTTP_Referer_String": "",
"Systeminfo_Network_Hardware": "",
"Systeminfo_Network_Hardware_Interface_Mask": "",
"Systeminfo_Network_Hardware_Interface_Name": "",
"Systeminfo_Network_Hardware_Interface_RX": "",
"Systeminfo_Network_Hardware_Interface_TX": "",
"Systeminfo_Network_IP": "",
"Systeminfo_Network_IP_Connection": "",
"Systeminfo_Network_IP_Server": "",
"Systeminfo_Network_MIME": "",
"Systeminfo_Network_Request_Method": "",
"Systeminfo_Network_Request_Time": "",
"Systeminfo_Network_Request_URI": "",
"Systeminfo_Network_Secure_Connection": "",
"Systeminfo_Network_Secure_Connection_String": "",
"Systeminfo_Network_Server_Name": "",
"Systeminfo_Network_Server_Name_String": "",
"Systeminfo_Network_Server_Query": "",
"Systeminfo_Network_Server_Query_String": "",
"Systeminfo_Network_Server_Version": "",
"Systeminfo_Services": "",
"Systeminfo_Services_Description": "",
"Systeminfo_Services_Name": "",
"Systeminfo_Storage": "",
"Systeminfo_Storage_Device": "",
"Systeminfo_Storage_Mount": "",
"Systeminfo_Storage_Size": "",
"Systeminfo_Storage_Type": "",
"Systeminfo_Storage_Usage": "",
"Systeminfo_Storage_Usage_Free": "",
"Systeminfo_Storage_Usage_Mount": "",
"Systeminfo_Storage_Usage_Total": "",
"Systeminfo_Storage_Usage_Used": "",
"Systeminfo_System": "",
"Systeminfo_System_AVG": "",
"Systeminfo_System_Architecture": "",
"Systeminfo_System_Kernel": "",
"Systeminfo_System_OSVersion": "",
"Systeminfo_System_Running_Processes": "",
"Systeminfo_System_System": "",
"Systeminfo_System_Uname": "",
"Systeminfo_System_Uptime": "",
"Systeminfo_This_Client": "",
"Systeminfo_USB_Devices": "",
"TICKER_MIGRATE_TO_NETALERTX": "",
"TIMEZONE_description": "",
"TIMEZONE_name": "",
"UI_DEV_SECTIONS_description": "",
"UI_DEV_SECTIONS_name": "",
"UI_ICONS_description": "",
"UI_ICONS_name": "",
"UI_LANG_description": "",
"UI_LANG_name": "",
"UI_MY_DEVICES_description": "",
"UI_MY_DEVICES_name": "",
"UI_NOT_RANDOM_MAC_description": "",
"UI_NOT_RANDOM_MAC_name": "",
"UI_PRESENCE_description": "",
"UI_PRESENCE_name": "",
"UI_REFRESH_description": "",
"UI_REFRESH_name": "",
"VERSION_description": "",
"VERSION_name": "",
"WF_Action_Add": "",
"WF_Action_field": "",
"WF_Action_type": "",
"WF_Action_value": "",
"WF_Actions": "",
"WF_Add": "",
"WF_Add_Condition": "",
"WF_Add_Group": "",
"WF_Condition_field": "",
"WF_Condition_operator": "",
"WF_Condition_value": "",
"WF_Conditions": "",
"WF_Conditions_logic_rules": "",
"WF_Duplicate": "",
"WF_Enabled": "",
"WF_Export": "",
"WF_Export_Copy": "",
"WF_Import": "",
"WF_Import_Copy": "",
"WF_Name": "",
"WF_Remove": "",
"WF_Remove_Copy": "",
"WF_Save": "",
"WF_Trigger": "",
"WF_Trigger_event_type": "",
"WF_Trigger_type": "",
"add_icon_event_tooltip": "",
"add_option_event_tooltip": "",
"copy_icons_event_tooltip": "",
"devices_old": "",
"general_event_description": "",
"general_event_title": "",
"go_to_device_event_tooltip": "",
"go_to_node_event_tooltip": "",
"new_version_available": "",
"report_guid": "",
"report_guid_missing": "",
"report_select_format": "",
"report_time": "",
"run_event_tooltip": "",
"select_icon_event_tooltip": "",
"settings_core_icon": "",
"settings_core_label": "",
"settings_device_scanners": "",
"settings_device_scanners_icon": "",
"settings_device_scanners_info": "",
"settings_device_scanners_label": "",
"settings_enabled": "",
"settings_enabled_icon": "",
"settings_expand_all": "",
"settings_imported": "",
"settings_imported_label": "",
"settings_missing": "",
"settings_missing_block": "",
"settings_old": "",
"settings_other_scanners": "",
"settings_other_scanners_icon": "",
"settings_other_scanners_label": "",
"settings_publishers": "",
"settings_publishers_icon": "",
"settings_publishers_info": "",
"settings_publishers_label": "",
"settings_readonly": "",
"settings_saved": "",
"settings_system_icon": "",
"settings_system_label": "",
"settings_update_item_warning": "",
"test_event_tooltip": ""
}

View File

@@ -5,7 +5,7 @@
// ###################################
$defaultLang = "en_us";
$allLanguages = [ "ar_ar", "ca_ca", "cs_cz", "de_de", "en_us", "es_es", "fa_fa", "fr_fr", "it_it", "nb_no", "pl_pl", "pt_br", "pt_pt", "ru_ru", "sv_sv", "tr_tr", "uk_ua", "zh_cn"];
$allLanguages = [ "ar_ar", "ca_ca", "cs_cz", "de_de", "en_us", "es_es", "fa_fa", "fr_fr", "it_it", "ja_jp", "nb_no", "pl_pl", "pt_br", "pt_pt", "ru_ru", "sv_sv", "tr_tr", "uk_ua", "zh_cn"];
global $db;
@@ -23,6 +23,7 @@ switch($result){
case 'Farsi (fa_fa)': $pia_lang_selected = 'fa_fa'; break;
case 'French (fr_fr)': $pia_lang_selected = 'fr_fr'; break;
case 'Italian (it_it)': $pia_lang_selected = 'it_it'; break;
case 'Japanese (ja_jp)': $pia_lang_selected = 'ja_jp'; break;
case 'Norwegian (nb_no)': $pia_lang_selected = 'nb_no'; break;
case 'Polish (pl_pl)': $pia_lang_selected = 'pl_pl'; break;
case 'Portuguese (pt_br)': $pia_lang_selected = 'pt_br'; break;

View File

@@ -1,6 +1,6 @@
import json
import os
import sys
def merge_translations(main_file, other_files):
# Load main file
@@ -30,10 +30,14 @@ def merge_translations(main_file, other_files):
json.dump(data, f, indent=4, ensure_ascii=False)
f.truncate()
if __name__ == "__main__":
current_path = os.path.dirname(os.path.abspath(__file__))
# language codes can be found here: http://www.lingoes.net/en/translator/langcode.htm
# "en_us.json" has to be first!
json_files = [ "en_us.json", "ar_ar.json", "ca_ca.json", "cs_cz.json", "de_de.json", "es_es.json", "fa_fa.json", "fr_fr.json", "it_it.json", "nb_no.json", "pl_pl.json", "pt_br.json", "pt_pt.json", "ru_ru.json", "sv_sv.json", "tr_tr.json", "uk_ua.json", "zh_cn.json"]
# "en_us.json" has to be first!
json_files = ["en_us.json", "ar_ar.json", "ca_ca.json", "cs_cz.json", "de_de.json",
"es_es.json", "fa_fa.json", "fr_fr.json", "it_it.json", "ja_jp.json",
"nb_no.json", "pl_pl.json", "pt_br.json", "pt_pt.json", "ru_ru.json",
"sv_sv.json", "tr_tr.json", "uk_ua.json", "zh_cn.json"]
file_paths = [os.path.join(current_path, file) for file in json_files]
merge_translations(file_paths[0], file_paths[1:])
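A hedged usage sketch: after adding the new, initially empty `ja_jp.json` next to the other language files, running the script once syncs any missing keys from `en_us.json` into every file in the list (the script filename here is an assumption for illustration):
```bash
# run from the language folder; script name is hypothetical
cd front/php/templates/language
python3 merge_translations.py
```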

View File

@@ -8,12 +8,12 @@ from pytz import timezone
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from const import logPath
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
from helper import get_setting_value
from const import logPath # noqa: E402, E261 [flake8 lint suppression]
from plugin_helper import Plugin_Objects # noqa: E402, E261 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402, E261 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402, E261 [flake8 lint suppression]
import conf
import conf # noqa: E402, E261 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -32,7 +32,6 @@ RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
plugin_objects = Plugin_Objects(RESULT_FILE)
def main():
mylog('verbose', [f'[{pluginName}] In script'])
@@ -69,7 +68,7 @@ def main():
# helpVal2 = "Something1", # If you need to use even only 1, add the remaining ones too
# helpVal3 = "Something1", # and set them to 'null'. Check the docs for details:
# helpVal4 = "Something1", # https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS_DEV.md
)
)
mylog('verbose', [f'[{pluginName}] New entries: "{len(device_data)}"'])
@@ -78,6 +77,7 @@ def main():
return 0
# retrieve data
def get_device_data(some_setting):
@@ -116,5 +116,6 @@ def get_device_data(some_setting):
# Return the data to be detected by the main application
return device_data
if __name__ == '__main__':
main()

View File

@@ -11,10 +11,10 @@ INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
# NetAlertX modules
from const import logPath
from plugin_helper import Plugin_Objects
from logger import mylog
from helper import get_setting_value
from const import logPath # noqa: E402 [flake8 lint suppression]
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
pluginName = 'TESTONLY'
@@ -28,10 +28,7 @@ plugin_objects = Plugin_Objects(RESULT_FILE)
md5_hash = hashlib.md5()
# globals
def main():
# START
mylog('verbose', [f'[{pluginName}] In script'])
@@ -43,7 +40,6 @@ def main():
# result = cleanDeviceName(str, True)
regexes = get_setting_value('NEWDEV_NAME_CLEANUP_REGEX')
print(regexes)
subnets = get_setting_value('SCAN_SUBNETS')
@@ -57,16 +53,12 @@ def main():
mylog('trace', ["[cleanDeviceName] name after regex : " + str])
mylog('debug', ["[cleanDeviceName] output: " + str])
# SPACE FOR TESTING 🔼
# END
mylog('verbose', [f'[{pluginName}] result "{str}"'])
# -------------INIT---------------------
if __name__ == '__main__':
sys.exit(main())

View File

@@ -9,15 +9,15 @@ import sys
INSTALL_PATH = os.getenv("NETALERTX_APP", "/app")
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
import conf
from const import confFileName, logPath
from utils.datetime_utils import timeNowDB
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
from helper import timeNowTZ, get_setting_value
from models.notification_instance import NotificationInstance
from database import DB
from pytz import timezone
import conf # noqa: E402 [flake8 lint suppression]
from const import confFileName, logPath # noqa: E402 [flake8 lint suppression]
from utils.datetime_utils import timeNowDB # noqa: E402 [flake8 lint suppression]
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from models.notification_instance import NotificationInstance # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value("TIMEZONE"))
@@ -35,13 +35,8 @@ def main():
mylog("verbose", [f"[{pluginName}](publisher) In script"])
# Check if basic config settings supplied
if check_config() == False:
mylog(
"none",
[
f"[{pluginName}] ⚠ ERROR: Publisher notification gateway not set up correctly. Check your {confFileName} {pluginName}_* variables."
],
)
if check_config() is False:
mylog("none", f"[{pluginName}] ⚠ ERROR: Publisher notification gateway not set up correctly. Check your {confFileName} {pluginName}_* variables.")
return
# Create a database connection
@@ -80,8 +75,7 @@ def main():
# -------------------------------------------------------------------------------
def check_config():
if get_setting_value("APPRISE_HOST") == "" or (
get_setting_value("APPRISE_URL") == ""
and get_setting_value("APPRISE_TAG") == ""
get_setting_value("APPRISE_URL") == "" and get_setting_value("APPRISE_TAG") == ""
):
return False
else:

View File

@@ -16,15 +16,15 @@ INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
# NetAlertX modules
import conf
from const import confFileName, logPath
from plugin_helper import Plugin_Objects
from utils.datetime_utils import timeNowDB
from logger import mylog, Logger
from helper import timeNowTZ, get_setting_value, hide_email
from models.notification_instance import NotificationInstance
from database import DB
from pytz import timezone
import conf # noqa: E402 [flake8 lint suppression]
from const import confFileName, logPath # noqa: E402 [flake8 lint suppression]
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from utils.datetime_utils import timeNowDB # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value, hide_email # noqa: E402 [flake8 lint suppression]
from models.notification_instance import NotificationInstance # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -38,13 +38,12 @@ LOG_PATH = logPath + '/plugins'
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
def main():
mylog('verbose', [f'[{pluginName}](publisher) In script'])
# Check if basic config settings supplied
if check_config() == False:
if check_config() is False:
mylog('none', [f'[{pluginName}] ⚠ ERROR: Publisher notification gateway not set up correctly. Check your {confFileName} {pluginName}_* variables.'])
return
@@ -72,7 +71,6 @@ def main():
# mylog('verbose', [f'[{pluginName}] SMTP_REPORT_TO: ', get_setting_value("SMTP_REPORT_TO")])
# mylog('verbose', [f'[{pluginName}] SMTP_REPORT_FROM: ', get_setting_value("SMTP_REPORT_FROM")])
# Process the new notifications (see the Notifications DB table for structure or check the /php/server/query_json.php?file=table_notifications.json endpoint)
for notification in new_notifications:
@@ -93,8 +91,9 @@ def main():
plugin_objects.write_result_file()
#-------------------------------------------------------------------------------
def check_config ():
# -------------------------------------------------------------------------------
def check_config():
server = get_setting_value('SMTP_SERVER')
report_to = get_setting_value("SMTP_REPORT_TO")
@@ -106,12 +105,19 @@ def check_config ():
else:
return True
#-------------------------------------------------------------------------------
# -------------------------------------------------------------------------------
def send(pHTML, pText):
mylog('debug', [f'[{pluginName}] SMTP_REPORT_TO: {hide_email(str(get_setting_value("SMTP_REPORT_TO")))} SMTP_USER: {hide_email(str(get_setting_value("SMTP_USER")))}'])
subject, from_email, to_email, message_html, message_text = sanitize_email_content(str(get_setting_value("SMTP_SUBJECT")), get_setting_value("SMTP_REPORT_FROM"), get_setting_value("SMTP_REPORT_TO"), pHTML, pText)
subject, from_email, to_email, message_html, message_text = sanitize_email_content(
str(get_setting_value("SMTP_SUBJECT")),
get_setting_value("SMTP_REPORT_FROM"),
get_setting_value("SMTP_REPORT_TO"),
pHTML,
pText
)
emails = []
@@ -134,8 +140,8 @@ def send(pHTML, pText):
msg['To'] = mail_addr
msg['Date'] = formatdate(localtime=True)
msg.attach (MIMEText (message_text, 'plain'))
msg.attach (MIMEText (message_html, 'html'))
msg.attach(MIMEText(message_text, 'plain'))
msg.attach(MIMEText(message_html, 'html'))
# Set a timeout for the SMTP connection (in seconds)
smtp_timeout = 30
@@ -144,12 +150,12 @@ def send(pHTML, pText):
if get_setting_value("LOG_LEVEL") == 'debug':
send_email(msg,smtp_timeout)
send_email(msg, smtp_timeout)
else:
try:
send_email(msg,smtp_timeout)
send_email(msg, smtp_timeout)
except smtplib.SMTPAuthenticationError as e:
mylog('none', [' ERROR: Couldn\'t connect to the SMTP server (SMTPAuthenticationError)'])
@@ -166,8 +172,9 @@ def send(pHTML, pText):
mylog('none', [' ERROR: Are you sure you need SMTP_FORCE_SSL enabled? Check your SMTP provider docs.'])
mylog('none', [' ERROR: ', str(e)])
# ----------------------------------------------------------------------------------
def send_email(msg,smtp_timeout):
def send_email(msg, smtp_timeout):
# Send mail
if get_setting_value('SMTP_FORCE_SSL'):
mylog('debug', ['SMTP_FORCE_SSL == True so using .SMTP_SSL()'])
@@ -182,10 +189,10 @@ def send_email(msg,smtp_timeout):
mylog('debug', ['SMTP_FORCE_SSL == False so using .SMTP()'])
if get_setting_value("SMTP_PORT") == 0:
mylog('debug', ['SMTP_PORT == 0 so sending .SMTP(SMTP_SERVER)'])
smtp_connection = smtplib.SMTP (get_setting_value('SMTP_SERVER'))
smtp_connection = smtplib.SMTP(get_setting_value('SMTP_SERVER'))
else:
mylog('debug', ['SMTP_PORT != 0 so sending .SMTP(SMTP_SERVER, SMTP_PORT)'])
smtp_connection = smtplib.SMTP (get_setting_value('SMTP_SERVER'), get_setting_value('SMTP_PORT'))
smtp_connection = smtplib.SMTP(get_setting_value('SMTP_SERVER'), get_setting_value('SMTP_PORT'))
mylog('debug', ['Setting SMTP debug level'])
@@ -193,7 +200,7 @@ def send_email(msg,smtp_timeout):
if get_setting_value('LOG_LEVEL') == 'debug':
smtp_connection.set_debuglevel(1)
mylog('debug', [ 'Sending .ehlo()'])
mylog('debug', ['Sending .ehlo()'])
smtp_connection.ehlo()
if not get_setting_value('SMTP_SKIP_TLS'):
@@ -203,12 +210,13 @@ def send_email(msg,smtp_timeout):
smtp_connection.ehlo()
if not get_setting_value('SMTP_SKIP_LOGIN'):
mylog('debug', ['SMTP_SKIP_LOGIN == False so sending .login()'])
smtp_connection.login (get_setting_value('SMTP_USER'), get_setting_value('SMTP_PASS'))
smtp_connection.login(get_setting_value('SMTP_USER'), get_setting_value('SMTP_PASS'))
mylog('debug', ['Sending .sendmail()'])
smtp_connection.sendmail (get_setting_value("SMTP_REPORT_FROM"), get_setting_value("SMTP_REPORT_TO"), msg.as_string())
smtp_connection.sendmail(get_setting_value("SMTP_REPORT_FROM"), get_setting_value("SMTP_REPORT_TO"), msg.as_string())
smtp_connection.quit()
# ----------------------------------------------------------------------------------
def sanitize_email_content(subject, from_email, to_email, message_html, message_text):
# Validate and sanitize subject
@@ -229,6 +237,7 @@ def sanitize_email_content(subject, from_email, to_email, message_html, message_
return subject, from_email, to_email, message_html, message_text
# ----------------------------------------------------------------------------------
if __name__ == '__main__':
sys.exit(main())
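For reference, the send path this diff reformats (a plain-text part plus an HTML part in a multipart/alternative message, delivered with an explicit connection timeout) can be reproduced standalone. A minimal sketch; the server, port, credentials, and addresses are placeholder assumptions:

import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.utils import formatdate

def build_message(subject, from_email, to_email, text, html):
    # multipart/alternative lets the client pick the richest part it supports
    msg = MIMEMultipart('alternative')
    msg['Subject'] = subject
    msg['From'] = from_email
    msg['To'] = to_email
    msg['Date'] = formatdate(localtime=True)
    msg.attach(MIMEText(text, 'plain'))  # fallback part first
    msg.attach(MIMEText(html, 'html'))   # preferred part last
    return msg

def send_message(msg, server='smtp.example.com', port=587, timeout=30):
    # the timeout bounds the TCP connect and each subsequent socket operation
    with smtplib.SMTP(server, port, timeout=timeout) as conn:
        conn.ehlo()
        conn.starttls()
        conn.ehlo()
        conn.login('user', 'password')  # placeholder credentials
        conn.send_message(msg)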

View File

@@ -18,15 +18,14 @@ INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
# NetAlertX modules
import conf
from const import confFileName, logPath
from utils.plugin_utils import getPluginObject
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
import conf # noqa: E402 [flake8 lint suppression]
from const import confFileName, logPath # noqa: E402 [flake8 lint suppression]
from utils.plugin_utils import getPluginObject # noqa: E402 [flake8 lint suppression]
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value, bytes_to_string, \
sanitize_string, normalize_string
from utils.datetime_utils import timeNowDB
from database import DB, get_device_stats
sanitize_string, normalize_string # noqa: E402 [flake8 lint suppression]
from database import DB, get_device_stats # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
@@ -234,7 +233,6 @@ class sensor_config:
Store the sensor configuration in the global plugin_objects, which tracks sensors based on a unique combination
of attributes including deviceId, sensorName, hash, and MAC.
"""
global plugin_objects
# Add the sensor to the global plugin_objects
plugin_objects.add_object(
@@ -287,11 +285,11 @@ def publish_mqtt(mqtt_client, topic, message):
# mylog('verbose', [f"[{pluginName}] mqtt_client.is_connected(): {mqtt_client.is_connected()} "])
result = mqtt_client.publish(
topic=topic,
payload=message,
qos=qos,
retain=True,
)
topic=topic,
payload=message,
qos=qos,
retain=True,
)
status = result[0]
@@ -303,6 +301,7 @@ def publish_mqtt(mqtt_client, topic, message):
time.sleep(0.1)
return True
# ------------------------------------------------------------------------------
# Create a generic device for overall stats
def create_generic_device(mqtt_client, deviceId, deviceName):
@@ -318,7 +317,6 @@ def create_generic_device(mqtt_client, deviceId, deviceName):
# ------------------------------------------------------------------------------
# Register sensor config on the broker
def create_sensor(mqtt_client, deviceId, deviceName, sensorType, sensorName, icon, mac=""):
global mqtt_sensors
# check previous configs
sensorConfig = sensor_config(deviceId, deviceName, sensorType, sensorName, icon, mac)
@@ -429,12 +427,11 @@ def mqtt_create_client():
# -----------------------------------------------------------------------------
def mqtt_start(db):
global mqtt_client, mqtt_connected_to_broker
global mqtt_client
if not mqtt_connected_to_broker:
mqtt_client = mqtt_create_client()
deviceName = get_setting_value('MQTT_DEVICE_NAME')
deviceId = get_setting_value('MQTT_DEVICE_ID')
@@ -449,16 +446,18 @@ def mqtt_start(db):
row = get_device_stats(db)
# Publish (wrap into {} and remove last ',' from above)
publish_mqtt(mqtt_client, f"{topic_root}/sensor/{deviceId}/state",
{
"online": row[0],
"down": row[1],
"all": row[2],
"archived": row[3],
"new": row[4],
"unknown": row[5]
}
)
publish_mqtt(
mqtt_client,
f"{topic_root}/sensor/{deviceId}/state",
{
"online": row[0],
"down": row[1],
"all": row[2],
"archived": row[3],
"new": row[4],
"unknown": row[5]
}
)
# Generate device-specific MQTT messages if enabled
if get_setting_value('MQTT_SEND_DEVICES'):
@@ -466,11 +465,11 @@ def mqtt_start(db):
# Specific devices processing
# Get all devices
devices = db.read(get_setting_value('MQTT_DEVICES_SQL').replace('{s-quote}',"'"))
devices = db.read(get_setting_value('MQTT_DEVICES_SQL').replace('{s-quote}', "'"))
sec_delay = len(devices) * int(get_setting_value('MQTT_DELAY_SEC'))*5
sec_delay = len(devices) * int(get_setting_value('MQTT_DELAY_SEC')) * 5
mylog('verbose', [f"[{pluginName}] Estimated delay: ", (sec_delay), 's ', '(', round(sec_delay/60, 1), 'min)'])
mylog('verbose', [f"[{pluginName}] Estimated delay: ", (sec_delay), 's ', '(', round(sec_delay / 60, 1), 'min)'])
for device in devices:
@@ -495,27 +494,29 @@ def mqtt_start(db):
# handle device_tracker
# IMPORTANT: shared payload - device_tracker attributes and individual sensors
devJson = {
"last_ip": device["devLastIP"],
"is_new": str(device["devIsNew"]),
"alert_down": str(device["devAlertDown"]),
"vendor": sanitize_string(device["devVendor"]),
"mac_address": str(device["devMac"]),
"model": devDisplayName,
"last_connection": prepTimeStamp(str(device["devLastConnection"])),
"first_connection": prepTimeStamp(str(device["devFirstConnection"])),
"sync_node": device["devSyncHubNode"],
"group": device["devGroup"],
"location": device["devLocation"],
"network_parent_mac": device["devParentMAC"],
"network_parent_name": next((dev["devName"] for dev in devices if dev["devMAC"] == device["devParentMAC"]), "")
}
"last_ip": device["devLastIP"],
"is_new": str(device["devIsNew"]),
"alert_down": str(device["devAlertDown"]),
"vendor": sanitize_string(device["devVendor"]),
"mac_address": str(device["devMac"]),
"model": devDisplayName,
"last_connection": prepTimeStamp(str(device["devLastConnection"])),
"first_connection": prepTimeStamp(str(device["devFirstConnection"])),
"sync_node": device["devSyncHubNode"],
"group": device["devGroup"],
"location": device["devLocation"],
"network_parent_mac": device["devParentMAC"],
"network_parent_name": next((dev["devName"] for dev in devices if dev["devMAC"] == device["devParentMAC"]), "")
}
# bulk update device sensors in home assistant
publish_mqtt(mqtt_client, sensorConfig.state_topic, devJson) # REQUIRED, DON'T DELETE
# create and update is_present sensor
sensorConfig = create_sensor(mqtt_client, deviceId, devDisplayName, 'binary_sensor', 'is_present', 'wifi', device["devMac"])
publish_mqtt(mqtt_client, sensorConfig.state_topic,
publish_mqtt(
mqtt_client,
sensorConfig.state_topic,
{
"is_present": to_binary_sensor(str(device["devPresentLastScan"]))
}
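The publish_mqtt re-indentation above leaves the paho-mqtt call itself untouched; as a standalone illustration of the retained-publish-and-check pattern it wraps (broker address and topic are placeholders; the 1.x-style constructor is assumed):

import json
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.connect('broker.example.local', 1883)
client.loop_start()

payload = {"online": 5, "down": 1}
# retain=True makes the broker replay the last state to late subscribers
result = client.publish('netalertx/sensor/demo/state',
                        json.dumps(payload), qos=1, retain=True)
if result[0] == mqtt.MQTT_ERR_SUCCESS:  # result unpacks as (rc, mid)
    print('published')
else:
    print(f'publish failed, rc={result[0]}')

client.loop_stop()
client.disconnect()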

View File

@@ -1,4 +1,3 @@
#!/usr/bin/env python
import json
@@ -11,15 +10,15 @@ from base64 import b64encode
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
import conf
from const import confFileName, logPath
from plugin_helper import Plugin_Objects, handleEmpty
from utils.datetime_utils import timeNowDB
from logger import mylog, Logger
from helper import get_setting_value
from models.notification_instance import NotificationInstance
from database import DB
from pytz import timezone
import conf # noqa: E402 [flake8 lint suppression]
from const import confFileName, logPath # noqa: E402 [flake8 lint suppression]
from plugin_helper import Plugin_Objects, handleEmpty # noqa: E402 [flake8 lint suppression]
from utils.datetime_utils import timeNowDB # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from models.notification_instance import NotificationInstance # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -33,13 +32,12 @@ LOG_PATH = logPath + '/plugins'
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
def main():
mylog('verbose', [f'[{pluginName}](publisher) In script'])
# Check if basic config settings supplied
if check_config() == False:
if check_config() is False:
mylog('none', [f'[{pluginName}] ⚠ ERROR: Publisher notification gateway not set up correctly. Check your {confFileName} {pluginName}_* variables.'])
return
@@ -77,15 +75,15 @@ def main():
plugin_objects.write_result_file()
#-------------------------------------------------------------------------------
# -------------------------------------------------------------------------------
def check_config():
if get_setting_value('NTFY_HOST') == '' or get_setting_value('NTFY_TOPIC') == '':
return False
else:
return True
#-------------------------------------------------------------------------------
# -------------------------------------------------------------------------------
def send(html, text):
response_text = ''
@@ -100,7 +98,7 @@ def send(html, text):
# prepare request headers
headers = {
"Title": "NetAlertX Notification",
"Actions": "view, Open Dashboard, "+ get_setting_value('REPORT_DASHBOARD_URL'),
"Actions": "view, Open Dashboard, " + get_setting_value('REPORT_DASHBOARD_URL'),
"Priority": get_setting_value('NTFY_PRIORITY'),
"Tags": "warning"
}
@@ -109,18 +107,21 @@ def send(html, text):
if token != '':
headers["Authorization"] = "Bearer {}".format(token)
elif user != "" and pwd != "":
# Generate hash for basic auth
# Generate hash for basic auth
basichash = b64encode(bytes(user + ':' + pwd, "utf-8")).decode("ascii")
# add authorization header with hash
# add authorization header with hash
headers["Authorization"] = "Basic {}".format(basichash)
# call NTFY service
try:
response = requests.post("{}/{}".format( get_setting_value('NTFY_HOST'),
get_setting_value('NTFY_TOPIC')),
data = text,
headers = headers,
verify = verify_ssl)
response = requests.post("{}/{}".format(
get_setting_value('NTFY_HOST'),
get_setting_value('NTFY_TOPIC')),
data = text,
headers = headers,
verify = verify_ssl,
timeout = get_setting_value('NTFY_RUN_TIMEOUT')
)
response_status_code = response.status_code
@@ -142,4 +143,3 @@ def send(html, text):
if __name__ == '__main__':
sys.exit(main())
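The timeout added to the request above is the substantive change in this hunk; stripped of settings plumbing, the whole send amounts to one authenticated POST. A minimal sketch; host, topic, and token are placeholder assumptions:

import requests

host, topic, token = 'https://ntfy.example.com', 'netalertx', 'tk_placeholder'
headers = {
    "Title": "NetAlertX Notification",
    "Priority": "3",
    "Tags": "warning",
    "Authorization": f"Bearer {token}",  # or "Basic " + base64(user:pass)
}
# the timeout keeps a slow or unreachable ntfy host from hanging the publisher
response = requests.post(f"{host}/{topic}", data="test message",
                         headers=headers, verify=True, timeout=10)
print(response.status_code, response.text)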

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env python3
import conf
from const import confFileName, logPath
from const import logPath
from pytz import timezone
import os
@@ -12,12 +12,12 @@ import requests
INSTALL_PATH = os.getenv("NETALERTX_APP", "/app")
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects, handleEmpty # noqa: E402
from logger import mylog, Logger # noqa: E402
from helper import get_setting_value, hide_string # noqa: E402
from utils.datetime_utils import timeNowDB
from models.notification_instance import NotificationInstance # noqa: E402
from database import DB # noqa: E402
from plugin_helper import Plugin_Objects, handleEmpty # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value, hide_string # noqa: E402 [flake8 lint suppression]
from utils.datetime_utils import timeNowDB # noqa: E402 [flake8 lint suppression]
from models.notification_instance import NotificationInstance # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value("TIMEZONE"))
@@ -36,11 +36,7 @@ def main():
# Check if basic config settings supplied
if not validate_config():
mylog(
"none",
f"[{pluginName}] ⚠ ERROR: Publisher notification gateway not set up correctly. "
f"Check your {confFileName} {pluginName}_* variables.",
)
mylog("none", f"[{pluginName}] ⚠ ERROR: Publisher not set up correctly. Check your {pluginName}_* variables.",)
return
# Create a database connection

View File

@@ -1,6 +1,4 @@
#!/usr/bin/env python
import json
import os
import sys
@@ -10,15 +8,15 @@ import requests
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
import conf
from const import confFileName, logPath
from plugin_helper import Plugin_Objects, handleEmpty
from logger import mylog, Logger
from helper import get_setting_value, hide_string
from utils.datetime_utils import timeNowDB
from models.notification_instance import NotificationInstance
from database import DB
from pytz import timezone
import conf # noqa: E402 [flake8 lint suppression]
from const import confFileName, logPath # noqa: E402 [flake8 lint suppression]
from plugin_helper import Plugin_Objects, handleEmpty # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value, hide_string # noqa: E402 [flake8 lint suppression]
from utils.datetime_utils import timeNowDB # noqa: E402 [flake8 lint suppression]
from models.notification_instance import NotificationInstance # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -32,13 +30,12 @@ LOG_PATH = logPath + '/plugins'
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
def main():
mylog('verbose', [f'[{pluginName}](publisher) In script'])
# Check if basic config settings supplied
if check_config() == False:
if check_config() is False:
mylog('none', [f'[{pluginName}] ⚠ ERROR: Publisher notification gateway not set up correctly. Check your {confFileName} {pluginName}_* variables.'])
return
@@ -76,8 +73,7 @@ def main():
plugin_objects.write_result_file()
#-------------------------------------------------------------------------------
# -------------------------------------------------------------------------------
def send(text):
response_text = ''
@@ -87,7 +83,6 @@ def send(text):
mylog('verbose', [f'[{pluginName}] PUSHSAFER_TOKEN: "{hide_string(token)}"'])
try:
url = 'https://www.pushsafer.com/api'
post_fields = {
@@ -101,12 +96,10 @@ def send(text):
"u" : get_setting_value('REPORT_DASHBOARD_URL'),
"ut" : 'Open NetAlertX',
"k" : token,
}
response = requests.post(url, data=post_fields)
}
response = requests.post(url, data=post_fields, timeout=get_setting_value("PUSHSAFER_RUN_TIMEOUT"))
response_status_code = response.status_code
# Check if the request was successful (status code 200)
if response_status_code == 200:
response_text = response.text # This captures the response body/message
@@ -120,21 +113,17 @@ def send(text):
return response_text, response_status_code
return response_text, response_status_code
#-------------------------------------------------------------------------------
# -------------------------------------------------------------------------------
def check_config():
if get_setting_value('PUSHSAFER_TOKEN') == 'ApiKey':
return False
else:
return True
if get_setting_value('PUSHSAFER_TOKEN') == 'ApiKey':
return False
else:
return True
# -------------------------------------------------------
if __name__ == '__main__':
sys.exit(main())
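Incidentally, the re-indented check_config above still spells the boolean out; it collapses to a single expression (a stylistic suggestion, not part of the commit):

def check_config():
    # 'ApiKey' appears to be the unconfigured default, so anything else means a token was set
    return get_setting_value('PUSHSAFER_TOKEN') != 'ApiKey'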

View File

@@ -8,15 +8,15 @@ import sys
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
import conf
from const import confFileName, logPath
from plugin_helper import Plugin_Objects
from utils.datetime_utils import timeNowDB
from logger import mylog, Logger
from helper import get_setting_value
from models.notification_instance import NotificationInstance
from database import DB
from pytz import timezone
import conf # noqa: E402 [flake8 lint suppression]
from const import confFileName, logPath # noqa: E402 [flake8 lint suppression]
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from utils.datetime_utils import timeNowDB # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from models.notification_instance import NotificationInstance # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -30,13 +30,11 @@ LOG_PATH = logPath + '/plugins'
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
def main():
mylog('verbose', [f'[{pluginName}](publisher) In script'])
# Check if basic config settings supplied
if check_config() == False:
if check_config() is False:
mylog('none', [
f'[{pluginName}] ⚠ ERROR: Publisher notification gateway not set up correctly. Check your {confFileName} {pluginName}_* variables.'])
return

View File

@@ -1,4 +1,3 @@
#!/usr/bin/env python
import json
@@ -13,15 +12,15 @@ INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
import conf
from const import logPath, confFileName
from plugin_helper import Plugin_Objects, handleEmpty
from utils.datetime_utils import timeNowDB
from logger import mylog, Logger
from helper import get_setting_value, write_file
from models.notification_instance import NotificationInstance
from database import DB
from pytz import timezone
import conf # noqa: E402 [flake8 lint suppression]
from const import logPath, confFileName # noqa: E402 [flake8 lint suppression]
from plugin_helper import Plugin_Objects, handleEmpty # noqa: E402 [flake8 lint suppression]
from utils.datetime_utils import timeNowDB # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value, write_file # noqa: E402 [flake8 lint suppression]
from models.notification_instance import NotificationInstance # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -35,13 +34,12 @@ LOG_PATH = logPath + '/plugins'
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
def main():
mylog('verbose', [f'[{pluginName}](publisher) In script'])
# Check if basic config settings supplied
if check_config() == False:
if check_config() is False:
mylog('none', [f'[{pluginName}] ⚠ ERROR: Publisher notification gateway not set up correctly. Check your {confFileName} {pluginName}_* variables.'])
return
@@ -62,7 +60,11 @@ def main():
for notification in new_notifications:
# Send notification
response_stdout, response_stderr = send(notification["Text"], notification["HTML"], notification["JSON"])
response_stdout, response_stderr = send(
notification["Text"],
notification["HTML"],
notification["JSON"]
)
# Log result
plugin_objects.add_object(
@@ -79,16 +81,16 @@ def main():
plugin_objects.write_result_file()
#-------------------------------------------------------------------------------
# -------------------------------------------------------------------------------
def check_config():
if get_setting_value('WEBHOOK_URL') == '':
return False
else:
return True
if get_setting_value('WEBHOOK_URL') == '':
return False
else:
return True
#-------------------------------------------------------------------------------
def send (text_data, html_data, json_data):
# -------------------------------------------------------------------------------
def send(text_data, html_data, json_data):
response_stderr = ''
response_stdout = ''
@@ -139,33 +141,36 @@ def send (text_data, html_data, json_data):
payloadData = text_data
# Define slack-compatible payload
_json_payload = { "text": payloadData } if payloadType == 'text' else {
"username": "NetAlertX",
"text": "There are new notifications",
"attachments": [{
"title": "NetAlertX Notifications",
"title_link": get_setting_value('REPORT_DASHBOARD_URL'),
"text": payloadData
}]
}
if payloadType == 'text':
_json_payload = {"text": payloadData}
else:
_json_payload = {
"username": "NetAlertX",
"text": "There are new notifications",
"attachments": [{
"title": "NetAlertX Notifications",
"title_link": get_setting_value('REPORT_DASHBOARD_URL'),
"text": payloadData
}]
}
# DEBUG - Write the json payload into a log file for debugging
write_file (logPath + '/webhook_payload.json', json.dumps(_json_payload))
write_file(logPath + '/webhook_payload.json', json.dumps(_json_payload))
# Using the Slack-Compatible Webhook endpoint for Discord so that the same payload can be used for both
# Consider: curl has the ability to load in data to POST from a file + piping
if(endpointUrl.startswith('https://discord.com/api/webhooks/') and not endpointUrl.endswith("/slack")):
if (endpointUrl.startswith('https://discord.com/api/webhooks/') and not endpointUrl.endswith("/slack")):
_WEBHOOK_URL = f"{endpointUrl}/slack"
curlParams = ["curl","-i","-H", "Content-Type:application/json" ,"-d", json.dumps(_json_payload), _WEBHOOK_URL]
curlParams = ["curl", "-i", "-H", "Content-Type:application/json", "-d", json.dumps(_json_payload), _WEBHOOK_URL]
else:
_WEBHOOK_URL = endpointUrl
curlParams = ["curl","-i","-X", requestMethod , "-H", "Content-Type:application/json", "-d", json.dumps(_json_payload), _WEBHOOK_URL]
curlParams = ["curl", "-i", "-X", requestMethod , "-H", "Content-Type:application/json", "-d", json.dumps(_json_payload), _WEBHOOK_URL]
# Add HMAC signature if configured
if(secret != ''):
if (secret != ''):
h = hmac.new(secret.encode("UTF-8"), json.dumps(_json_payload, separators=(',', ':')).encode(), hashlib.sha256).hexdigest()
curlParams.insert(4,"-H")
curlParams.insert(5,f"X-Webhook-Signature: sha256={h}")
curlParams.insert(4, "-H")
curlParams.insert(5, f"X-Webhook-Signature: sha256={h}")
try:
# Execute CURL call
@@ -179,18 +184,15 @@ def send (text_data, html_data, json_data):
mylog('debug', [f'[{pluginName}] stdout: ', response_stdout])
mylog('debug', [f'[{pluginName}] stderr: ', response_stderr])
except subprocess.CalledProcessError as e:
# An error occurred, handle it
mylog('none', [f'[{pluginName}] ⚠ ERROR: ', e.output])
response_stderr = e.output
return response_stdout, response_stderr
# -------------------------------------------------------
# -------------------------------------------------------
if __name__ == '__main__':
sys.exit(main())
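The X-Webhook-Signature header built above implies a mirror-image check on the receiving end; a minimal verification sketch (only the header name and the sha256= prefix come from the code above; the secret and body are placeholders):

import hashlib
import hmac
import json

def verify_signature(secret: str, raw_body: bytes, header_value: str) -> bool:
    # recompute the digest over the exact bytes that were sent
    expected = hmac.new(secret.encode("UTF-8"), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the match position through timing
    return hmac.compare_digest(f"sha256={expected}", header_value)

secret = "placeholder-secret"
body = json.dumps({"text": "There are new notifications"}, separators=(',', ':')).encode()
header = "sha256=" + hmac.new(secret.encode("UTF-8"), body, hashlib.sha256).hexdigest()
print(verify_signature(secret, body, header))  # True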

View File

@@ -1,7 +1,6 @@
#!/usr/bin/env python
import os
import time
import pathlib
import argparse
import sys
import re
@@ -9,16 +8,16 @@ import base64
import subprocess
# Register NetAlertX directories
INSTALL_PATH="/app"
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from database import DB
from plugin_helper import Plugin_Objects, handleEmpty
from logger import mylog, Logger, append_line_to_file
from helper import get_setting_value
from const import logPath, applicationPath
import conf
from pytz import timezone
from database import DB # noqa: E402 [flake8 lint suppression]
from plugin_helper import Plugin_Objects, handleEmpty # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value("TIMEZONE"))
@@ -139,10 +138,7 @@ def execute_arpscan(userSubnets):
mylog("verbose", [f"[{pluginName}] All devices List len:", len(devices_list)])
mylog("verbose", [f"[{pluginName}] Devices List:", devices_list])
mylog(
"verbose",
[f"[{pluginName}] Found: Devices without duplicates ", len(unique_devices)],
)
mylog("verbose", [f"[{pluginName}] Found: Devices without duplicates ", len(unique_devices)],)
return unique_devices
@@ -175,10 +171,7 @@ def execute_arpscan_on_interface(interface):
except subprocess.CalledProcessError:
result = ""
except subprocess.TimeoutExpired:
mylog(
"warning",
[f"[{pluginName}] arp-scan timed out after {timeout_seconds}s"],
)
mylog("warning", [f"[{pluginName}] arp-scan timed out after {timeout_seconds}s"],)
result = ""
# stop looping if duration not set or expired
if scan_duration == 0 or (time.time() - start_time) > scan_duration:
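The hunks above only tighten logging around the scan loop; the step they report on ('Devices without duplicates') is an order-preserving unique-by-MAC pass, roughly as below (the dict field name is an assumption):

def dedupe_by_mac(devices):
    # keep the first occurrence of each MAC, preserving scan order
    seen = set()
    unique = []
    for dev in devices:
        mac = dev.get('mac', '').lower()
        if mac and mac not in seen:
            seen.add(mac)
            unique.append(dev)
    return unique

devices = [{'mac': 'AA:BB:CC:00:11:22', 'ip': '192.168.1.10'},
           {'mac': 'aa:bb:cc:00:11:22', 'ip': '192.168.1.10'}]
print(len(dedupe_by_mac(devices)))  # 1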

View File

@@ -6,17 +6,16 @@ INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
pluginName = "ASUSWRT"
import asyncio
import aiohttp
import conf
from asusrouter import AsusData, AsusRouter
from asusrouter.modules.connection import ConnectionState
from const import logPath
from helper import get_setting_value
from logger import Logger, mylog
from plugin_helper import (Plugin_Objects, handleEmpty)
from pytz import timezone
import asyncio # noqa: E402 [flake8 lint suppression]
import aiohttp # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from asusrouter import AsusData, AsusRouter # noqa: E402 [flake8 lint suppression]
from asusrouter.modules.connection import ConnectionState # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from logger import Logger, mylog # noqa: E402 [flake8 lint suppression]
from plugin_helper import (Plugin_Objects, handleEmpty) # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
conf.tz = timezone(get_setting_value("TIMEZONE"))
@@ -34,10 +33,7 @@ def main():
device_data = get_device_data()
mylog(
"verbose",
[f"[{pluginName}] Found '{len(device_data)}' devices"],
)
mylog("verbose", f"[{pluginName}] Found '{len(device_data)}' devices")
filtered_devices = [
(key, device)
@@ -45,10 +41,7 @@ def main():
if device.state == ConnectionState.CONNECTED
]
mylog(
"verbose",
[f"[{pluginName}] Processing '{len(filtered_devices)}' connected devices"],
)
mylog("verbose", f"[{pluginName}] Processing '{len(filtered_devices)}' connected devices")
for mac, device in filtered_devices:
entry_mac = str(device.description.mac).lower()

View File

@@ -8,14 +8,14 @@ from zeroconf import Zeroconf
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
from const import logPath
from helper import get_setting_value
from database import DB
from models.device_instance import DeviceInstance
import conf
from pytz import timezone
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Configure timezone and logging
conf.tz = timezone(get_setting_value("TIMEZONE"))
@@ -67,7 +67,7 @@ def resolve_mdns_name(ip: str, timeout: int = 5) -> str:
hostname = socket.getnameinfo((ip, 0), socket.NI_NAMEREQD)[0]
zeroconf.close()
if hostname and hostname != ip:
mylog("debug", [f"[{pluginName}] Found mDNS name: {hostname}"])
mylog("debug", [f"[{pluginName}] Found mDNS name (rev_name): {hostname} ({rev_name})"])
return hostname
except Exception as e:
mylog("debug", [f"[{pluginName}] Zeroconf lookup failed for {ip}: {e}"])

View File

@@ -11,11 +11,11 @@ from datetime import datetime
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from logger import mylog, Logger
from helper import get_setting_value
from const import logPath, fullDbPath
import conf
from pytz import timezone
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath, fullDbPath # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -29,6 +29,7 @@ LOG_PATH = logPath + '/plugins'
LOG_FILE = os.path.join(LOG_PATH, f'script.{pluginName}.log')
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
def main():
# the script expects a parameter in the format of devices=device1,device2,...
@@ -72,7 +73,7 @@ def main():
csv_writer = csv.writer(csvfile, delimiter=',', quoting=csv.QUOTE_MINIMAL)
# Wrap the header values in double quotes and write the header row
csv_writer.writerow([ '"' + col + '"' for col in columns])
csv_writer.writerow(['"' + col + '"' for col in columns])
# Fetch and write data rows
for row in cursor.fetchall():
@@ -96,8 +97,8 @@ def main():
return 0
#===============================================================================
# ===============================================================================
# BEGIN
#===============================================================================
# ===============================================================================
if __name__ == '__main__':
main()
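The header row above is quoted by hand around a QUOTE_MINIMAL writer; the same effect for every field falls out of the writer's quoting mode (a stylistic alternative, not what the commit does):

import csv
import io

buf = io.StringIO()
# QUOTE_ALL quotes every field, headers included, with no manual wrapping
writer = csv.writer(buf, delimiter=',', quoting=csv.QUOTE_ALL)
writer.writerow(['devMac', 'devLastIP', 'devName'])
writer.writerow(['AA:BB:CC:00:11:22', '192.168.1.10', 'printer'])
print(buf.getvalue())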

View File

@@ -8,11 +8,11 @@ import sqlite3
INSTALL_PATH = os.getenv("NETALERTX_APP", "/app")
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from logger import mylog, Logger
from helper import get_setting_value
from const import logPath, fullDbPath
import conf
from pytz import timezone
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath, fullDbPath # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value("TIMEZONE"))
@@ -75,10 +75,7 @@ def cleanup_database(
# -----------------------------------------------------
# Cleanup Online History
mylog(
"verbose",
[f"[{pluginName}] Online_History: Delete all but keep latest 150 entries"],
)
mylog("verbose", [f"[{pluginName}] Online_History: Delete all but keep latest 150 entries"],)
cursor.execute(
"""DELETE from Online_History where "Index" not in (
SELECT "Index" from Online_History
@@ -87,24 +84,14 @@ def cleanup_database(
# -----------------------------------------------------
# Cleanup Events
mylog(
"verbose",
[
f"[{pluginName}] Events: Delete all older than {str(DAYS_TO_KEEP_EVENTS)} days (DAYS_TO_KEEP_EVENTS setting)"
],
)
mylog("verbose", f"[{pluginName}] Events: Delete all older than {str(DAYS_TO_KEEP_EVENTS)} days (DAYS_TO_KEEP_EVENTS setting)")
cursor.execute(
f"""DELETE FROM Events
WHERE eve_DateTime <= date('now', '-{str(DAYS_TO_KEEP_EVENTS)} day')"""
)
# -----------------------------------------------------
# Trim Plugins_History entries to less than PLUGINS_KEEP_HIST setting per unique "Plugin" column entry
mylog(
"verbose",
[
f"[{pluginName}] Plugins_History: Trim Plugins_History entries to less than {str(PLUGINS_KEEP_HIST)} per Plugin (PLUGINS_KEEP_HIST setting)"
],
)
mylog("verbose", f"[{pluginName}] Plugins_History: Trim Plugins_History entries to less than {str(PLUGINS_KEEP_HIST)} per Plugin (PLUGINS_KEEP_HIST setting)")
# Build the SQL query to delete entries that exceed the limit per unique "Plugin" column entry
delete_query = f"""DELETE FROM Plugins_History
@@ -125,12 +112,7 @@ def cleanup_database(
histCount = get_setting_value("DBCLNP_NOTIFI_HIST")
mylog(
"verbose",
[
f"[{pluginName}] Plugins_History: Trim Notifications entries to less than {histCount}"
],
)
mylog("verbose", f"[{pluginName}] Plugins_History: Trim Notifications entries to less than {histCount}")
# Build the SQL query to delete entries
delete_query = f"""DELETE FROM Notifications
@@ -170,12 +152,7 @@ def cleanup_database(
# -----------------------------------------------------
# Cleanup New Devices
if HRS_TO_KEEP_NEWDEV != 0:
mylog(
"verbose",
[
f"[{pluginName}] Devices: Delete all New Devices older than {str(HRS_TO_KEEP_NEWDEV)} hours (HRS_TO_KEEP_NEWDEV setting)"
],
)
mylog("verbose", f"[{pluginName}] Devices: Delete all New Devices older than {str(HRS_TO_KEEP_NEWDEV)} hours (HRS_TO_KEEP_NEWDEV setting)")
query = f"""DELETE FROM Devices WHERE devIsNew = 1 AND devFirstConnection < date('now', '-{str(HRS_TO_KEEP_NEWDEV)} hour')"""
mylog("verbose", [f"[{pluginName}] Query: {query} "])
cursor.execute(query)
@@ -183,12 +160,7 @@ def cleanup_database(
# -----------------------------------------------------
# Cleanup Offline Devices
if HRS_TO_KEEP_OFFDEV != 0:
mylog(
"verbose",
[
f"[{pluginName}] Devices: Delete all New Devices older than {str(HRS_TO_KEEP_OFFDEV)} hours (HRS_TO_KEEP_OFFDEV setting)"
],
)
mylog("verbose", f"[{pluginName}] Devices: Delete all New Devices older than {str(HRS_TO_KEEP_OFFDEV)} hours (HRS_TO_KEEP_OFFDEV setting)")
query = f"""DELETE FROM Devices WHERE devPresentLastScan = 0 AND devLastConnection < date('now', '-{str(HRS_TO_KEEP_OFFDEV)} hour')"""
mylog("verbose", [f"[{pluginName}] Query: {query} "])
cursor.execute(query)
@@ -196,12 +168,7 @@ def cleanup_database(
# -----------------------------------------------------
# Clear New Flag
if CLEAR_NEW_FLAG != 0:
mylog(
"verbose",
[
f'[{pluginName}] Devices: Clear "New Device" flag for all devices older than {str(CLEAR_NEW_FLAG)} hours (CLEAR_NEW_FLAG setting)'
],
)
mylog("verbose", f'[{pluginName}] Devices: Clear "New Device" flag for all devices older than {str(CLEAR_NEW_FLAG)} hours (CLEAR_NEW_FLAG setting)')
query = f"""UPDATE Devices SET devIsNew = 0 WHERE devIsNew = 1 AND date(devFirstConnection, '+{str(CLEAR_NEW_FLAG)} hour') < date('now')"""
# select * from Devices where devIsNew = 1 AND date(devFirstConnection, '+3 hour' ) < date('now')
mylog("verbose", [f"[{pluginName}] Query: {query} "])

View File

@@ -9,11 +9,11 @@ import subprocess
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from logger import mylog, Logger
from helper import get_setting_value, check_IP_format
from const import logPath
import conf
from pytz import timezone
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value, check_IP_format # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -28,7 +28,6 @@ LOG_FILE = os.path.join(LOG_PATH, f'script.{pluginName}.log')
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
def main():
mylog('verbose', [f'[{pluginName}] In script'])
@@ -41,7 +40,6 @@ def main():
parser.add_argument('DDNS_PASSWORD', action="store", help="Password for Dynamic DNS (DDNS) authentication")
parser.add_argument('DDNS_DOMAIN', action="store", help="Dynamic DNS (DDNS) domain name")
values = parser.parse_args()
PREV_IP = values.prev_ip.split('=')[1]
@@ -51,17 +49,17 @@ def main():
DDNS_DOMAIN = values.DDNS_DOMAIN.split('=')[1]
# perform the new IP lookup and DDNS tasks if enabled
ddns_update( DDNS_UPDATE_URL, DDNS_USER, DDNS_PASSWORD, DDNS_DOMAIN, PREV_IP)
ddns_update(DDNS_UPDATE_URL, DDNS_USER, DDNS_PASSWORD, DDNS_DOMAIN, PREV_IP)
mylog('verbose', [f'[{pluginName}] Finished '])
return 0
#===============================================================================
# ===============================================================================
# INTERNET IP CHANGE
#===============================================================================
def ddns_update ( DDNS_UPDATE_URL, DDNS_USER, DDNS_PASSWORD, DDNS_DOMAIN, PREV_IP ):
# ===============================================================================
def ddns_update(DDNS_UPDATE_URL, DDNS_USER, DDNS_PASSWORD, DDNS_DOMAIN, PREV_IP):
# Update DDNS record if enabled and IP is different
# Get Dynamic DNS IP
@@ -78,11 +76,10 @@ def ddns_update ( DDNS_UPDATE_URL, DDNS_USER, DDNS_PASSWORD, DDNS_DOMAIN, PREV_I
# Check DNS Change
if dns_IP != PREV_IP:
mylog('none', [f'[{pluginName}] Updating Dynamic DNS IP'])
message = set_dynamic_DNS_IP (DDNS_UPDATE_URL, DDNS_USER, DDNS_PASSWORD, DDNS_DOMAIN)
message = set_dynamic_DNS_IP(DDNS_UPDATE_URL, DDNS_USER, DDNS_PASSWORD, DDNS_DOMAIN)
mylog('none', [f'[{pluginName}] ', message])
# plugin_objects = Plugin_Objects(RESULT_FILE)
# plugin_objects.add_object(
# primaryId = 'Internet', # MAC (Device Name)
# secondaryId = new_internet_IP, # IP Address
@@ -96,23 +93,23 @@ def ddns_update ( DDNS_UPDATE_URL, DDNS_USER, DDNS_PASSWORD, DDNS_DOMAIN, PREV_I
# plugin_objects.write_result_file()
#-------------------------------------------------------------------------------
def get_dynamic_DNS_IP (DDNS_DOMAIN):
# -------------------------------------------------------------------------------
def get_dynamic_DNS_IP(DDNS_DOMAIN):
# Using supplied DNS server
dig_args = ['dig', '+short', DDNS_DOMAIN]
try:
# try running a subprocess
dig_output = subprocess.check_output (dig_args, universal_newlines=True)
dig_output = subprocess.check_output(dig_args, universal_newlines=True)
mylog('none', [f'[{pluginName}] DIG output :', dig_output])
except subprocess.CalledProcessError as e:
# An error occurred, handle it
mylog('none', [f'[{pluginName}] ⚠ ERROR - ', e.output])
dig_output = '' # probably no internet
dig_output = '' # probably no internet
# Check result is an IP
IP = check_IP_format (dig_output)
IP = check_IP_format(dig_output)
# Handle invalid response
if IP == '':
@@ -120,28 +117,27 @@ def get_dynamic_DNS_IP (DDNS_DOMAIN):
return IP
#-------------------------------------------------------------------------------
def set_dynamic_DNS_IP (DDNS_UPDATE_URL, DDNS_USER, DDNS_PASSWORD, DDNS_DOMAIN):
# -------------------------------------------------------------------------------
def set_dynamic_DNS_IP(DDNS_UPDATE_URL, DDNS_USER, DDNS_PASSWORD, DDNS_DOMAIN):
try:
# try running a subprocess
# Update Dynamic IP
curl_output = subprocess.check_output (['curl',
'-s',
DDNS_UPDATE_URL +
'username=' + DDNS_USER +
'&password=' + DDNS_PASSWORD +
'&hostname=' + DDNS_DOMAIN],
universal_newlines=True)
curl_output = subprocess.check_output([
'curl',
'-s',
DDNS_UPDATE_URL + 'username=' + DDNS_USER + '&password=' + DDNS_PASSWORD + '&hostname=' + DDNS_DOMAIN],
universal_newlines=True)
except subprocess.CalledProcessError as e:
# An error occurred, handle it
mylog('none', [f'[{pluginName}] ⚠ ERROR - ',e.output])
mylog('none', [f'[{pluginName}] ⚠ ERROR - ', e.output])
curl_output = ""
return curl_output
#===============================================================================
# ===============================================================================
# BEGIN
#===============================================================================
# ===============================================================================
if __name__ == '__main__':
main()
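The rewrapped curl call above still concatenates the query string by hand; expressed with requests, the same update becomes explicit parameters that get URL-encoded automatically (the three parameter names come from the code above, everything else is a placeholder):

import requests

def ddns_update_request(update_url, user, password, domain, timeout=30):
    # requests URL-encodes the credentials and hostname for us
    params = {'username': user, 'password': password, 'hostname': domain}
    resp = requests.get(update_url, params=params, timeout=timeout)
    return resp.text

# ddns_update_request('https://ddns.example.com/update', 'user', 'pass', 'home.example.com')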

View File

@@ -10,13 +10,13 @@ import chardet
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects, handleEmpty, is_mac
from logger import mylog, Logger
from dhcp_leases import DhcpLeases
from helper import get_setting_value
import conf
from const import logPath
from pytz import timezone
from plugin_helper import Plugin_Objects, handleEmpty, is_mac # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from dhcp_leases import DhcpLeases # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -24,15 +24,13 @@ conf.tz = timezone(get_setting_value('TIMEZONE'))
# Make sure log level is initialized correctly
Logger(get_setting_value('LOG_LEVEL'))
pluginName= 'DHCPLSS'
pluginName = 'DHCPLSS'
LOG_PATH = logPath + '/plugins'
LOG_FILE = os.path.join(LOG_PATH, f'script.{pluginName}.log')
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
# -------------------------------------------------------------
def main():
mylog('verbose', [f'[{pluginName}] In script'])
@@ -40,7 +38,12 @@ def main():
last_run_logfile.write("")
parser = argparse.ArgumentParser(description='Import devices from dhcp.leases files')
parser.add_argument('paths', action="store", help="absolute dhcp.leases file paths to check separated by ','")
parser.add_argument(
'paths',
action="store",
help="absolute dhcp.leases file paths to check separated by ','"
)
values = parser.parse_args()
plugin_objects = Plugin_Objects(RESULT_FILE)
@@ -52,6 +55,7 @@ def main():
plugin_objects.write_result_file()
# -------------------------------------------------------------
def get_entries(path, plugin_objects):
@@ -122,5 +126,6 @@ def get_entries(path, plugin_objects):
)
return plugin_objects
if __name__ == '__main__':
main()
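The argparse reshuffle above feeds comma-separated paths into get_entries, which walks leases via the dhcp_leases package; reading one file standalone looks roughly like this (a sketch assuming the DhcpLeases(path).get() API; the path is a placeholder):

from dhcp_leases import DhcpLeases

# each lease carries the client MAC, IP, and (optionally) a hostname
for lease in DhcpLeases('/var/lib/dhcp/dhcpd.leases').get():
    print(lease.ethernet, lease.ip, lease.hostname)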

View File

@@ -3,7 +3,6 @@
import subprocess
import os
from datetime import datetime
import sys
@@ -11,12 +10,12 @@ import sys
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects, Plugin_Object
from logger import mylog, Logger
from helper import get_setting_value
import conf
from pytz import timezone
from const import logPath
from plugin_helper import Plugin_Objects, Plugin_Object # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
@@ -31,6 +30,7 @@ LOG_PATH = logPath + '/plugins'
LOG_FILE = os.path.join(LOG_PATH, f'script.{pluginName}.log')
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
def main():
mylog('verbose', ['[DHCPSRVS] In script'])
@@ -101,5 +101,6 @@ def main():
except Exception as e:
mylog('verbose', ['[DHCPSRVS] Error in main:', str(e)])
if __name__ == '__main__':
main()

View File

@@ -1,5 +1,4 @@
#!/usr/bin/env python
import os
import sys
import subprocess
@@ -8,14 +7,14 @@ import subprocess
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
from const import logPath
from helper import get_setting_value
from database import DB
from models.device_instance import DeviceInstance
import conf
from pytz import timezone
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -65,27 +64,27 @@ def main():
if domain_name != '':
plugin_objects.add_object(
# "MAC", "IP", "Server", "Name"
primaryId = device['devMac'],
secondaryId = device['devLastIP'],
watched1 = dns_server,
watched2 = domain_name,
watched3 = '',
watched4 = '',
extra = '',
foreignKey = device['devMac'])
primaryId = device['devMac'],
secondaryId = device['devLastIP'],
watched1 = dns_server,
watched2 = domain_name,
watched3 = '',
watched4 = '',
extra = '',
foreignKey = device['devMac']
)
plugin_objects.write_result_file()
mylog('verbose', [f'[{pluginName}] Script finished'])
return 0
#===============================================================================
# ===============================================================================
# Execute scan
#===============================================================================
def execute_name_lookup (ip, timeout):
# ===============================================================================
def execute_name_lookup(ip, timeout):
"""
Execute the DIG command on IP.
"""
@@ -99,7 +98,13 @@ def execute_name_lookup (ip, timeout):
mylog('verbose', [f'[{pluginName}] DEBUG CMD :', args])
# try running a subprocess with a forced (timeout) in case the subprocess hangs
output = subprocess.check_output (args, universal_newlines=True, stderr=subprocess.STDOUT, timeout=(timeout), text=True).strip()
output = subprocess.check_output(
args,
universal_newlines=True,
stderr=subprocess.STDOUT,
timeout=(timeout),
text=True
).strip()
mylog('verbose', [f'[{pluginName}] DEBUG OUTPUT : {output}'])
@@ -116,13 +121,13 @@ def execute_name_lookup (ip, timeout):
except subprocess.TimeoutExpired:
mylog('verbose', [f'[{pluginName}] TIMEOUT - the process forcefully terminated as timeout reached'])
if output == "": # check if the subprocess failed
if output == "": # check if the subprocess failed
mylog('verbose', [f'[{pluginName}] Scan: FAIL - check logs'])
else:
mylog('verbose', [f'[{pluginName}] Scan: SUCCESS'])
return '', ''
if __name__ == '__main__':
main()
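execute_name_lookup above shells out to dig under a hard timeout (note that universal_newlines and text are aliases, so passing both, as the call does, is redundant but harmless). A standalone reverse lookup in the same spirit, assuming dig -x for PTR resolution; the IP is a placeholder:

import subprocess

def reverse_dig(ip: str, timeout: int = 5) -> str:
    args = ['dig', '+short', '-x', ip]
    try:
        # the timeout kills dig if the resolver hangs
        return subprocess.check_output(args, text=True,
                                       stderr=subprocess.STDOUT,
                                       timeout=timeout).strip()
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return ''

print(reverse_dig('192.168.1.1'))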

View File

@@ -17,11 +17,12 @@ from aiofreepybox.exceptions import NotOpenError, AuthorizationError
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
from const import logPath
from helper import get_setting_value
import conf
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from utils.datetime_utils import timeNowDB, DATETIME_PATTERN # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value("TIMEZONE"))
@@ -79,6 +80,7 @@ def map_device_type(type: str):
mylog("minimal", [f"[{pluginName}] Unknown device type: {type}"])
return device_type_map["other"]
async def get_device_data(api_version: int, api_address: str, api_port: int):
# ensure existence of db path
config_base = Path(os.getenv("NETALERTX_CONFIG", "/data/config"))
@@ -149,7 +151,7 @@ def main():
watched1=freebox["name"],
watched2=freebox["operator"],
watched3="Gateway",
watched4=datetime.now,
watched4=timeNowDB(),
extra="",
foreignKey=freebox["mac"],
)
@@ -165,7 +167,7 @@ def main():
watched1=host.get("primary_name", "(unknown)"),
watched2=host.get("vendor_name", "(unknown)"),
watched3=map_device_type(host.get("host_type", "")),
watched4=datetime.fromtimestamp(ip.get("last_time_reachable", 0)),
watched4=datetime.fromtimestamp(ip.get("last_time_reachable", 0)).strftime(DATETIME_PATTERN),
extra="",
foreignKey=mac,
)

View File

@@ -11,14 +11,14 @@ import re
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
from helper import get_setting_value
from const import logPath
from database import DB
from models.device_instance import DeviceInstance
import conf
from pytz import timezone
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -33,12 +33,10 @@ LOG_FILE = os.path.join(LOG_PATH, f'script.{pluginName}.log')
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
def main():
mylog('verbose', [f'[{pluginName}] In script'])
timeout = get_setting_value('ICMP_RUN_TIMEOUT')
args = get_setting_value('ICMP_ARGS')
in_regex = get_setting_value('ICMP_IN_REGEX')
@@ -65,7 +63,6 @@ def main():
if regex_pattern.match(device['devLastIP'])
]
mylog('verbose', [f'[{pluginName}] Devices to PING: {len(filtered_devices)}'])
for device in filtered_devices:
@@ -73,30 +70,30 @@ def main():
mylog('verbose', [f"[{pluginName}] ip: {device['devLastIP']} is_online: {is_online}"])
if is_online:
plugin_objects.add_object(
# "MAC", "IP", "Name", "Output"
primaryId = device['devMac'],
secondaryId = device['devLastIP'],
watched1 = device['devName'],
watched2 = output.replace('\n',''),
watched3 = '',
watched4 = '',
extra = '',
foreignKey = device['devMac'])
# "MAC", "IP", "Name", "Output"
primaryId = device['devMac'],
secondaryId = device['devLastIP'],
watched1 = device['devName'],
watched2 = output.replace('\n', ''),
watched3 = '',
watched4 = '',
extra = '',
foreignKey = device['devMac']
)
plugin_objects.write_result_file()
mylog('verbose', [f'[{pluginName}] Script finished'])
return 0
#===============================================================================
# ===============================================================================
# Execute scan
#===============================================================================
def execute_scan (ip, timeout, args):
# ===============================================================================
def execute_scan(ip, timeout, args):
"""
Execute the ICMP command on IP.
"""
@@ -108,12 +105,18 @@ def execute_scan (ip, timeout, args):
try:
# try running a subprocess with a forced (timeout) in case the subprocess hangs
output = subprocess.check_output (icmp_args, universal_newlines=True, stderr=subprocess.STDOUT, timeout=(timeout), text=True)
output = subprocess.check_output(
icmp_args,
universal_newlines=True,
stderr=subprocess.STDOUT,
timeout=(timeout),
text=True
)
mylog('verbose', [f'[{pluginName}] DEBUG OUTPUT : {output}'])
# Parse output using case-insensitive regular expressions
#Synology-NAS:/# ping -i 0.5 -c 3 -W 8 -w 9 192.168.1.82
# Synology-NAS:/# ping -i 0.5 -c 3 -W 8 -w 9 192.168.1.82
# PING 192.168.1.82 (192.168.1.82): 56 data bytes
# 64 bytes from 192.168.1.82: seq=0 ttl=64 time=0.080 ms
# 64 bytes from 192.168.1.82: seq=1 ttl=64 time=0.081 ms
@@ -157,10 +160,8 @@ def execute_scan (ip, timeout, args):
return False, output
#===============================================================================
# ===============================================================================
# BEGIN
#===============================================================================
# ===============================================================================
if __name__ == '__main__':
main()
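execute_scan above drives ping and inspects a transcript like the Synology sample in the comments; a compact standalone version of that pattern (the ping flags and the success heuristic are assumptions):

import re
import subprocess

def ping_once(ip: str, timeout: int = 10):
    args = ['ping', '-c', '3', '-W', '2', ip]
    try:
        output = subprocess.check_output(args, text=True,
                                         stderr=subprocess.STDOUT,
                                         timeout=timeout)
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired) as e:
        output = getattr(e, 'output', '') or ''
    # treat any echoed reply line as "online"
    is_online = bool(re.search(r'bytes from', output, re.IGNORECASE))
    return is_online, output

online, _ = ping_once('192.168.1.82')
print(online)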

View File

@@ -11,13 +11,13 @@ import re
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects
from utils.datetime_utils import timeNowDB
from logger import mylog, Logger, append_line_to_file
from helper import check_IP_format, get_setting_value
from const import logPath
import conf
from pytz import timezone
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from utils.datetime_utils import timeNowDB # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger, append_line_to_file # noqa: E402 [flake8 lint suppression]
from helper import check_IP_format, get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -31,9 +31,9 @@ LOG_PATH = logPath + '/plugins'
LOG_FILE = os.path.join(LOG_PATH, f'script.{pluginName}.log')
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
no_internet_ip = '0.0.0.0'
def main():
mylog('verbose', [f'[{pluginName}] In script'])
@@ -41,7 +41,7 @@ def main():
parser = argparse.ArgumentParser(description='Check internet connectivity and IP')
parser.add_argument('prev_ip', action="store", help="Previous IP address to compare against the current IP")
parser.add_argument('DIG_GET_IP_ARG', action="store", help="Arguments for the 'dig' command to retrieve the IP address") # unused
parser.add_argument('DIG_GET_IP_ARG', action="store", help="Arguments for the 'dig' command to retrieve the IP address") # unused
values = parser.parse_args()
@@ -60,10 +60,10 @@ def main():
for i in range(INTRNT_RETRIES + 1):
new_internet_IP, cmd_output = check_internet_IP( PREV_IP, DIG_GET_IP_ARG)
new_internet_IP, cmd_output = check_internet_IP(PREV_IP, DIG_GET_IP_ARG)
if new_internet_IP == no_internet_ip:
time.sleep(1*i) # Exponential backoff strategy
time.sleep(1 * i)  # Linearly growing delay between retries
else:
retries_needed = i
break
@@ -74,7 +74,7 @@ def main():
mylog('verbose', [f'[{pluginName}] Curl Fallback (new_internet_IP|cmd_output): {new_internet_IP} | {cmd_output}'])
# logging
append_line_to_file (logPath + '/IP_changes.log', '['+str(timeNowDB()) +']\t'+ new_internet_IP +'\n')
append_line_to_file(logPath + '/IP_changes.log', '[' + str(timeNowDB()) + ']\t' + new_internet_IP + '\n')
plugin_objects = Plugin_Objects(RESULT_FILE)
@@ -82,11 +82,12 @@ def main():
primaryId = 'Internet', # MAC (Device Name)
secondaryId = new_internet_IP, # IP Address
watched1 = f'Previous IP: {PREV_IP}',
watched2 = cmd_output.replace('\n',''),
watched2 = cmd_output.replace('\n', ''),
watched3 = retries_needed,
watched4 = 'Gateway',
extra = f'Previous IP: {PREV_IP}',
foreignKey = 'Internet')
foreignKey = 'Internet'
)
plugin_objects.write_result_file()
@@ -95,10 +96,10 @@ def main():
return 0
#===============================================================================
# ===============================================================================
# INTERNET IP CHANGE
#===============================================================================
def check_internet_IP ( PREV_IP, DIG_GET_IP_ARG ):
# ===============================================================================
def check_internet_IP(PREV_IP, DIG_GET_IP_ARG):
# Get Internet IP
mylog('verbose', [f'[{pluginName}] - Retrieving Internet IP'])
@@ -109,7 +110,7 @@ def check_internet_IP ( PREV_IP, DIG_GET_IP_ARG ):
# Check previously stored IP
previous_IP = no_internet_ip
if PREV_IP is not None and len(PREV_IP) > 0 :
if PREV_IP is not None and len(PREV_IP) > 0:
previous_IP = PREV_IP
mylog('verbose', [f'[{pluginName}] previous_IP : {previous_IP}'])
@@ -117,22 +118,22 @@ def check_internet_IP ( PREV_IP, DIG_GET_IP_ARG ):
return internet_IP, cmd_output
#-------------------------------------------------------------------------------
def get_internet_IP (DIG_GET_IP_ARG):
# -------------------------------------------------------------------------------
def get_internet_IP(DIG_GET_IP_ARG):
cmd_output = ''
# Using 'dig'
dig_args = ['dig', '+short'] + DIG_GET_IP_ARG.strip().split()
try:
cmd_output = subprocess.check_output (dig_args, universal_newlines=True)
cmd_output = subprocess.check_output(dig_args, universal_newlines=True)
mylog('verbose', [f'[{pluginName}] DIG result : {cmd_output}'])
except subprocess.CalledProcessError as e:
mylog('verbose', [e.output])
cmd_output = '' # no internet
cmd_output = '' # no internet
# Check result is an IP
IP = check_IP_format (cmd_output)
IP = check_IP_format(cmd_output)
# Handle invalid response
if IP == '':
@@ -140,7 +141,8 @@ def get_internet_IP (DIG_GET_IP_ARG):
return IP, cmd_output
#-------------------------------------------------------------------------------
# -------------------------------------------------------------------------------
def fallback_check_ip():
"""Fallback mechanism using `curl ifconfig.me/ip`."""
try:
@@ -155,8 +157,9 @@ def fallback_check_ip():
mylog('none', [f'[{pluginName}] Fallback curl exception: {e}'])
return no_internet_ip, f'Fallback via curl exception: "{e}"'
#===============================================================================
# ===============================================================================
# BEGIN
#===============================================================================
# ===============================================================================
if __name__ == '__main__':
main()

View File

@@ -1,6 +1,5 @@
#!/usr/bin/env python
import argparse
import os
import sys
import speedtest
@@ -9,13 +8,13 @@ import speedtest
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects
from utils.datetime_utils import timeNowDB
from logger import mylog, Logger
from helper import get_setting_value
import conf
from pytz import timezone
from const import logPath
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from utils.datetime_utils import timeNowDB # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -28,13 +27,11 @@ pluginName = 'INTRSPD'
LOG_PATH = logPath + '/plugins'
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
def main():
mylog('verbose', ['[INTRSPD] In script'])
parser = argparse.ArgumentParser(description='Speedtest Plugin for NetAlertX')
values = parser.parse_args()
plugin_objects = Plugin_Objects(RESULT_FILE)
speedtest_result = run_speedtest()
plugin_objects.add_object(
@@ -49,6 +46,7 @@ def main():
)
plugin_objects.write_result_file()
def run_speedtest():
try:
st = speedtest.Speedtest(secure=True)
@@ -69,5 +67,6 @@ def run_speedtest():
'upload_speed': -1,
}
if __name__ == '__main__':
sys.exit(main())

View File

@@ -11,11 +11,11 @@ from functools import reduce
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
from const import logPath
from helper import get_setting_value
import conf
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -34,7 +34,6 @@ RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
plugin_objects = Plugin_Objects(RESULT_FILE)
def main():
mylog('verbose', [f'[{pluginName}] In script'])
@@ -59,22 +58,22 @@ def main():
if len(neighbors) > 0:
for device in neighbors:
plugin_objects.add_object(
primaryId = device['mac'],
secondaryId = device['ip'],
watched4 = device['last_seen'],
plugin_objects.add_object(
primaryId = device['mac'],
secondaryId = device['ip'],
watched4 = device['last_seen'],
# The following are always unknown
watched1 = device['hostname'], # don't use these --> handleEmpty(device['hostname']),
watched2 = device['vendor'], # handleEmpty(device['vendor']),
watched3 = device['device_type'], # handleEmpty(device['device_type']),
extra = '',
foreignKey = "" #device['mac']
# helpVal1 = "Something1", # Optional Helper values to be passed for mapping into the app
# helpVal2 = "Something1", # If you need to use even only 1, add the remaining ones too
# helpVal3 = "Something1", # and set them to 'null'. Check the the docs for details:
# helpVal4 = "Something1", # https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS_DEV.md
)
# The following are always unknown
watched1 = device['hostname'], # don't use these --> handleEmpty(device['hostname']),
watched2 = device['vendor'], # don't use these --> handleEmpty(device['vendor']),
watched3 = device['device_type'], # don't use these --> handleEmpty(device['device_type']),
extra = '',
foreignKey = "" # device['mac']
# helpVal1 = "Something1", # Optional Helper values to be passed for mapping into the app
# helpVal2 = "Something1", # If you need to use even only 1, add the remaining ones too
# helpVal3 = "Something1", # and set them to 'null'. Check the the docs for details:
# helpVal4 = "Something1", # https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS_DEV.md
)
mylog('verbose', [f'[{pluginName}] New entries: "{len(neighbors)}"'])
@@ -83,6 +82,7 @@ def main():
return 0
def parse_neighbors(raw_neighbors: list[str]):
neighbors = []
for line in raw_neighbors:
@@ -111,6 +111,7 @@ def is_multicast(ip):
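# Prefix match: IPv6 multicast (ff00::/8) plus the listed IPv4 224.0.0.0/4 ranges; a simple string check, not a full CIDR parse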
prefixes = ['ff', '224', '231', '232', '233', '234', '238', '239']
return reduce(lambda acc, prefix: acc or ip.startswith(prefix), prefixes, False)
# retrieve data
def get_neighbors(interfaces):
@@ -136,11 +137,11 @@ def get_neighbors(interfaces):
mylog('verbose', [f'[{pluginName}] Scanning interface succeeded: "{interface}"'])
except subprocess.CalledProcessError as e:
# An error occurred, handle it
mylog('verbose', [f'[{pluginName}] Scanning interface failed: "{interface}"'])
error_type = type(e).__name__ # Capture the error type
mylog('verbose', [f'[{pluginName}] Scanning interface failed: "{interface}" ({error_type})'])
return results
if __name__ == '__main__':
main()

View File

@@ -7,18 +7,18 @@ INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
pluginName = 'LUCIRPC'
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
from helper import get_setting_value
from const import logPath
import conf
from pytz import timezone
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
try:
from openwrt_luci_rpc import OpenWrtRpc
except:
mylog('error', [f'[{pluginName}] Failed import openwrt_luci_rpc'])
exit()
except ImportError as e:
mylog('error', [f'[{pluginName}] Failed import openwrt_luci_rpc: {e}'])
exit(1)
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -30,6 +30,7 @@ RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
plugin_objects = Plugin_Objects(RESULT_FILE)
def main():
mylog('verbose', [f'[{pluginName}] start script.'])
@@ -59,6 +60,7 @@ def main():
return 0
def get_device_data():
router = OpenWrtRpc(
get_setting_value("LUCIRPC_host"),
@@ -66,7 +68,7 @@ def get_device_data():
get_setting_value("LUCIRPC_password"),
get_setting_value("LUCIRPC_ssl"),
get_setting_value("LUCIRPC_verify_ssl")
)
)
if router.is_logged_in():
mylog('verbose', [f'[{pluginName}] logged in successfully.'])
@@ -76,5 +78,6 @@ def get_device_data():
device_data = router.get_all_connected_devices(only_reachable=get_setting_value("LUCIRPC_only_reachable"))
return device_data
if __name__ == '__main__':
main()

View File

@@ -8,12 +8,12 @@ from collections import deque
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from logger import mylog, Logger
from helper import get_setting_value
from const import logPath
from messaging.in_app import remove_old
import conf
from pytz import timezone
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from messaging.in_app import remove_old # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -28,7 +28,6 @@ LOG_FILE = os.path.join(LOG_PATH, f'script.{pluginName}.log')
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
def main():
mylog('verbose', [f'[{pluginName}] In script'])
@@ -65,8 +64,8 @@ def main():
return 0
#===============================================================================
# ===============================================================================
# BEGIN
#===============================================================================
# ===============================================================================
if __name__ == '__main__':
main()

View File

@@ -7,14 +7,14 @@ import sys
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
from helper import get_setting_value
from const import logPath
import conf
from pytz import timezone
from librouteros import connect
from librouteros.exceptions import TrapError
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
from librouteros import connect # noqa: E402 [flake8 lint suppression]
from librouteros.exceptions import TrapError # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -29,7 +29,6 @@ LOG_FILE = os.path.join(LOG_PATH, f'script.{pluginName}.log')
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
def main():
mylog('verbose', [f'[{pluginName}] In script'])
@@ -72,7 +71,7 @@ def get_entries(plugin_objects: Plugin_Objects) -> Plugin_Objects:
status = lease.get('status')
device_name = comment or host_name or "(unknown)"
mylog('verbose', [f"ID: {lease_id}, Address: {address}, MAC Address: {mac_address}, Host Name: {host_name}, Comment: {comment}, Last Seen: {last_seen}, Status: {status}"])
mylog('verbose', f"ID: {lease_id}, Address: {address}, MAC: {mac_address}, Host Name: {host_name}, Comment: {comment}, Last Seen: {last_seen}, Status: {status}")
if (status == "bound"):
plugin_objects.add_object(
@@ -96,8 +95,8 @@ def get_entries(plugin_objects: Plugin_Objects) -> Plugin_Objects:
return plugin_objects
#===============================================================================
# ===============================================================================
# BEGIN
#===============================================================================
# ===============================================================================
if __name__ == '__main__':
main()

View File

@@ -8,14 +8,14 @@ import subprocess
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
from const import logPath
from helper import get_setting_value
from database import DB
from models.device_instance import DeviceInstance
import conf
from pytz import timezone
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -34,7 +34,6 @@ RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
plugin_objects = Plugin_Objects(RESULT_FILE)
def main():
mylog('verbose', [f'[{pluginName}] In script'])
@@ -67,27 +66,28 @@ def main():
if domain_name != '':
plugin_objects.add_object(
# "MAC", "IP", "Server", "Name"
primaryId = device['devMac'],
secondaryId = device['devLastIP'],
watched1 = dns_server,
watched2 = domain_name,
watched3 = '',
watched4 = '',
extra = '',
foreignKey = device['devMac'])
# "MAC", "IP", "Server", "Name"
primaryId = device['devMac'],
secondaryId = device['devLastIP'],
watched1 = dns_server,
watched2 = domain_name,
watched3 = '',
watched4 = '',
extra = '',
foreignKey = device['devMac']
)
plugin_objects.write_result_file()
mylog('verbose', [f'[{pluginName}] Script finished'])
return 0
#===============================================================================
# ===============================================================================
# Execute scan
#===============================================================================
def execute_name_lookup (ip, timeout):
# ===============================================================================
def execute_name_lookup(ip, timeout):
"""
Execute the NBTSCAN command on IP.
"""
@@ -101,7 +101,13 @@ def execute_name_lookup (ip, timeout):
mylog('verbose', [f'[{pluginName}] DEBUG CMD :', args])
# try running a subprocess with a forced (timeout) in case the subprocess hangs
output = subprocess.check_output (args, universal_newlines=True, stderr=subprocess.STDOUT, timeout=(timeout), text=True)
output = subprocess.check_output(
args,
universal_newlines=True,
stderr=subprocess.STDOUT,
timeout=(timeout),
text=True
)
mylog('verbose', [f'[{pluginName}] DEBUG OUTPUT : {output}'])
@@ -112,7 +118,6 @@ def execute_name_lookup (ip, timeout):
lines = output.splitlines()
# Look for the first line containing a valid NetBIOS name entry
index = 0
for line in lines:
if 'Doing NBT name scan' not in line and ip in line:
# Split the line and extract the primary NetBIOS name
@@ -122,7 +127,6 @@ def execute_name_lookup (ip, timeout):
else:
mylog('verbose', [f'[{pluginName}] ⚠ ERROR - Unexpected output format: {line}'])
mylog('verbose', [f'[{pluginName}] Domain Name: {domain_name}'])
return domain_name, dns_server
@@ -137,13 +141,16 @@ def execute_name_lookup (ip, timeout):
except subprocess.TimeoutExpired:
mylog('verbose', [f'[{pluginName}] TIMEOUT - the process forcefully terminated as timeout reached'])
if output == "": # check if the subprocess failed
if output == "": # check if the subprocess failed
mylog('verbose', [f'[{pluginName}] Scan: FAIL - check logs'])
else:
mylog('verbose', [f'[{pluginName}] Scan: SUCCESS'])
return '', ''
# ===============================================================================
# BEGIN
# ===============================================================================
if __name__ == '__main__':
main()

View File

@@ -419,6 +419,41 @@
}
]
},
{
"function": "IP_MATCH_NAME",
"type": {
"dataType": "boolean",
"elements": [
{
"elementType": "input",
"elementOptions": [
{
"type": "checkbox"
}
],
"transformers": []
}
]
},
"default_value": true,
"options": [],
"localized": [
"name",
"description"
],
"name": [
{
"language_code": "en_us",
"string": "Name IP match"
}
],
"description": [
{
"language_code": "en_us",
"string": "If checked, the application will guess the name also by IPs, not only MACs. This approach works if your IPs are mostly static."
}
]
},
{
"function": "replace_preset_icon",
"type": {

View File

@@ -448,7 +448,7 @@
"description": [
{
"language_code": "en_us",
"string": "When scanning remote networks, NMAP can only retrieve the IP address, not the MAC address. Enabling this setting generates a fake MAC address from the IP address to track devices, but it may cause inconsistencies if IPs change or devices are rediscovered. Static IPs are recommended. Device type and icon will not be detected correctly. When unchecked, devices with empty MAC addresses are skipped."
"string": "When scanning remote networks, NMAP can only retrieve the IP address, not the MAC address. Enabling the FAKE_MAC setting generates a fake MAC address from the IP address to track devices, but it may cause inconsistencies if IPs change or devices are re-discovered with a different MAC. Static IPs are recommended. Device type and icon might not be detected correctly and some plugins might fail if they depend on a valid MAC address. When unchecked, devices with empty MAC addresses are skipped."
}
]
}

View File

@@ -13,13 +13,12 @@ import nmap
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
from helper import get_setting_value
from const import logPath
from database import DB
import conf
from pytz import timezone
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -46,7 +45,6 @@ def main():
mylog('verbose', [f'[{pluginName}] subnets: ', subnets])
# Initialize the Plugin obj output file
plugin_objects = Plugin_Objects(RESULT_FILE)
@@ -57,26 +55,27 @@ def main():
for device in unique_devices:
plugin_objects.add_object(
# "MAC", "IP", "Name", "Vendor", "Interface"
primaryId = device['mac'].lower(),
secondaryId = device['ip'],
watched1 = device['name'],
watched2 = device['vendor'],
watched3 = device['interface'],
watched4 = '',
extra = '',
foreignKey = device['mac'])
# "MAC", "IP", "Name", "Vendor", "Interface"
primaryId = device['mac'].lower(),
secondaryId = device['ip'],
watched1 = device['name'],
watched2 = device['vendor'],
watched3 = device['interface'],
watched4 = '',
extra = '',
foreignKey = device['mac']
)
plugin_objects.write_result_file()
mylog('verbose', [f'[{pluginName}] Script finished'])
return 0
#===============================================================================
# ===============================================================================
# Execute scan
#===============================================================================
# ===============================================================================
def execute_scan(subnets_list, timeout, fakeMac, args):
devices_list = []
@@ -103,18 +102,21 @@ def execute_scan(subnets_list, timeout, fakeMac, args):
return devices_list
def execute_scan_on_interface (interface, timeout, args):
def execute_scan_on_interface(interface, timeout, args):
# Remove unsupported VLAN flags
interface = re.sub(r'--vlan=\S+', '', interface).strip()
# Prepare command arguments
scan_args = args.split() + interface.replace('--interface=','-e ').split()
scan_args = args.split() + interface.replace('--interface=', '-e ').split()
mylog('verbose', [f'[{pluginName}] scan_args: ', scan_args])
try:
result = subprocess.check_output(scan_args, universal_newlines=True)
result = subprocess.check_output(
scan_args,
universal_newlines=True,
timeout=timeout
)
except subprocess.CalledProcessError as e:
error_type = type(e).__name__
result = ""
@@ -138,7 +140,6 @@ def parse_nmap_xml(xml_output, interface, fakeMac):
ip = nm[host]['addresses']['ipv4'] if 'ipv4' in nm[host]['addresses'] else ''
mac = nm[host]['addresses']['mac'] if 'mac' in nm[host]['addresses'] else ''
mylog('verbose', [f'[{pluginName}] nm[host]: ', nm[host]])
vendor = ''
@@ -148,10 +149,8 @@ def parse_nmap_xml(xml_output, interface, fakeMac):
for key, value in nm[host]['vendor'].items():
vendor = value
break
# Log debug information
mylog('verbose', [f"[{pluginName}] Hostname: {hostname}, IP: {ip}, MAC: {mac}, Vendor: {vendor}"])
@@ -172,7 +171,6 @@ def parse_nmap_xml(xml_output, interface, fakeMac):
# MAC or IP missing
mylog('verbose', [f"[{pluginName}] Skipping: {hostname}, IP or MAC missing, or NMAPDEV_GENERATE_MAC setting not enabled"])
except Exception as e:
mylog('verbose', [f"[{pluginName}] Error parsing nmap XML: ", str(e)])
@@ -184,12 +182,13 @@ def string_to_mac_hash(input_string):
sha256_hash = hashlib.sha256(input_string.encode()).hexdigest()
# Take the first 12 characters of the hash and format as a MAC address
mac_hash = ':'.join(sha256_hash[i:i+2] for i in range(0, 12, 2))
mac_hash = ':'.join(sha256_hash[i:i + 2] for i in range(0, 12, 2))
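# Deterministic: the same input string always maps to the same pseudo-MAC, e.g. "192.168.1.10" -> "ab:12:cd:34:ef:56" (illustrative value)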
return mac_hash
#===============================================================================
# ===============================================================================
# BEGIN
#===============================================================================
# ===============================================================================
if __name__ == '__main__':
main()

View File

@@ -9,13 +9,13 @@ import subprocess
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects, decodeBase64
from logger import mylog, Logger, append_line_to_file
from utils.datetime_utils import timeNowDB
from helper import get_setting_value
from const import logPath
import conf
from pytz import timezone
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger, append_line_to_file # noqa: E402 [flake8 lint suppression]
from utils.datetime_utils import timeNowDB # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -29,33 +29,60 @@ LOG_PATH = logPath + '/plugins'
LOG_FILE = os.path.join(LOG_PATH, f'script.{pluginName}.log')
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
#-------------------------------------------------------------------------------
# Initialize the Plugin obj output file
plugin_objects = Plugin_Objects(RESULT_FILE)
# -------------------------------------------------------------------------------
def main():
parser = argparse.ArgumentParser(description='Scan ports of devices specified by IP addresses')
parser.add_argument('ips', nargs='+', help="list of IPs to scan")
parser.add_argument('macs', nargs='+', help="list of MACs related to the supplied IPs in the same order")
parser.add_argument('timeout', nargs='+', help="timeout")
parser.add_argument('args', nargs='+', help="args")
values = parser.parse_args()
parser = argparse.ArgumentParser(
description='Scan ports of devices specified by IP addresses'
)
# Plugin_Objects is a class that reads data from the RESULT_FILE
# and returns a list of results.
plugin_objects = Plugin_Objects(RESULT_FILE)
# Accept ANY key=value pairs
parser.add_argument('params', nargs='+', help="key=value style params")
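# Hypothetical invocation with the new key=value params:
#   <script>.py ips=192.168.1.5,192.168.1.9 macs=aa:bb:cc:dd:ee:01,aa:bb:cc:dd:ee:02 timeout=30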
# Print a message to indicate that the script is starting.
mylog('debug', [f'[{pluginName}] In script '])
raw = parser.parse_args()
# Printing the params list to check its content.
mylog('debug', [f'[{pluginName}] values.ips: ', values.ips])
mylog('debug', [f'[{pluginName}] values.macs: ', values.macs])
mylog('debug', [f'[{pluginName}] values.timeout: ', values.timeout])
mylog('debug', [f'[{pluginName}] values.args: ', values.args])
try:
args = parse_kv_args(raw.params)
except ValueError as e:
mylog('error', [f"[{pluginName}] Argument error: {e}"])
sys.exit(1)
argsDecoded = decodeBase64(values.args[0].split('=b')[1])
# Required keys
required = ['ips', 'macs']
for key in required:
if key not in args:
mylog('error', [f"[{pluginName}] Missing required parameter: {key}"])
sys.exit(1)
mylog('debug', [f'[{pluginName}] argsDecoded: ', argsDecoded])
# Parse lists
ip_list = safe_split_list(args['ips'], "ips")
mac_list = safe_split_list(args['macs'], "macs")
entries = performNmapScan(values.ips[0].split('=')[1].split(','), values.macs[0].split('=')[1].split(',') , values.timeout[0].split('=')[1], argsDecoded)
if len(ip_list) != len(mac_list):
mylog('error', [
f"[{pluginName}] Mismatch: {len(ip_list)} IPs but {len(mac_list)} MACs"
])
sys.exit(1)
# Optional
timeout = int(args.get("timeout", get_setting_value("NMAP_RUN_TIMEOUT")))
NMAP_ARGS = get_setting_value("NMAP_ARGS")
mylog('debug', [f'[{pluginName}] Parsed IPs: {ip_list}'])
mylog('debug', [f'[{pluginName}] Parsed MACs: {mac_list}'])
mylog('debug', [f'[{pluginName}] Timeout: {timeout}'])
mylog('debug', [f'[{pluginName}] NMAP_ARGS: {NMAP_ARGS}'])
entries = performNmapScan(
ip_list,
mac_list,
timeout,
NMAP_ARGS
)
mylog('verbose', [f'[{pluginName}] Total number of ports found by NMAP: ', len(entries)])
@@ -74,8 +101,8 @@ def main():
plugin_objects.write_result_file()
#-------------------------------------------------------------------------------
# -------------------------------------------------------------------------------
class nmap_entry:
def __init__(self, ip, mac, time, port, state, service, name = '', extra = '', index = 0):
self.ip = ip
@@ -86,10 +113,41 @@ class nmap_entry:
self.service = service
self.extra = extra
self.index = index
self.hash = str(mac) + str(port)+ str(state)+ str(service)
self.hash = str(mac) + str(port) + str(state) + str(service)
#-------------------------------------------------------------------------------
# -------------------------------------------------------------------------------
def parse_kv_args(raw_args):
"""
Converts ['ips=a,b,c', 'macs=x,y,z', 'timeout=5'] to a dict.
Malformed items (missing '=') are skipped; unknown keys are simply left unused.
"""
parsed = {}
for item in raw_args:
if '=' not in item:
mylog('none', [f"[{pluginName}] Scan: Invalid parameter (missing '='): {item}"])
key, value = item.split('=', 1)
if key in parsed:
mylog('none', [f"[{pluginName}] Scan: Duplicate parameter supplied: {key}"])
parsed[key] = value
return parsed
# -------------------------------------------------------------------------------
def safe_split_list(value, keyname):
"""Split comma list safely and ensure no empty items."""
items = [x.strip() for x in value.split(',') if x.strip()]
if not items:
mylog('none', [f"[{pluginName}] Scan: {keyname} list is empty or invalid"])
return items
# -------------------------------------------------------------------------------
def performNmapScan(deviceIPs, deviceMACs, timeoutSec, args):
"""
run nmap scan on a list of devices
@@ -99,15 +157,12 @@ def performNmapScan(deviceIPs, deviceMACs, timeoutSec, args):
# collect ports / new Nmap Entries
newEntriesTmp = []
if len(deviceIPs) > 0:
devTotal = len(deviceIPs)
mylog('verbose', [f'[{pluginName}] Scan: Nmap for max ', str(timeoutSec), 's ('+ str(round(int(timeoutSec) / 60, 1)) +'min) per device'])
mylog('verbose', ["[NMAP Scan] Estimated max delay: ", (devTotal * int(timeoutSec)), 's ', '(', round((devTotal * int(timeoutSec))/60,1) , 'min)' ])
mylog('verbose', [f'[{pluginName}] Scan: Nmap for max ', str(timeoutSec), 's (' + str(round(int(timeoutSec) / 60, 1)) + 'min) per device'])
mylog('verbose', ["[NMAP Scan] Estimated max delay: ", (devTotal * int(timeoutSec)), 's ', '(', round((devTotal * int(timeoutSec)) / 60, 1) , 'min)'])
devIndex = 0
for ip in deviceIPs:
@@ -116,32 +171,34 @@ def performNmapScan(deviceIPs, deviceMACs, timeoutSec, args):
# prepare arguments from user supplied ones
nmapArgs = ['nmap'] + args.split() + [ip]
progress = ' (' + str(devIndex+1) + '/' + str(devTotal) + ')'
progress = ' (' + str(devIndex + 1) + '/' + str(devTotal) + ')'
try:
# try running a subprocess with a forced (timeout) in case the subprocess hangs
output = subprocess.check_output (nmapArgs, universal_newlines=True, stderr=subprocess.STDOUT, timeout=(float(timeoutSec)))
output = subprocess.check_output(
nmapArgs,
universal_newlines=True,
stderr=subprocess.STDOUT,
timeout=(float(timeoutSec))
)
except subprocess.CalledProcessError as e:
# An error occurred, handle it
mylog('none', ["[NMAP Scan] " ,e.output])
mylog('none', ["[NMAP Scan] ", e.output])
mylog('none', ["[NMAP Scan] ⚠ ERROR - Nmap Scan - check logs", progress])
except subprocess.TimeoutExpired:
mylog('verbose', [f'[{pluginName}] Nmap TIMEOUT - the process forcefully terminated as timeout reached for ', ip, progress])
if output == "": # check if the subprocess failed
mylog('minimal', [f'[{pluginName}] Nmap FAIL for ', ip, progress ,' check logs for details'])
if output == "": # check if the subprocess failed
mylog('minimal', [f'[{pluginName}] Nmap FAIL for ', ip, progress, ' check logs for details'])
else:
mylog('verbose', [f'[{pluginName}] Nmap SUCCESS for ', ip, progress])
# check the last run output
newLines = output.split('\n')
# regular logging
for line in newLines:
append_line_to_file (logPath + '/app_nmap.log', line +'\n')
append_line_to_file(logPath + '/app_nmap.log', line + '\n')
index = 0
startCollecting = False
@@ -149,34 +206,28 @@ def performNmapScan(deviceIPs, deviceMACs, timeoutSec, args):
newPortsPerDevice = 0
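# Illustrative nmap output the loop below keys on:
#   Starting Nmap 7.93 ( https://nmap.org )
#   PORT     STATE SERVICE
#   22/tcp   open  ssh
#   Nmap done: 1 IP address (1 host up) scanned in 2.05 seconds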
for line in newLines:
if 'Starting Nmap' in line:
if len(newLines) > index+1 and 'Note: Host seems down' in newLines[index+1]:
break # this entry is empty
if len(newLines) > index + 1 and 'Note: Host seems down' in newLines[index + 1]:
break # this entry is empty
elif 'PORT' in line and 'STATE' in line and 'SERVICE' in line:
startCollecting = True
elif 'PORT' in line and 'STATE' in line and 'SERVICE' in line:
startCollecting = False # end reached
startCollecting = False # end reached
elif startCollecting and len(line.split()) == 3:
newEntriesTmp.append(nmap_entry(ip, deviceMACs[devIndex], timeNowDB(), line.split()[0], line.split()[1], line.split()[2]))
newPortsPerDevice += 1
elif 'Nmap done' in line:
duration = line.split('scanned in ')[1]
mylog('verbose', [f'[{pluginName}] {newPortsPerDevice} ports found on {deviceMACs[devIndex]}'])
mylog('verbose', [f'[{pluginName}] {newPortsPerDevice} ports found on {deviceMACs[devIndex]} after {duration}'])
index += 1
devIndex += 1
#end for loop
return newEntriesTmp
#===============================================================================
# ===============================================================================
# BEGIN
#===============================================================================
# ===============================================================================
if __name__ == '__main__':
main()

View File

@@ -11,14 +11,14 @@ import re
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
from helper import get_setting_value
from const import logPath
from database import DB
from models.device_instance import DeviceInstance
import conf
from pytz import timezone
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -33,12 +33,10 @@ LOG_FILE = os.path.join(LOG_PATH, f'script.{pluginName}.log')
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
def main():
mylog('verbose', [f'[{pluginName}] In script'])
timeout = get_setting_value('NSLOOKUP_RUN_TIMEOUT')
# Create a database connection
@@ -67,27 +65,28 @@ def main():
if domain_name != '':
plugin_objects.add_object(
# "MAC", "IP", "Server", "Name"
primaryId = device['devMac'],
secondaryId = device['devLastIP'],
watched1 = dns_server,
watched2 = domain_name,
watched3 = '',
watched4 = '',
extra = '',
foreignKey = device['devMac'])
# "MAC", "IP", "Server", "Name"
primaryId = device['devMac'],
secondaryId = device['devLastIP'],
watched1 = dns_server,
watched2 = domain_name,
watched3 = '',
watched4 = '',
extra = '',
foreignKey = device['devMac']
)
plugin_objects.write_result_file()
mylog('verbose', [f'[{pluginName}] Script finished'])
return 0
#===============================================================================
# ===============================================================================
# Execute scan
#===============================================================================
def execute_nslookup (ip, timeout):
# ===============================================================================
def execute_nslookup(ip, timeout):
"""
Execute the NSLOOKUP command on IP.
"""
@@ -99,7 +98,13 @@ def execute_nslookup (ip, timeout):
try:
# try running a subprocess with a forced (timeout) in case the subprocess hangs
output = subprocess.check_output (nslookup_args, universal_newlines=True, stderr=subprocess.STDOUT, timeout=(timeout), text=True)
output = subprocess.check_output(
nslookup_args,
universal_newlines=True,
stderr=subprocess.STDOUT,
timeout=(timeout),
text=True
)
domain_name = ''
dns_server = ''
@@ -110,7 +115,6 @@ def execute_nslookup (ip, timeout):
domain_pattern = re.compile(r'name\s*=\s*([^\s]+)', re.IGNORECASE)
server_pattern = re.compile(r'Server:\s+(.+)', re.IGNORECASE)
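# Illustrative nslookup output the two patterns are meant to match:
#   Server:  192.168.1.1
#   10.1.168.192.in-addr.arpa   name = mydevice.home.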
domain_match = domain_pattern.search(output)
server_match = server_pattern.search(output)
@@ -136,19 +140,15 @@ def execute_nslookup (ip, timeout):
except subprocess.TimeoutExpired:
mylog('verbose', [f'[{pluginName}] TIMEOUT - the process forcefully terminated as timeout reached'])
if output == "": # check if the subprocess failed
tmp = 1 # can't have empty
# mylog('verbose', [f'[{pluginName}] Scan: FAIL - check logs'])
else:
if output != "": # check if the subprocess failed
mylog('verbose', [f'[{pluginName}] Scan: SUCCESS'])
return '', ''
#===============================================================================
# ===============================================================================
# BEGIN
#===============================================================================
# ===============================================================================
if __name__ == '__main__':
main()

View File

@@ -15,10 +15,9 @@ __version__ = "1.3" # fix detection of the default gateway IP address that woul
# try to identify and populate their connections by switch/accesspoints and ports/SSID
# try to differentiate root bridges from accessory
#
# sample code to update unbound on opnsense - for reference...
# curl -X POST -d '{"host":{"enabled":"1","hostname":"test","domain":"testdomain.com","rr":"A","mxprio":"","mx":"","server":"10.0.1.1","description":""}}' -H "Content-Type: application/json" -k -u $OPNS_KEY:$OPNS_SECRET https://$IPFW/api/unbound/settings/AddHostOverride
# curl -X POST -d '{"host":{"enabled":"1","hostname":"test","domain":"testdomain.com","rr":"A","mxprio":"","mx":"","server":"10.0.1.1","description":""}}'\
# -H "Content-Type: application/json" -k -u $OPNS_KEY:$OPNS_SECRET https://$IPFW/api/unbound/settings/AddHostOverride
#
import os
import sys
@@ -35,12 +34,12 @@ import multiprocessing
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
from const import logPath
from helper import get_setting_value
from pytz import timezone
import conf
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -87,8 +86,6 @@ cMAC, cIP, cNAME, cSWITCH_AP, cPORT_SSID = range(5)
OMDLOGLEVEL = "debug"
#
# translate MAC address from standard ieee model to ietf draft
# AA-BB-CC-DD-EE-FF to aa:bb:cc:dd:ee:ff
# tplink adheres to ieee, Nax adheres to ietf
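# e.g. "AA-BB-CC-DD-EE-FF" -> "aa:bb:cc:dd:ee:ff" (lowercase, dashes to colons)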
@@ -137,12 +134,12 @@ def callomada(myargs):
omada_output = ""
retries = 2
while omada_output == "" and retries > 1:
while omada_output == "" and retries > 0:
retries = retries - 1
try:
mf = io.StringIO()
with redirect_stdout(mf):
bar = omada(myargs)
omada(myargs)
omada_output = mf.getvalue()
except Exception:
mylog(
@@ -186,55 +183,71 @@ def add_uplink(
sadevices_linksbymac,
port_byswitchmac_byclientmac,
):
# Ensure switch_mac exists in device_data_bymac
# Ensure switch exists
if switch_mac not in device_data_bymac:
mylog("none", [f"[{pluginName}] switch_mac '{switch_mac}' not found in device_data_bymac"])
return
# Ensure SWITCH_AP key exists in the dictionary
if SWITCH_AP not in device_data_bymac[switch_mac]:
mylog("none", [f"[{pluginName}] Missing key '{SWITCH_AP}' in device_data_bymac[{switch_mac}]"])
dev_switch = device_data_bymac[switch_mac]
# Ensure list is long enough to contain SWITCH_AP index
if len(dev_switch) <= SWITCH_AP:
mylog("none", [f"[{pluginName}] SWITCH_AP index {SWITCH_AP} missing in record for {switch_mac}"])
return
# Check if uplink should be added
if device_data_bymac[switch_mac][SWITCH_AP] in [None, "null"]:
device_data_bymac[switch_mac][SWITCH_AP] = uplink_mac
# Add uplink only if empty
if dev_switch[SWITCH_AP] in (None, "null"):
dev_switch[SWITCH_AP] = uplink_mac
# Ensure uplink_mac exists in device_data_bymac
# Validate uplink_mac exists
if uplink_mac not in device_data_bymac:
mylog("none", [f"[{pluginName}] uplink_mac '{uplink_mac}' not found in device_data_bymac"])
return
# Determine port to uplink
if (
device_data_bymac[switch_mac].get(TYPE) == "Switch"
and device_data_bymac[uplink_mac].get(TYPE) == "Switch"
):
dev_uplink = device_data_bymac[uplink_mac]
# Get TYPE safely
switch_type = dev_switch[TYPE] if len(dev_switch) > TYPE else None
uplink_type = dev_uplink[TYPE] if len(dev_uplink) > TYPE else None
# Switch-to-switch link → use port mapping
if switch_type == "Switch" and uplink_type == "Switch":
port_to_uplink = port_byswitchmac_byclientmac.get(switch_mac, {}).get(uplink_mac)
if port_to_uplink is None:
mylog("none", [f"[{pluginName}] Missing port info for switch_mac '{switch_mac}' and uplink_mac '{uplink_mac}'"])
mylog("none", [
f"[{pluginName}] Missing port info for {switch_mac}{uplink_mac}"
])
return
else:
port_to_uplink = device_data_bymac[uplink_mac].get(PORT_SSID)
# Other device types → read PORT_SSID index
if len(dev_uplink) <= PORT_SSID:
mylog("none", [
f"[{pluginName}] PORT_SSID index missing for uplink {uplink_mac}"
])
return
port_to_uplink = dev_uplink[PORT_SSID]
# Assign port to switch_mac
device_data_bymac[switch_mac][PORT_SSID] = port_to_uplink
# Assign port to switch
if len(dev_switch) > PORT_SSID:
dev_switch[PORT_SSID] = port_to_uplink
else:
mylog("none", [
f"[{pluginName}] PORT_SSID index missing in switch {switch_mac}"
])
# Recursively add uplinks for linked devices
# Process children recursively
for link in sadevices_linksbymac.get(switch_mac, []):
if (
link in device_data_bymac
and device_data_bymac[link].get(SWITCH_AP) in [None, "null"]
and device_data_bymac[switch_mac].get(TYPE) == "Switch"
link in device_data_bymac
and len(device_data_bymac[link]) > SWITCH_AP
and device_data_bymac[link][SWITCH_AP] in (None, "null")
and len(dev_switch) > TYPE
):
add_uplink(
switch_mac,
link,
device_data_bymac,
sadevices_linksbymac,
port_byswitchmac_byclientmac,
)
if dev_switch[TYPE] == "Switch":
add_uplink(
switch_mac,
link,
device_data_bymac,
sadevices_linksbymac,
port_byswitchmac_byclientmac,
)
# ----------------------------------------------
@@ -324,16 +337,16 @@ def main():
)
mymac = ieee2ietf_mac_formater(device[MAC])
plugin_objects.add_object(
primaryId=mymac, # MAC
secondaryId=device[IP], # IP
watched1=device[NAME], # NAME/HOSTNAME
watched2=ParentNetworkNode, # PARENT NETWORK NODE MAC
watched3=myport, # PORT
watched4=myssid, # SSID
primaryId=mymac, # MAC
secondaryId=device[IP], # IP
watched1=device[NAME], # NAME/HOSTNAME
watched2=ParentNetworkNode, # PARENT NETWORK NODE MAC
watched3=myport, # PORT
watched4=myssid, # SSID
extra=device[TYPE],
# omada_site, # SITENAME (cur_NetworkSite) or VENDOR (cur_Vendor) (PICK one and adjust config.json -> "column": "Extra")
foreignKey=device[MAC].lower().replace("-", ":"),
) # usually MAC
) # usually MAC
mylog(
"verbose",
@@ -369,7 +382,6 @@ def get_omada_devices_details(msadevice_data):
mswitch_dump = callomada(["-t", "myomada", "switch", "-d", mthisswitch])
else:
mswitch_detail = ""
nswitch_dump = ""
return mswitch_detail, mswitch_dump
@@ -414,7 +426,6 @@ def get_device_data(omada_clients_output, switches_and_aps, device_handler):
# 17:27:10 [<unique_prefix>] token: "['1A-2B-3C-4D-5E-6F', '192.168.0.217', '1A-2B-3C-4D-5E-6F', '17', '40-AE-30-A5-A7-50, 'Switch']"
# constants
sadevices_macbyname = {}
sadevices_macbymac = {}
sadevices_linksbymac = {}
port_byswitchmac_byclientmac = {}
device_data_bymac = {}
@@ -556,11 +567,11 @@ def get_device_data(omada_clients_output, switches_and_aps, device_handler):
#
naxname = real_naxname
if real_naxname != None:
if real_naxname is not None:
if "(" in real_naxname:
# removing parenthesis and domains from the name
naxname = real_naxname.split("(")[0]
if naxname != None and "." in naxname:
if naxname is not None and "." in naxname:
naxname = naxname.split(".")[0]
if naxname in (None, "null", ""):
naxname = (

View File

@@ -25,7 +25,6 @@ import sys
import urllib3
import requests
import time
import datetime
import pytz
from datetime import datetime
@@ -35,11 +34,11 @@ from typing import Literal, Any, Dict
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects, is_typical_router_ip, is_mac
from logger import mylog, Logger
from const import logPath
from helper import get_setting_value
import conf
from plugin_helper import Plugin_Objects, is_typical_router_ip, is_mac # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = pytz.timezone(get_setting_value('TIMEZONE'))
@@ -176,7 +175,10 @@ class OmadaHelper:
# If it's not a gateway try to assign parent node MAC
if data.get("type", "") != "gateway":
parent_mac = OmadaHelper.normalize_mac(data.get("uplinkDeviceMac"))
entry["parent_node_mac_address"] = parent_mac.get("response_result") if isinstance(parent_mac, dict) and parent_mac.get("response_type") == "success" else ""
resp_type = parent_mac.get("response_type")
entry["parent_node_mac_address"] = parent_mac.get("response_result") if isinstance(parent_mac, dict) and resp_type == "success" else ""
# Applicable only for CLIENT
if input_type == "client":
@@ -185,15 +187,26 @@ class OmadaHelper:
# Try to assign parent node MAC and PORT/SSID to the CLIENT
if data.get("connectDevType", "") == "gateway":
parent_mac = OmadaHelper.normalize_mac(data.get("gatewayMac"))
entry["parent_node_mac_address"] = parent_mac.get("response_result") if isinstance(parent_mac, dict) and parent_mac.get("response_type") == "success" else ""
resp_type = parent_mac.get("response_type")
entry["parent_node_mac_address"] = parent_mac.get("response_result") if isinstance(parent_mac, dict) and resp_type == "success" else ""
entry["parent_node_port"] = data.get("port", "")
elif data.get("connectDevType", "") == "switch":
parent_mac = OmadaHelper.normalize_mac(data.get("switchMac"))
entry["parent_node_mac_address"] = parent_mac.get("response_result") if isinstance(parent_mac, dict) and parent_mac.get("response_type") == "success" else ""
resp_type = parent_mac.get("response_type")
entry["parent_node_mac_address"] = parent_mac.get("response_result") if isinstance(parent_mac, dict) and resp_type == "success" else ""
entry["parent_node_port"] = data.get("port", "")
elif data.get("connectDevType", "") == "ap":
parent_mac = OmadaHelper.normalize_mac(data.get("apMac"))
entry["parent_node_mac_address"] = parent_mac.get("response_result") if isinstance(parent_mac, dict) and parent_mac.get("response_type") == "success" else ""
resp_type = parent_mac.get("response_type")
entry["parent_node_mac_address"] = parent_mac.get("response_result") if isinstance(parent_mac, dict) and resp_type == "success" else ""
entry["parent_node_ssid"] = data.get("ssid", "")
# Add the entry to the result
@@ -253,7 +266,7 @@ class OmadaAPI:
"""Return request headers."""
headers = {"Content-type": "application/json"}
# Add access token to header if requested and available
if include_auth == True:
if include_auth is True:
if not self.access_token:
OmadaHelper.debug("No access token available for headers")
else:
@@ -283,7 +296,7 @@ class OmadaAPI:
OmadaHelper.verbose(f"{method} request error: {str(ex)}")
return OmadaHelper.response("error", f"{method} request failed to endpoint '{endpoint}' with error: {str(ex)}")
def authenticate(self) -> Dict[str, any]:
def authenticate(self) -> Dict[str, Any]:
"""Make an endpoint request to get access token."""
OmadaHelper.verbose("Starting authentication process")

View File

@@ -0,0 +1,133 @@
## Overview: PIHOLEAPI Plugin for Pi-hole v6 Device Import
The **PIHOLEAPI** plugin lets NetAlertX import network devices directly from a **Pi-hole v6** instance.
This turns Pi-hole into an additional discovery source, helping NetAlertX stay aware of devices seen by your DNS server.
The plugin connects to your Pi-hole's API and retrieves:
* MAC addresses
* IP addresses
* Hostnames (if available)
* Vendor info
* Last-seen timestamps
NetAlertX then uses this information to match or create devices in your system.
> [!TIP]
> This plugin targets the Pi-hole **v6** API. It will not work with Pi-hole v5 or older.
### Quick setup guide

Before enabling the plugin, make sure that:

* You are running **Pi-hole v6** or newer.
* The Web UI password in **Pi-hole** is set.
* Local network devices appear under **Settings → Network** in Pi-hole.
No additional Pi-hole configuration is required.
### Usage
- Head to **Settings** > **PIHOLEAPI** to adjust the default values.
| Setting Key | Description |
| ---------------------------- | -------------------------------------------------------------------------------- |
| **PIHOLEAPI_URL** | Your Pi-hole base URL. |
| **PIHOLEAPI_PASSWORD** | The Pi-hole Web UI admin password, stored base64 encoded (encoding and decoding are handled by the app). |
| **PIHOLEAPI_SSL_VERIFY** | Whether to verify HTTPS certificates. Disable only for self-signed certificates. |
| **PIHOLEAPI_RUN_TIMEOUT** | Request timeout in seconds. |
| **PIHOLEAPI_API_MAXCLIENTS** | Maximum number of devices to request from Pi-hole. Defaults are usually fine. |
### Example Configuration
| Setting Key | Sample Value |
| ---------------------------- | -------------------------------------------------- |
| **PIHOLEAPI_URL** | `http://pi.hole/` |
| **PIHOLEAPI_PASSWORD** | `passw0rd` |
| **PIHOLEAPI_SSL_VERIFY** | `true` |
| **PIHOLEAPI_RUN_TIMEOUT** | `30` |
| **PIHOLEAPI_API_MAXCLIENTS** | `500` |
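
For orientation, here is a rough sketch of the request flow such an import performs. It assumes Pi-hole v6's `/api/auth` and `/api/network/devices` endpoints and the `X-FTL-SID` session header; the response field names are illustrative rather than taken from the plugin source:

```
import requests

BASE_URL = "http://pi.hole/"   # PIHOLEAPI_URL (trailing slash expected)
PASSWORD = "passw0rd"          # PIHOLEAPI_PASSWORD (plain text here for illustration)
VERIFY_SSL = True              # PIHOLEAPI_SSL_VERIFY
TIMEOUT = 30                   # PIHOLEAPI_RUN_TIMEOUT

# 1) Authenticate to obtain a session id (sid)
auth = requests.post(f"{BASE_URL}api/auth", json={"password": PASSWORD},
                     verify=VERIFY_SSL, timeout=TIMEOUT)
sid = auth.json()["session"]["sid"]

# 2) Request known network devices (PIHOLEAPI_API_MAXCLIENTS caps the count)
resp = requests.get(f"{BASE_URL}api/network/devices",
                    params={"max_devices": 500},
                    headers={"X-FTL-SID": sid},
                    verify=VERIFY_SSL, timeout=TIMEOUT)

for device in resp.json().get("devices", []):
    # hwaddr / macVendor / lastQuery are illustrative field names
    print(device.get("hwaddr"), device.get("macVendor"), device.get("lastQuery"))
```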
### ⚠️ Troubleshooting
Below are the most common issues and how to resolve them.
---
#### ❌ Authentication failed
Check the following:
* The Pi-hole URL is correct and includes a trailing slash, e.g.:
  * `http://192.168.1.10/`
  * `http://192.168.1.10/admin/`
* Your Pi-hole password is correct
* You are using **Pi-hole v6**, not v5
* SSL verification matches your setup (disable for self-signed certificates)
---
#### ❌ Connection error
Usually caused by:
* Wrong URL
* Wrong HTTP/HTTPS selection
* Timeout too low
Try:
```
PIHOLEAPI_URL = http://<pi-hole-ip>/
PIHOLEAPI_RUN_TIMEOUT = 60
```
---
#### ❌ No devices imported
Check:
* Pi-hole shows devices under **Settings → Network**
* NetAlertX logs contain:
```
[PIHOLEAPI] Pi-hole API returned data
```
If nothing appears:
* Pi-hole might be returning empty results
* Your network interface list may be empty
* A firewall or reverse proxy is blocking access
Try enabling debug logging:
```
LOG_LEVEL = debug
```
Then re-run the plugin.
---
#### ❌ Wrong or missing hostnames
Pi-hole only reports names it knows from:
* Local DNS
* DHCP leases
* Previously seen queries
If names are missing, confirm they appear in Pi-hole's own UI first.
### Notes
- Version: 1.0.0
- Author: `jokob-sk`, `leiweibau`
- Release Date: `11-2025`
---

View File

@@ -0,0 +1,476 @@
{
"code_name": "pihole_api_scan",
"unique_prefix": "PIHOLEAPI",
"plugin_type": "device_scanner",
"execution_order" : "Layer_0",
"enabled": true,
"data_source": "script",
"mapped_to_table": "CurrentScan",
"data_filters": [
{
"compare_column": "Object_PrimaryID",
"compare_operator": "==",
"compare_field_id": "txtMacFilter",
"compare_js_template": "'{value}'.toString()",
"compare_use_quotes": true
}
],
"show_ui": true,
"localized": ["display_name", "description", "icon"],
"display_name": [
{
"language_code": "en_us",
"string": "PiHole API scan"
}
],
"description": [
{
"language_code": "en_us",
"string": "Imports devices from PiHole via APIv6"
}
],
"icon": [
{
"language_code": "en_us",
"string": "<i class=\"fa fa-search\"></i>"
}
],
"params": [],
"settings": [
{
"function": "RUN",
"events": ["run"],
"type": {
"dataType": "string",
"elements": [
{ "elementType": "select", "elementOptions": [], "transformers": [] }
]
},
"default_value": "disabled",
"options": [
"disabled",
"once",
"schedule",
"always_after_scan"
],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "When to run"
}
],
"description": [
{
"language_code": "en_us",
"string": "When the plugin should run. Good options are <code>always_after_scan</code>, <code>schedule</code>."
}
]
},
{
"function": "RUN_SCHD",
"type": {
"dataType": "string",
"elements": [
{
"elementType": "span",
"elementOptions": [
{
"cssClasses": "input-group-addon validityCheck"
},
{
"getStringKey": "Gen_ValidIcon"
}
],
"transformers": []
},
{
"elementType": "input",
"elementOptions": [
{
"onChange": "validateRegex(this)"
},
{
"base64Regex": "Xig/OlwqfCg/OlswLTldfFsxLTVdWzAtOV18WzAtOV0rLVswLTldK3xcKi9bMC05XSspKVxzKyg/OlwqfCg/OlswLTldfDFbMC05XXwyWzAtM118WzAtOV0rLVswLTldK3xcKi9bMC05XSspKVxzKyg/OlwqfCg/OlsxLTldfFsxMl1bMC05XXwzWzAxXXxbMC05XSstWzAtOV0rfFwqL1swLTldKykpXHMrKD86XCp8KD86WzEtOV18MVswLTJdfFswLTldKy1bMC05XSt8XCovWzAtOV0rKSlccysoPzpcKnwoPzpbMC02XXxbMC02XS1bMC02XXxcKi9bMC05XSspKSQ="
}
],
"transformers": []
}
]
},
"default_value": "*/5 * * * *",
"options": [],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Schedule"
}
],
"description": [
{
"language_code": "en_us",
"string": "Only enabled if you select <code>schedule</code> in the <a href=\"#SYNC_RUN\"><code>SYNC_RUN</code> setting</a>. Make sure you enter the schedule in the correct cron-like format (e.g. validate at <a href=\"https://crontab.guru/\" target=\"_blank\">crontab.guru</a>). For example entering <code>0 4 * * *</code> will run the scan after 4 am in the <a onclick=\"toggleAllSettings()\" href=\"#TIMEZONE\"><code>TIMEZONE</code> you set above</a>. Will be run NEXT time the time passes."
}
]
},
{
"function": "URL",
"type": {
"dataType": "string",
"elements": [
{ "elementType": "input", "elementOptions": [], "transformers": [] }
]
},
"maxLength": 50,
"default_value": "",
"options": [],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Setting name"
}
],
"description": [
{
"language_code": "en_us",
"string": "URL to your PiHole instance, for example <code>http://pi.hole:8080/</code>"
}
]
},
{
"function": "PASSWORD",
"type": {
"dataType": "string",
"elements": [
{
"elementType": "input",
"elementOptions": [{ "type": "password" }],
"transformers": []
}
]
},
"default_value": "",
"options": [],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Password"
}
],
"description": [
{
"language_code": "en_us",
"string": "PiHole WEB UI password."
}
]
},
{
"function": "VERIFY_SSL",
"type": {
"dataType": "boolean",
"elements": [
{
"elementType": "input",
"elementOptions": [{ "type": "checkbox" }],
"transformers": []
}
]
},
"default_value": false,
"options": [],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Verify SSL"
}
],
"description": [
{
"language_code": "en_us",
"string": "Enable TLS support. Disable if you are using a self-signed certificate."
}
]
},
{
"function": "API_MAXCLIENTS",
"type": {
"dataType": "integer",
"elements": [
{
"elementType": "input",
"elementOptions": [{ "type": "number" }],
"transformers": []
}
]
},
"default_value": 500,
"options": [],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Max Clients"
}
],
"description": [
{
"language_code": "en_us",
"string": "Maximum number of devices to import."
}
]
},
{
"function": "CMD",
"type": {
"dataType": "string",
"elements": [
{
"elementType": "input",
"elementOptions": [{ "readonly": "true" }],
"transformers": []
}
]
},
"default_value": "python3 /app/front/plugins/pihole_api_scan/pihole_api_scan.py",
"options": [],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Command"
}
],
"description": [
{
"language_code": "en_us",
"string": "Command to run. This can not be changed"
}
]
},
{
"function": "RUN_TIMEOUT",
"type": {
"dataType": "integer",
"elements": [
{
"elementType": "input",
"elementOptions": [{ "type": "number" }],
"transformers": []
}
]
},
"default_value": 30,
"options": [],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Run timeout"
}
],
"description": [
{
"language_code": "en_us",
"string": "Maximum time in seconds to wait for the script to finish. If this time is exceeded the script is aborted."
}
]
}
],
"database_column_definitions": [
{
"column": "Index",
"css_classes": "col-sm-2",
"show": true,
"type": "none",
"default_value": "",
"options": [],
"localized": ["name"],
"name": [
{
"language_code": "en_us",
"string": "Index"
}
]
},
{
"column": "Object_PrimaryID",
"mapped_to_column": "cur_MAC",
"css_classes": "col-sm-3",
"show": true,
"type": "device_name_mac",
"default_value": "",
"options": [],
"localized": ["name"],
"name": [
{
"language_code": "en_us",
"string": "MAC (name)"
}
]
},
{
"column": "Object_SecondaryID",
"mapped_to_column": "cur_IP",
"css_classes": "col-sm-2",
"show": true,
"type": "device_ip",
"default_value": "",
"options": [],
"localized": ["name"],
"name": [
{
"language_code": "en_us",
"string": "IP"
}
]
},
{
"column": "Watched_Value1",
"mapped_to_column": "cur_Name",
"css_classes": "col-sm-2",
"show": true,
"type": "label",
"default_value": "",
"options": [],
"localized": ["name"],
"name": [
{
"language_code": "en_us",
"string": "Name"
}
]
},
{
"column": "Watched_Value2",
"mapped_to_column": "cur_Vendor",
"css_classes": "col-sm-2",
"show": true,
"type": "label",
"default_value": "",
"options": [],
"localized": ["name"],
"name": [
{
"language_code": "en_us",
"string": "Vendor"
}
]
},
{
"column": "Watched_Value3",
"css_classes": "col-sm-2",
"show": true,
"type": "label",
"default_value": "",
"options": [],
"localized": ["name"],
"name": [
{
"language_code": "en_us",
"string": "Last Query"
}
]
},
{
"column": "Watched_Value4",
"css_classes": "col-sm-2",
"show": false,
"type": "label",
"default_value": "",
"options": [],
"localized": ["name"],
"name": [
{
"language_code": "en_us",
"string": "N/A"
}
]
},
{
"column": "Dummy",
"mapped_to_column": "cur_ScanMethod",
"mapped_to_column_data": {
"value": "PIHOLEAPI"
},
"css_classes": "col-sm-2",
"show": false,
"type": "label",
"default_value": "",
"options": [],
"localized": ["name"],
"name": [
{
"language_code": "en_us",
"string": "Scan method"
}
]
},
{
"column": "DateTimeCreated",
"css_classes": "col-sm-2",
"show": true,
"type": "label",
"default_value": "",
"options": [],
"localized": ["name"],
"name": [
{
"language_code": "en_us",
"string": "Created"
}
]
},
{
"column": "DateTimeChanged",
"css_classes": "col-sm-2",
"show": true,
"type": "label",
"default_value": "",
"options": [],
"localized": ["name"],
"name": [
{
"language_code": "en_us",
"string": "Changed"
}
]
},
{
"column": "Status",
"css_classes": "col-sm-1",
"show": true,
"type": "replace",
"default_value": "",
"options": [
{
"equals": "watched-not-changed",
"replacement": "<div style='text-align:center'><i class='fa-solid fa-square-check'></i><div></div>"
},
{
"equals": "watched-changed",
"replacement": "<div style='text-align:center'><i class='fa-solid fa-triangle-exclamation'></i></div>"
},
{
"equals": "new",
"replacement": "<div style='text-align:center'><i class='fa-solid fa-circle-plus'></i></div>"
},
{
"equals": "missing-in-last-scan",
"replacement": "<div style='text-align:center'><i class='fa-solid fa-question'></i></div>"
}
],
"localized": ["name"],
"name": [
{
"language_code": "en_us",
"string": "Status"
}
]
}
]
}
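A side note on the base64Regex value above: it is simply a base64-encoded validation pattern for five-field cron expressions. A minimal sketch to decode and sanity-check it (Python 3; the constant name is illustrative):

import base64
import re

# base64Regex copied verbatim from the schedule setting above
B64 = "Xig/OlwqfCg/OlswLTldfFsxLTVdWzAtOV18WzAtOV0rLVswLTldK3xcKi9bMC05XSspKVxzKyg/OlwqfCg/OlswLTldfDFbMC05XXwyWzAtM118WzAtOV0rLVswLTldK3xcKi9bMC05XSspKVxzKyg/OlwqfCg/OlsxLTldfFsxMl1bMC05XXwzWzAxXXxbMC05XSstWzAtOV0rfFwqL1swLTldKykpXHMrKD86XCp8KD86WzEtOV18MVswLTJdfFswLTldKy1bMC05XSt8XCovWzAtOV0rKSlccysoPzpcKnwoPzpbMC02XXxbMC02XS1bMC02XXxcKi9bMC05XSspKSQ="

pattern = base64.b64decode(B64).decode('ascii')
print(pattern)                                    # a ^...$ anchored, five-field cron regex
print(bool(re.match(pattern, '*/5 * * * *')))     # the setting's default_value -> True
print(bool(re.match(pattern, '0 4 * * *')))       # the example from the description -> True
print(bool(re.match(pattern, 'not a schedule')))  # -> False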

View File

@@ -0,0 +1,298 @@
#!/usr/bin/env python
"""
NetAlertX plugin: PIHOLEAPI
Imports devices from Pi-hole v6 API (Network endpoints) into NetAlertX plugin results.
"""
import os
import sys
import datetime
import requests
import json
from requests.packages.urllib3.exceptions import InsecureRequestWarning
# --- NetAlertX plugin bootstrap (match example) ---
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
pluginName = 'PIHOLEAPI'
from plugin_helper import Plugin_Objects, is_mac # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Setup timezone & logger using standard NAX helpers
conf.tz = timezone(get_setting_value('TIMEZONE'))
Logger(get_setting_value('LOG_LEVEL'))
LOG_PATH = logPath + '/plugins'
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
plugin_objects = Plugin_Objects(RESULT_FILE)
# --- Global state for session ---
PIHOLEAPI_URL = None
PIHOLEAPI_PASSWORD = None
PIHOLEAPI_SES_VALID = False
PIHOLEAPI_SES_SID = None
PIHOLEAPI_SES_CSRF = None
PIHOLEAPI_API_MAXCLIENTS = None
PIHOLEAPI_VERIFY_SSL = True
PIHOLEAPI_RUN_TIMEOUT = 10
VERSION_DATE = "NAX-PIHOLEAPI-1.0"
# ------------------------------------------------------------------
def pihole_api_auth():
"""Authenticate to Pi-hole v6 API and populate session globals."""
global PIHOLEAPI_SES_VALID, PIHOLEAPI_SES_SID, PIHOLEAPI_SES_CSRF
if not PIHOLEAPI_URL:
mylog('none', [f'[{pluginName}] PIHOLEAPI_URL not configured — skipping.'])
return False
# handle SSL verification setting - disable insecure warnings only when PIHOLEAPI_VERIFY_SSL=False
if not PIHOLEAPI_VERIFY_SSL:
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
headers = {
"accept": "application/json",
"content-type": "application/json",
"User-Agent": "NetAlertX/" + VERSION_DATE
}
data = {"password": PIHOLEAPI_PASSWORD}
try:
resp = requests.post(PIHOLEAPI_URL + 'api/auth', headers=headers, json=data, verify=PIHOLEAPI_VERIFY_SSL, timeout=PIHOLEAPI_RUN_TIMEOUT)
resp.raise_for_status()
except requests.exceptions.Timeout:
mylog('none', [f'[{pluginName}] Pi-hole auth request timed out. Try increasing PIHOLEAPI_RUN_TIMEOUT.'])
return False
except requests.exceptions.ConnectionError:
mylog('none', [f'[{pluginName}] Connection error during Pi-hole auth. Check PIHOLEAPI_URL and PIHOLEAPI_PASSWORD'])
return False
except Exception as e:
mylog('none', [f'[{pluginName}] Unexpected auth error: {e}'])
return False
try:
response_json = resp.json()
except Exception:
mylog('none', [f'[{pluginName}] Unable to parse Pi-hole auth response JSON.'])
return False
session_data = response_json.get('session', {})
if session_data.get('valid', False):
PIHOLEAPI_SES_VALID = True
PIHOLEAPI_SES_SID = session_data.get('sid')
# csrf might not be present if no password set
PIHOLEAPI_SES_CSRF = session_data.get('csrf')
mylog('verbose', [f'[{pluginName}] Authenticated to Pi-hole (sid present).'])
return True
else:
mylog('none', [f'[{pluginName}] Pi-hole auth required or failed.'])
return False
# ------------------------------------------------------------------
def pihole_api_deauth():
"""Logout from Pi-hole v6 API (best-effort)."""
global PIHOLEAPI_SES_VALID, PIHOLEAPI_SES_SID, PIHOLEAPI_SES_CSRF
if not PIHOLEAPI_URL:
return
if not PIHOLEAPI_SES_SID:
return
headers = {"X-FTL-SID": PIHOLEAPI_SES_SID}
try:
requests.delete(PIHOLEAPI_URL + 'api/auth', headers=headers, verify=PIHOLEAPI_VERIFY_SSL, timeout=PIHOLEAPI_RUN_TIMEOUT)
except Exception:
# ignore errors on logout
pass
PIHOLEAPI_SES_VALID = False
PIHOLEAPI_SES_SID = None
PIHOLEAPI_SES_CSRF = None
# ------------------------------------------------------------------
def get_pihole_interface_data():
"""Return dict mapping mac -> [ipv4 addresses] from Pi-hole interfaces endpoint."""
result = {}
if not PIHOLEAPI_SES_VALID:
return result
headers = {"X-FTL-SID": PIHOLEAPI_SES_SID}
if PIHOLEAPI_SES_CSRF:
headers["X-FTL-CSRF"] = PIHOLEAPI_SES_CSRF
try:
resp = requests.get(PIHOLEAPI_URL + 'api/network/interfaces', headers=headers, verify=PIHOLEAPI_VERIFY_SSL, timeout=PIHOLEAPI_RUN_TIMEOUT)
resp.raise_for_status()
data = resp.json()
except Exception as e:
mylog('none', [f'[{pluginName}] Failed to fetch Pi-hole interfaces: {e}'])
return result
for interface in data.get('interfaces', []):
mac_address = interface.get('address')
if not mac_address or mac_address == "00:00:00:00:00:00":
continue
addrs = []
for addr in interface.get('addresses', []):
if addr.get('family') == 'inet':
a = addr.get('address')
if a:
addrs.append(a)
if addrs:
result[mac_address] = addrs
return result
# ------------------------------------------------------------------
def get_pihole_network_devices():
"""Return list of devices from Pi-hole v6 API (devices endpoint)."""
devices = []
# return empty list if no session available
if not PIHOLEAPI_SES_VALID:
return devices
# prepare headers
headers = {"X-FTL-SID": PIHOLEAPI_SES_SID}
if PIHOLEAPI_SES_CSRF:
headers["X-FTL-CSRF"] = PIHOLEAPI_SES_CSRF
params = {
'max_devices': str(PIHOLEAPI_API_MAXCLIENTS),
'max_addresses': '2'
}
try:
resp = requests.get(PIHOLEAPI_URL + 'api/network/devices', headers=headers, params=params, verify=PIHOLEAPI_VERIFY_SSL, timeout=PIHOLEAPI_RUN_TIMEOUT)
resp.raise_for_status()
data = resp.json()
mylog('debug', [f'[{pluginName}] Pi-hole API returned data: {json.dumps(data)}'])
except Exception as e:
mylog('none', [f'[{pluginName}] Failed to fetch Pi-hole devices: {e}'])
return devices
# The API returns 'devices' list
return data.get('devices', [])
# ------------------------------------------------------------------
def gather_device_entries():
"""
Build a list of device entries suitable for Plugin_Objects.add_object.
Each entry is a dict with: mac, ip, name, macVendor, lastQuery
"""
entries = []
iface_map = get_pihole_interface_data()
devices = get_pihole_network_devices()
now_ts = int(datetime.datetime.now().timestamp())
for device in devices:
hwaddr = device.get('hwaddr')
if not hwaddr or hwaddr == "00:00:00:00:00:00":
continue
macVendor = device.get('macVendor', '')
lastQuery = device.get('lastQuery')
# 'ips' is a list of dicts: {ip, name}
for ip_info in device.get('ips', []):
ip = ip_info.get('ip')
if not ip:
continue
name = ip_info.get('name') or '(unknown)'
# mark active if ip present on local interfaces
for mac, iplist in iface_map.items():
if ip in iplist:
lastQuery = str(now_ts)
entries.append({
'mac': hwaddr.lower(),
'ip': ip,
'name': name,
'macVendor': macVendor,
'lastQuery': str(lastQuery) if lastQuery is not None else ''
})
return entries
# ------------------------------------------------------------------
def main():
"""Main plugin entrypoint."""
global PIHOLEAPI_URL, PIHOLEAPI_PASSWORD, PIHOLEAPI_API_MAXCLIENTS, PIHOLEAPI_VERIFY_SSL, PIHOLEAPI_RUN_TIMEOUT
mylog('verbose', [f'[{pluginName}] start script.'])
# Load settings from NAX config
PIHOLEAPI_URL = get_setting_value('PIHOLEAPI_URL') or ''
# ensure trailing slash (guard against an unset value)
if PIHOLEAPI_URL and not PIHOLEAPI_URL.endswith('/'):
PIHOLEAPI_URL += '/'
PIHOLEAPI_PASSWORD = get_setting_value('PIHOLEAPI_PASSWORD')
PIHOLEAPI_API_MAXCLIENTS = get_setting_value('PIHOLEAPI_API_MAXCLIENTS')
# Accept boolean or string "True"/"False"
PIHOLEAPI_VERIFY_SSL = get_setting_value('PIHOLEAPI_VERIFY_SSL')  # key must match the VERIFY_SSL setting defined in the plugin config
PIHOLEAPI_RUN_TIMEOUT = get_setting_value('PIHOLEAPI_RUN_TIMEOUT')
# Authenticate
if not pihole_api_auth():
mylog('none', [f'[{pluginName}] Authentication failed — no devices imported.'])
return 1
try:
device_entries = gather_device_entries()
if not device_entries:
mylog('verbose', [f'[{pluginName}] No devices found on Pi-hole.'])
else:
for entry in device_entries:
if is_mac(entry['mac']):
# Map to Plugin_Objects fields
mylog('verbose', [f"[{pluginName}] found: {entry['name']}|{entry['mac']}|{entry['ip']}"])
plugin_objects.add_object(
primaryId=str(entry['mac']),
secondaryId=str(entry['ip']),
watched1=str(entry['name']),
watched2=str(entry['macVendor']),
watched3=str(entry['lastQuery']),
watched4="",
extra=pluginName,
foreignKey=str(entry['mac'])
)
else:
mylog('verbose', [f"[{pluginName}] Skipping invalid MAC: {entry['name']}|{entry['mac']}|{entry['ip']}"])
# Write result file for NetAlertX to ingest
plugin_objects.write_result_file()
mylog('verbose', [f'[{pluginName}] Script finished. Imported {len(device_entries)} entries.'])
finally:
# Deauth best-effort
pihole_api_deauth()
return 0
if __name__ == '__main__':
main()
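To make the API flow above easier to follow, here is a minimal standalone sketch of the same Pi-hole v6 session lifecycle (authenticate, query devices, logout) that the plugin implements; BASE and PASSWORD are placeholders:

import requests

BASE = 'http://pi.hole:8080/'   # placeholder, same shape as PIHOLEAPI_URL
PASSWORD = 'changeme'           # placeholder

# 1. Authenticate: the v6 API returns a session object with sid (and optionally csrf)
resp = requests.post(BASE + 'api/auth', json={'password': PASSWORD}, timeout=10)
resp.raise_for_status()
session = resp.json().get('session', {})
assert session.get('valid'), 'Pi-hole auth failed'
headers = {'X-FTL-SID': session['sid']}
if session.get('csrf'):
    headers['X-FTL-CSRF'] = session['csrf']

# 2. Query known network devices with the same parameters the plugin sends
devices = requests.get(BASE + 'api/network/devices', headers=headers,
                       params={'max_devices': '500', 'max_addresses': '2'},
                       timeout=10).json().get('devices', [])
for dev in devices:
    print(dev.get('hwaddr'), [i.get('ip') for i in dev.get('ips', [])])

# 3. Best-effort logout, mirroring pihole_api_deauth()
requests.delete(BASE + 'api/auth', headers=headers, timeout=10)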

View File

@@ -5,18 +5,18 @@ import os
import re
import base64
import json
from datetime import datetime
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.append(f"{INSTALL_PATH}/front/plugins")
sys.path.append(f'{INSTALL_PATH}/server')
from logger import mylog, Logger
from utils.datetime_utils import timeNowDB
from const import default_tz, fullConfPath
from logger import mylog # noqa: E402 [flake8 lint suppression]
from utils.datetime_utils import timeNowDB # noqa: E402 [flake8 lint suppression]
from const import default_tz, fullConfPath # noqa: E402 [flake8 lint suppression]
#-------------------------------------------------------------------------------
# -------------------------------------------------------------------------------
def read_config_file():
"""
returns a dict of the config file's key:value pairs
@@ -25,13 +25,13 @@ def read_config_file():
filename = fullConfPath
print('[plugin_helper] reading config file')
# load the variables from .conf
with open(filename, "r") as file:
code = compile(file.read(), filename, "exec")
confDict = {} # config dictionary
confDict = {} # config dictionary
exec(code, {"__builtins__": {}}, confDict)
return confDict
@@ -42,6 +42,7 @@ if timeZoneSetting not in all_timezones:
timeZoneSetting = default_tz
timeZone = pytz.timezone(timeZoneSetting)
# -------------------------------------------------------------------
# Sanitizes plugin output
def handleEmpty(input):
@@ -55,6 +56,7 @@ def handleEmpty(input):
input = input.replace('\n', '') # Removing new lines
return input
# -------------------------------------------------------------------
# Sanitizes string
def rmBadChars(input):
@@ -64,23 +66,25 @@ def rmBadChars(input):
return input
# -------------------------------------------------------------------
# check if this is a router IP
def is_typical_router_ip(ip_address):
# List of common default gateway IP addresses
common_router_ips = [
"192.168.0.1", "192.168.1.1", "192.168.1.254", "192.168.0.254",
"10.0.0.1", "10.1.1.1", "192.168.2.1", "192.168.10.1", "192.168.11.1",
"192.168.100.1", "192.168.101.1", "192.168.123.254", "192.168.223.1",
"192.168.31.1", "192.168.8.1", "192.168.254.254", "192.168.50.1",
"192.168.3.1", "192.168.4.1", "192.168.5.1", "192.168.9.1",
"192.168.15.1", "192.168.16.1", "192.168.20.1", "192.168.30.1",
"192.168.42.1", "192.168.62.1", "192.168.178.1", "192.168.1.1",
"192.168.1.254", "192.168.0.1", "192.168.0.10", "192.168.0.100",
"192.168.0.254"
]
# List of common default gateway IP addresses
common_router_ips = [
"192.168.0.1", "192.168.1.1", "192.168.1.254", "192.168.0.254",
"10.0.0.1", "10.1.1.1", "192.168.2.1", "192.168.10.1", "192.168.11.1",
"192.168.100.1", "192.168.101.1", "192.168.123.254", "192.168.223.1",
"192.168.31.1", "192.168.8.1", "192.168.254.254", "192.168.50.1",
"192.168.3.1", "192.168.4.1", "192.168.5.1", "192.168.9.1",
"192.168.15.1", "192.168.16.1", "192.168.20.1", "192.168.30.1",
"192.168.42.1", "192.168.62.1", "192.168.178.1", "192.168.1.1",
"192.168.1.254", "192.168.0.1", "192.168.0.10", "192.168.0.100",
"192.168.0.254"
]
return ip_address in common_router_ips
return ip_address in common_router_ips
# -------------------------------------------------------------------
# Check if a valid MAC address
@@ -94,6 +98,7 @@ def is_mac(input):
return isMac
# -------------------------------------------------------------------
def decodeBase64(inputParamBase64):
@@ -102,14 +107,11 @@ def decodeBase64(inputParamBase64):
print('[Plugins] Helper base64 input: ')
print(input)
# Extract the base64-encoded subnet information from the first element
# The format of the element is assumed to be like 'param=b<base64-encoded-data>'.
# Printing the extracted base64-encoded information.
mylog('debug', ['[Plugins] Helper base64 inputParamBase64: ', inputParamBase64])
# Decode the base64-encoded subnet information to get the actual subnet information in ASCII format.
result = base64.b64decode(inputParamBase64).decode('ascii')
@@ -118,6 +120,7 @@ def decodeBase64(inputParamBase64):
return result
# -------------------------------------------------------------------
def decode_settings_base64(encoded_str, convert_types=True):
"""
@@ -180,6 +183,7 @@ def normalize_mac(mac):
return normalized_mac
# -------------------------------------------------------------------
class Plugin_Object:
"""
@@ -243,6 +247,7 @@ class Plugin_Object:
)
return line
class Plugin_Objects:
"""
Plugin_Objects is the class that manages and holds all the objects created by the plugin.
@@ -303,7 +308,3 @@ class Plugin_Objects:
def __len__(self):
return len(self.objects)

View File

@@ -10,12 +10,12 @@ import sys
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects, handleEmpty, normalize_mac
from logger import mylog, Logger
from helper import get_setting_value
from const import logPath
import conf
from pytz import timezone
from plugin_helper import Plugin_Objects, handleEmpty, normalize_mac # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -28,7 +28,6 @@ pluginName = "SNMPDSC"
LOG_PATH = logPath + '/plugins'
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
# Workflow
def main():
mylog('verbose', ['[SNMPDSC] In script '])
@@ -36,9 +35,13 @@ def main():
# init global variables
global snmpWalkCmds
parser = argparse.ArgumentParser(description='This plugin is used to discover devices via the ARP table(s) of an RFC1213-compliant router or switch.')
parser.add_argument('routers', action="store", help="IP(s) of routers, separated by comma (,) if passing multiple")
parser.add_argument(
'routers',
action="store",
help="IP(s) of routers, separated by comma (,) if passing multiple"
)
values = parser.parse_args()
timeoutSetting = get_setting_value("SNMPDSC_RUN_TIMEOUT")
@@ -46,8 +49,7 @@ def main():
plugin_objects = Plugin_Objects(RESULT_FILE)
if values.routers:
snmpWalkCmds = values.routers.split('=')[1].replace('\'','')
snmpWalkCmds = values.routers.split('=')[1].replace('\'', '')
if ',' in snmpWalkCmds:
commands = snmpWalkCmds.split(',')
@@ -63,7 +65,12 @@ def main():
probes = 1 # N probes
for _ in range(probes):
output = subprocess.check_output (snmpwalkArgs, universal_newlines=True, stderr=subprocess.STDOUT, timeout=(timeoutSetting))
output = subprocess.check_output(
snmpwalkArgs,
universal_newlines=True,
stderr=subprocess.STDOUT,
timeout=(timeoutSetting)
)
mylog('verbose', ['[SNMPDSC] output: ', output])
@@ -86,7 +93,7 @@ def main():
plugin_objects.add_object(
primaryId = handleEmpty(macAddress),
secondaryId = handleEmpty(ipAddress.strip()), # Remove leading/trailing spaces from IP
secondaryId = handleEmpty(ipAddress.strip()), # Remove leading/trailing spaces from IP
watched1 = '(unknown)',
watched2 = handleEmpty(snmpwalkArgs[6]), # router IP
extra = handleEmpty(line),
@@ -95,7 +102,6 @@ def main():
else:
mylog('verbose', ['[SNMPDSC] ipStr does not seem to contain a valid IP:', ipStr])
elif line.startswith('ipNetToMediaPhysAddress'):
# Format: snmpwalk -OXsq output
parts = line.split()
@@ -120,7 +126,6 @@ def main():
plugin_objects.write_result_file()
# BEGIN
if __name__ == '__main__':
main()
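For clarity, the routers argument above arrives as a single key=value pair whose value is a quoted, comma-separated list of snmpwalk commands. A small illustrative sketch of the same parsing (the sample command and its argument layout are assumptions):

# Same steps as the plugin: drop the 'key=' prefix, strip single quotes, split on commas.
raw = "routers='snmpwalk -v 2c -OXsq -c public 192.168.1.1 .1.3.6.1.2.1.3.1.1.2'"  # placeholder
cmds = raw.split('=')[1].replace("'", '')
commands = cmds.split(',') if ',' in cmds else [cmds]
for command in commands:
    snmpwalkArgs = command.split()   # argv list later passed to subprocess.check_output
    print(snmpwalkArgs[6])           # with this layout, index 6 is the router IP (see the watched2 comment above)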

View File

@@ -12,16 +12,16 @@ import base64
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects
from utils.plugin_utils import get_plugins_configs, decode_and_rename_files
from logger import mylog, Logger
from const import fullDbPath, logPath
from helper import get_setting_value
from utils.datetime_utils import timeNowDB
from utils.crypto_utils import encrypt_data
from messaging.in_app import write_notification
import conf
from pytz import timezone
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from utils.plugin_utils import get_plugins_configs, decode_and_rename_files # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import fullDbPath, logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from utils.datetime_utils import timeNowDB # noqa: E402 [flake8 lint suppression]
from utils.crypto_utils import encrypt_data # noqa: E402 [flake8 lint suppression]
from messaging.in_app import write_notification # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -102,7 +102,6 @@ def main():
else:
mylog('verbose', [f'[{pluginName}] {file_path} not found'])
# PUSHING/SENDING devices
if send_devices:
@@ -150,7 +149,6 @@ def main():
if lggr.isAbove('verbose'):
write_notification(message, 'info', timeNowDB())
# Process any received data for the Device DB table (ONLY JSON)
# Create the file path
@@ -210,7 +208,6 @@ def main():
cursor.execute(f'SELECT devMac FROM Devices WHERE devMac IN ({placeholders})', tuple(unique_mac_addresses))
existing_mac_addresses = set(row[0] for row in cursor.fetchall())
# insert devices into the last_result.log and thus CurrentScan table to manage state
for device in device_data:
# only insert devices that were online and skip the root node to prevent IP flipping on the hub
@@ -258,7 +255,6 @@ def main():
mylog('verbose', [message])
write_notification(message, 'info', timeNowDB())
# Commit and close the connection
conn.commit()
conn.close()
@@ -268,6 +264,7 @@ def main():
return 0
# ------------------------------------------------------------------
# Data retrieval methods
api_endpoints = [
@@ -275,6 +272,7 @@ api_endpoints = [
"/plugins/sync/hub.php" # Legacy PHP endpoint
]
# send data to the HUB
def send_data(api_token, file_content, encryption_key, file_path, node_name, pref, hub_url):
"""Send encrypted data to HUB, preferring /sync endpoint and falling back to PHP version."""
@@ -345,6 +343,5 @@ def get_data(api_token, node_url):
return ""
if __name__ == '__main__':
main()
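The fallback behavior described in send_data()'s docstring boils down to trying each API path in order until one accepts the payload. A minimal sketch of that pattern (hub_url, token, and payload are placeholders, and the auth header shape is illustrative, not necessarily the plugin's actual scheme):

import requests

api_endpoints = ['/sync', '/plugins/sync/hub.php']   # preferred endpoint first, legacy PHP last

def post_with_fallback(hub_url, token, payload, timeout=10):
    # Try each endpoint in order; return the first successful response.
    last_error = None
    for path in api_endpoints:
        try:
            resp = requests.post(hub_url.rstrip('/') + path,
                                 headers={'Authorization': 'Bearer ' + token},  # illustrative auth header
                                 json=payload, timeout=timeout)
            resp.raise_for_status()
            return resp                 # first endpoint that accepts the payload wins
        except requests.RequestException as err:
            last_error = err            # fall through to the next endpoint
    raise last_error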

View File

@@ -10,12 +10,11 @@ from unifi_sm_api.api import SiteManagerAPI
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects, decode_settings_base64
from logger import mylog, Logger
from const import logPath
from helper import get_setting_value
import conf
from plugin_helper import Plugin_Objects, decode_settings_base64 # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -50,11 +49,11 @@ def main():
mylog('none', [f'[{pluginName}] Connecting to: {siteDict["UNIFIAPI_site_name"]}'])
api = SiteManagerAPI(
api_key=siteDict["UNIFIAPI_api_key"],
version=siteDict["UNIFIAPI_api_version"],
base_url=siteDict["UNIFIAPI_base_url"],
verify_ssl=siteDict["UNIFIAPI_verify_ssl"]
)
api_key=siteDict["UNIFIAPI_api_key"],
version=siteDict["UNIFIAPI_api_version"],
base_url=siteDict["UNIFIAPI_base_url"],
verify_ssl=siteDict["UNIFIAPI_verify_ssl"]
)
sites_resp = api.get_sites()
sites = sites_resp.get("data", [])
@@ -69,16 +68,16 @@ def main():
# insert devices into the last_result.log
for device in device_data:
plugin_objects.add_object(
primaryId = device['dev_mac'], # mac
secondaryId = device['dev_ip'], # IP
watched1 = device['dev_name'], # name
watched2 = device['dev_type'], # device_type (AP/Switch etc)
watched3 = device['dev_connected'], # connectedAt or empty
watched4 = device['dev_parent_mac'],# parent_mac or "Internet"
extra = '',
foreignKey = device['dev_mac']
)
plugin_objects.add_object(
primaryId = device['dev_mac'], # mac
secondaryId = device['dev_ip'], # IP
watched1 = device['dev_name'], # name
watched2 = device['dev_type'], # device_type (AP/Switch etc)
watched3 = device['dev_connected'], # connectedAt or empty
watched4 = device['dev_parent_mac'], # parent_mac or "Internet"
extra = '',
foreignKey = device['dev_mac']
)
mylog('verbose', [f'[{pluginName}] New entries: "{len(device_data)}"'])
@@ -87,6 +86,7 @@ def main():
return 0
# retrieve data
def get_device_data(site, api):
device_data = []
@@ -146,8 +146,8 @@ def get_device_data(site, api):
dev_parent_mac = resolve_parent_mac(uplinkDeviceId)
device_data.append({
"dev_mac": dev_mac,
"dev_ip": dev_ip,
"dev_mac": dev_mac,
"dev_ip": dev_ip,
"dev_name": dev_name,
"dev_type": dev_type,
"dev_connected": dev_connected,

View File

@@ -14,12 +14,12 @@ from pyunifi.controller import Controller
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects, rmBadChars, is_typical_router_ip, is_mac
from logger import mylog, Logger
from helper import get_setting_value, normalize_string
import conf
from pytz import timezone
from const import logPath
from plugin_helper import Plugin_Objects, rmBadChars, is_typical_router_ip, is_mac # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value, normalize_string # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -37,15 +37,10 @@ LOCK_FILE = os.path.join(LOG_PATH, f'full_run.{pluginName}.lock')
urllib3.disable_warnings(InsecureRequestWarning)
# Workflow
def main():
mylog('verbose', [f'[{pluginName}] In script'])
# init global variables
global UNIFI_USERNAME, UNIFI_PASSWORD, UNIFI_HOST, UNIFI_SITES, PORT, VERIFYSSL, VERSION, FULL_IMPORT
@@ -65,11 +60,10 @@ def main():
plugin_objects.write_result_file()
mylog('verbose', [f'[{pluginName}] Scan finished, found {len(plugin_objects)} devices'])
# .............................................
# .............................................
def get_entries(plugin_objects: Plugin_Objects) -> Plugin_Objects:
global VERIFYSSL
@@ -79,7 +73,6 @@ def get_entries(plugin_objects: Plugin_Objects) -> Plugin_Objects:
mylog('verbose', [f'[{pluginName}] sites: {UNIFI_SITES}'])
if (VERIFYSSL.upper() == "TRUE"):
VERIFYSSL = True
else:
@@ -114,7 +107,7 @@ def get_entries(plugin_objects: Plugin_Objects) -> Plugin_Objects:
plugin_objects=plugin_objects,
device_label='client',
device_vendor="",
force_import=True # These are online clients, force import
force_import=True # These are online clients, force import
)
mylog('verbose', [f'[{pluginName}] Found {len(plugin_objects)} Online Devices'])
@@ -154,11 +147,9 @@ def get_entries(plugin_objects: Plugin_Objects) -> Plugin_Objects:
mylog('verbose', [f'[{pluginName}] Found {len(plugin_objects)} Users'])
mylog('verbose', [f'[{pluginName}] check if Lock file needs to be modified'])
set_lock_file_value(FULL_IMPORT, lock_file_value)
mylog('verbose', [f'[{pluginName}] Found {len(plugin_objects)} Clients overall'])
return plugin_objects
@@ -185,7 +176,7 @@ def collect_details(device_type, devices, online_macs, processed_macs, plugin_ob
parentMac = 'Internet'
# Add object only if not processed
if macTmp not in processed_macs and ( status == 1 or force_import is True ):
if macTmp not in processed_macs and (status == 1 or force_import is True):
plugin_objects.add_object(
primaryId=macTmp,
secondaryId=ipTmp,
@@ -204,6 +195,7 @@ def collect_details(device_type, devices, online_macs, processed_macs, plugin_ob
else:
mylog('verbose', [f'[{pluginName}] Skipping, not a valid MAC address: {macTmp}'])
# -----------------------------------------------------------------------------
def get_unifi_val(obj, key, default='null'):
if isinstance(obj, dict):
@@ -212,7 +204,7 @@ def get_unifi_val(obj, key, default='null'):
for k, v in obj.items():
if isinstance(v, dict):
result = get_unifi_val(v, key, default)
if result not in ['','None', None, 'null']:
if result not in ['', 'None', None, 'null']:
return result
mylog('trace', [f'[{pluginName}] Value not found for key "{key}" in obj "{json.dumps(obj)}"'])
@@ -226,6 +218,7 @@ def get_name(*names: str) -> str:
return rmBadChars(name)
return 'null'
# -----------------------------------------------------------------------------
def get_parent_mac(*macs: str) -> str:
for mac in macs:
@@ -233,6 +226,7 @@ def get_parent_mac(*macs: str) -> str:
return mac
return 'null'
# -----------------------------------------------------------------------------
def get_port(*ports: str) -> str:
for port in ports:
@@ -240,12 +234,6 @@ def get_port(*ports: str) -> str:
return port
return 'null'
# -----------------------------------------------------------------------------
def get_port(*macs: str) -> str:
for mac in macs:
if mac and mac != 'null':
return mac
return 'null'
# -----------------------------------------------------------------------------
def get_ip(*ips: str) -> str:
@@ -271,7 +259,7 @@ def set_lock_file_value(config_value: str, lock_file_value: bool) -> None:
mylog('verbose', [f'[{pluginName}] Setting lock value for "full import" to {out}'])
with open(LOCK_FILE, 'w') as lock_file:
lock_file.write(str(out))
lock_file.write(str(out))
# -----------------------------------------------------------------------------
@@ -286,15 +274,16 @@ def read_lock_file() -> bool:
# -----------------------------------------------------------------------------
def check_full_run_state(config_value: str, lock_file_value: bool) -> bool:
if config_value == 'always' or (config_value == 'once' and lock_file_value == False):
if config_value == 'always' or (config_value == 'once' and lock_file_value is False):
mylog('verbose', [f'[{pluginName}] Full import needs to be done: config_value: {config_value} and lock_file_value: {lock_file_value}'])
return True
else:
mylog('verbose', [f'[{pluginName}] Full import NOT needed: config_value: {config_value} and lock_file_value: {lock_file_value}'])
return False
#===============================================================================
# ===============================================================================
# BEGIN
#===============================================================================
# ===============================================================================
if __name__ == '__main__':
main()
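The full-import gating above reduces to a small truth table; a self-contained sketch replicating check_full_run_state() and printing its decisions ('other' stands in for any non-triggering config value):

def check_full_run_state(config_value: str, lock_file_value: bool) -> bool:
    # 'always' -> full import on every run; 'once' -> only until the lock file records a completed run
    return config_value == 'always' or (config_value == 'once' and lock_file_value is False)

for cfg in ('always', 'once', 'other'):
    for locked in (False, True):
        print(cfg, locked, '->', check_full_run_state(cfg, locked))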

View File

@@ -9,13 +9,13 @@ import sqlite3
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects, handleEmpty
from logger import mylog, Logger
from helper import get_setting_value
from const import logPath, applicationPath, fullDbPath
from scan.device_handling import query_MAC_vendor
import conf
from pytz import timezone
from plugin_helper import Plugin_Objects, handleEmpty # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath, applicationPath, fullDbPath # noqa: E402 [flake8 lint suppression]
from scan.device_handling import query_MAC_vendor # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -25,11 +25,11 @@ Logger(get_setting_value('LOG_LEVEL'))
pluginName = 'VNDRPDT'
LOG_PATH = logPath + '/plugins'
LOG_FILE = os.path.join(LOG_PATH, f'script.{pluginName}.log')
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
def main():
mylog('verbose', ['[VNDRPDT] In script'])
@@ -48,9 +48,10 @@ def main():
return 0
#===============================================================================
# ===============================================================================
# Update device vendors database
#===============================================================================
# ===============================================================================
def update_vendor_database():
# Update vendors DB (iab oui)
@@ -60,30 +61,29 @@ def update_vendor_database():
# Execute command
try:
# try running a subprocess safely
update_output = subprocess.check_output (update_args)
subprocess.check_output(update_args)
except subprocess.CalledProcessError as e:
# An error occurred, handle it
mylog('verbose', [' FAILED: Updating vendors DB, set LOG_LEVEL=debug for more info'])
mylog('verbose', [e.output])
# ------------------------------------------------------------------------------
# resolve missing vendors
def update_vendors (dbPath, plugin_objects):
def update_vendors(dbPath, plugin_objects):
# Connect to the App SQLite database
conn = sqlite3.connect(dbPath)
sql = conn.cursor()
# Initialize variables
recordsToUpdate = []
ignored = 0
notFound = 0
mylog('verbose', [' Searching devices vendor'])
# Get devices without a vendor
sql.execute ("""SELECT
sql.execute("""SELECT
devMac,
devLastIP,
devName,
@@ -103,7 +103,7 @@ def update_vendors (dbPath, plugin_objects):
# All devices loop
for device in devices:
# Search vendor in HW Vendors DB
vendor = query_MAC_vendor (device[0])
vendor = query_MAC_vendor(device[0])
if vendor == -1 :
notFound += 1
elif vendor == -2 :
@@ -124,15 +124,13 @@ def update_vendors (dbPath, plugin_objects):
mylog('verbose', [" Devices Ignored : ", ignored])
mylog('verbose', [" Devices with missing vendor : ", len(devices)])
mylog('verbose', [" Vendors Not Found : ", notFound])
mylog('verbose', [" Vendors updated : ", len(plugin_objects) ])
mylog('verbose', [" Vendors updated : ", len(plugin_objects)])
return plugin_objects
#===============================================================================
# ===============================================================================
# BEGIN
#===============================================================================
# ===============================================================================
if __name__ == '__main__':
main()

View File

@@ -9,13 +9,13 @@ from wakeonlan import send_magic_packet
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
from const import logPath
from helper import get_setting_value
from database import DB
from models.device_instance import DeviceInstance
import conf
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -34,7 +34,6 @@ RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
plugin_objects = Plugin_Objects(RESULT_FILE)
def main():
mylog('none', [f'[{pluginName}] In script'])
@@ -95,6 +94,7 @@ def main():
return 0
# wake
def execute(port, ip, mac, name):
@@ -113,5 +113,6 @@ def execute(port, ip, mac, name):
# Return the data result
return result
if __name__ == '__main__':
main()

View File

@@ -12,12 +12,12 @@ from urllib3.exceptions import InsecureRequestWarning
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Objects
from const import logPath
from helper import get_setting_value
import conf
from pytz import timezone
from logger import mylog, Logger
from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -30,16 +30,15 @@ pluginName = 'WEBMON'
LOG_PATH = logPath + '/plugins'
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
mylog('verbose', [f'[{pluginName}] In script'])
def main():
values = get_setting_value('WEBMON_urls_to_check')
mylog('verbose', [f'[{pluginName}] Checking URLs: {values}'])
if len(values) > 0:
plugin_objects = Plugin_Objects(RESULT_FILE)
# plugin_objects = service_monitoring(values.urls.split('=')[1].split(','), plugin_objects)
@@ -48,6 +47,7 @@ def main():
else:
return
def check_services_health(site):
mylog('verbose', [f'[{pluginName}] Checking {site}'])
@@ -79,6 +79,7 @@ def check_services_health(site):
return status, latency
def service_monitoring(urls, plugin_objects):
for site in urls:
status, latency = check_services_health(site)
@@ -94,7 +95,6 @@ def service_monitoring(urls, plugin_objects):
)
return plugin_objects
if __name__ == '__main__':
sys.exit(main())
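check_services_health() is mostly elided in the hunk above; for orientation, here is a minimal sketch of a compatible implementation returning the same (status, latency) pair, assuming requests is available (names and the 0-on-failure convention are illustrative):

import time
import requests

def check_services_health(site, timeout=10):
    # Return (HTTP status code, latency in seconds); 0 signals an unreachable site.
    start = time.time()
    try:
        status = requests.get(site, timeout=timeout).status_code
    except requests.RequestException:
        status = 0
    return status, round(time.time() - start, 3)

print(check_services_health('https://example.com'))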

View File

@@ -44,3 +44,4 @@ More Info:
Report Date: 2021-12-08 12:30
Server: Synology-NAS
Link: netalertx.com

View File

@@ -1,12 +1,3 @@
<!--
#---------------------------------------------------------------------------------#
# NetAlertX #
# Open Source Network Guard / WIFI & LAN intrusion detector #
# #
# report_template.html - Back module. Template to email reporting in HTML format #
#---------------------------------------------------------------------------------#
-->
<html>
<head></head>
<body>
@@ -20,11 +11,11 @@
</tr>
<tr>
<td height=200 valign=top style="padding: 10px">
<NEW_DEVICES_TABLE>
<DOWN_DEVICES_TABLE>
<DOWN_RECONNECTED_TABLE>
<EVENTS_TABLE>
<PLUGINS_TABLE>
NEW_DEVICES_TABLE
DOWN_DEVICES_TABLE
DOWN_RECONNECTED_TABLE
EVENTS_TABLE
PLUGINS_TABLE
</td>
</tr>
@@ -34,11 +25,11 @@
<table width=100% bgcolor=#3c8dbc cellpadding=5px cellspacing=0 style="font-size: 10px; border-bottom-left-radius: 5px; border-bottom-right-radius: 5px;">
<tr>
<td width=50% style="text-align:center;color: white;" bgcolor="#3c8dbc">
<NEW_VERSION>
| Sent: <REPORT_DATE>
| Server: <SERVER_NAME>
| Built: <BUILD_DATE>
| Version: <BUILD_VERSION>
NEW_VERSION
| Sent: REPORT_DATE
| Server: <a href="REPORT_DASHBOARD_URL" target="_blank" style="color:#ffffff;">SERVER_NAME</a>
| Built: BUILD_DATE
| Version: BUILD_VERSION
</td>
</tr>
</table>

View File

@@ -1,9 +1,10 @@
<NEW_DEVICES_TABLE>
<DOWN_DEVICES_TABLE>
<DOWN_RECONNECTED_TABLE>
<EVENTS_TABLE>
<PLUGINS_TABLE>
NEW_DEVICES_TABLE
DOWN_DEVICES_TABLE
DOWN_RECONNECTED_TABLE
EVENTS_TABLE
PLUGINS_TABLE
Report Date: <REPORT_DATE>
Server: <SERVER_NAME>
<NEW_VERSION>
Report Date: REPORT_DATE
Server: SERVER_NAME
Link: REPORT_DASHBOARD_URL
NEW_VERSION

View File

@@ -24,7 +24,7 @@ apt-get install sudo -y
apt-get install -y git
# Clean the directory
rm -R $INSTALL_DIR/
rm -R ${INSTALL_DIR:?}/
# Clone the application repository
git clone https://github.com/jokob-sk/NetAlertX "$INSTALL_DIR/"

View File

@@ -34,6 +34,8 @@ sudo phpenmod -v 8.2 sqlite3
# setup virtual python environment so we can use pip3 to install packages
apt-get install python3-venv -y
python3 -m venv /opt/venv
# Shell check doesn't recognize source command because it's not in the repo, it is in the system at runtime
# shellcheck disable=SC1091
source /opt/venv/bin/activate
update-alternatives --install /usr/bin/python python /usr/bin/python3 10

View File

@@ -175,6 +175,8 @@ nginx -t || { echo "[INSTALL] nginx config test failed"; exit 1; }
# sudo systemctl restart nginx
# Activate the virtual python environment
# Shell check doesn't recognize source command because it's not in the repo, it is in the system at runtime
# shellcheck disable=SC1091
source /opt/venv/bin/activate
echo "[INSTALL] 🚀 Starting app - navigate to your <server IP>:${PORT}"

View File

@@ -0,0 +1,5 @@
#!/bin/bash
echo "Initializing cron..."
# Placeholder for cron initialization commands
echo "cron initialized."

View File

@@ -1,4 +0,0 @@
#!/bin/bash
echo "Initializing crond..."
#Future crond initializations can go here.
echo "crond initialized."

View File

@@ -1,4 +1,4 @@
#!/bin/bash
echo "Initializing nginx..."
install -d -o netalertx -g netalertx -m 700 ${SYSTEM_SERVICES_RUN_TMP}/client_body;
install -d -o netalertx -g netalertx -m 700 "${SYSTEM_SERVICES_RUN_TMP}/client_body";
echo "nginx initialized."

View File

@@ -1,7 +1,4 @@
#!/bin/bash
echo "Initializing php-fpm..."
# Set up PHP-FPM directories and socket configuration
install -d -o netalertx -g netalertx /services/config/run
echo "php-fpm initialized."

View File

@@ -51,12 +51,13 @@ if [ "$(id -u)" -eq 0 ]; then
EOF
>&2 printf "%s" "${RESET}"
# Set ownership to netalertx user for all read-write paths
chown -R netalertx ${READ_WRITE_PATHS} 2>/dev/null || true
# Set directory and file permissions for all read-write paths
find ${READ_WRITE_PATHS} -type d -exec chmod u+rwx {} \;
find ${READ_WRITE_PATHS} -type f -exec chmod u+rw {} \;
# Set ownership and permissions for each read-write path individually
printf '%s\n' "${READ_WRITE_PATHS}" | while IFS= read -r path; do
[ -n "${path}" ] || continue
chown -R netalertx "${path}" 2>/dev/null || true
find "${path}" -type d -exec chmod u+rwx {} \;
find "${path}" -type f -exec chmod u+rw {} \;
done
echo Permissions fixed for read-write paths. Please restart the container as user 20211.
sleep infinity & wait $!
fi

View File

@@ -16,11 +16,11 @@ LEGACY_DB=/app/db
MARKER_NAME=.migration
is_mounted() {
local path="$1"
if [ ! -d "${path}" ]; then
my_path="$1"
if [ ! -d "${my_path}" ]; then
return 1
fi
mountpoint -q "${path}" 2>/dev/null
mountpoint -q "${my_path}" 2>/dev/null
}
warn_unmount_legacy() {

View File

@@ -2,7 +2,7 @@
# first-run-check.sh - Checks and initializes configuration files on first run
# Check for app.conf and deploy if required
if [ ! -f ${NETALERTX_CONFIG}/app.conf ]; then
if [ ! -f "${NETALERTX_CONFIG}/app.conf" ]; then
mkdir -p "${NETALERTX_CONFIG}" || {
>&2 echo "ERROR: Failed to create config directory ${NETALERTX_CONFIG}"
exit 1

Some files were not shown because too many files have changed in this diff.