Compare commits

...

32 Commits

Author SHA1 Message Date
Jokob @NetAlertX
5d1c63375b MCP refactor
Signed-off-by: GitHub <noreply@github.com>
2025-12-07 08:37:55 +00:00
Jokob @NetAlertX
8c982cd476 MCP refactor
Signed-off-by: GitHub <noreply@github.com>
2025-12-07 08:20:51 +00:00
Jokob @NetAlertX
36e5751221 Merge branch 'main' into fix-pr-1309 2025-12-01 09:34:59 +00:00
mid
5af760f5ee Translated using Weblate (Japanese)
Currently translated at 100.0% (763 of 763 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/ja/
2025-12-01 10:00:26 +01:00
Jokob @NetAlertX
dfd836527e api endpoints updates 2025-12-01 08:52:50 +00:00
Jokob @NetAlertX
8d5a663817 DevInstance and PluginObjectInstance expansion 2025-12-01 08:27:14 +00:00
jokob-sk
fbb4a2f8b4 BE: added /auth endpoint
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-01 09:24:44 +11:00
jokob-sk
54bce6505b PLG: SNMPDSC Fortinet support #1324
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-01 09:11:23 +11:00
jokob-sk
6da47cc830 DOCS: migration docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-01 08:32:22 +11:00
jokob-sk
9cabbf3622 Merge branch 'main' of https://github.com/jokob-sk/NetAlertX 2025-12-01 08:03:28 +11:00
jokob-sk
6c28a08bee FE: YYYY-DD-MM timestamp handling #1312
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-01 08:03:13 +11:00
Sylvain Pichon
86e3decd4e Translated using Weblate (French)
Currently translated at 100.0% (763 of 763 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/fr/
2025-11-30 08:01:30 +00:00
Safeguard
e14e0bb9e8 Translated using Weblate (Russian)
Currently translated at 100.0% (763 of 763 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/ru/
2025-11-30 08:01:28 +00:00
mid
b6023d1373 Translated using Weblate (Japanese)
Currently translated at 88.8% (678 of 763 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/ja/
2025-11-30 08:01:24 +00:00
Максим Горпиніч
1812cc8ef8 Translated using Weblate (Ukrainian)
Currently translated at 100.0% (763 of 763 strings)

Translation: NetAlertX/core
Translate-URL: https://hosted.weblate.org/projects/pialert/core/uk/
2025-11-30 08:00:21 +00:00
Adam Outler
e64c490c8a Help ARM runners on github with rust and cargo required by pip 2025-11-30 01:04:12 +00:00
jokob-sk
5df39f984a BE: docker version github action work #1320
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-30 12:00:18 +11:00
jokob-sk
d007ed711a BE: docker version github action work #1320
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-30 11:58:11 +11:00
jokob-sk
61824abb9f BE: restore previous version retrieval as a test #1320
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-30 11:21:24 +11:00
jokob-sk
33c5548fe1 Merge branch 'main' of https://github.com/jokob-sk/NetAlertX 2025-11-30 11:15:25 +11:00
jokob-sk
fd41c395ae DOCS: old link removal
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-30 11:15:19 +11:00
jokob-sk
1a980844f0 BE: restore previous verison retrieval as a test #1320
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-30 11:14:45 +11:00
jokob-sk
82e018e284 FE: more defensive network topology hierarchy check #1308
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-30 10:55:08 +11:00
jokob-sk
e0e1233b1c DOCS: migration docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-30 10:27:33 +11:00
jokob-sk
74677f940e FE: more defensive network topology hierarchy check #1308
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-30 10:27:23 +11:00
Jokob @NetAlertX
21a4d20579 Merge pull request #1317 from mmomjian/main
Fix typo in warning message for read-only mode
2025-11-29 23:17:43 +00:00
jokob-sk
9634e4e0f7 FE: YYYY-DD-MM timestamp handling #1312
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-11-30 09:36:56 +11:00
Matthew Momjian
59b417705e Fix typo in warning message for read-only mode 2025-11-29 11:02:42 -05:00
Adam Outler
531b66effe Coderabit changes 2025-11-29 02:44:55 +00:00
Adam Outler
5e4ad10fe0 Tidy up 2025-11-28 21:13:20 +00:00
Adam Outler
541b932b6d Add MCP to existing OpenAPI 2025-11-28 14:12:06 -05:00
Adam Outler
2bf3ff9f00 Add MCP server 2025-11-28 17:03:18 +00:00
36 changed files with 2618 additions and 1205 deletions

View File

@@ -25,7 +25,7 @@
     // even within this container and connect to them as needed.
     // "--network=host",
   ],
   "mounts": [
     "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind" //used for testing various conditions in docker
   ],
   // ATTENTION: If running with --network=host, COMMENT `forwardPorts` OR ELSE THERE WILL BE NO WEBUI!
@@ -88,7 +88,7 @@
     }
   },
   "terminal.integrated.defaultProfile.linux": "zsh",
   // Python testing configuration
   "python.testing.pytestEnabled": true,
   "python.testing.unittestEnabled": false,

View File

@@ -39,6 +39,7 @@ Backend loop phases (see `server/__main__.py` and `server/plugin.py`): `once`, `
 ## API/Endpoints quick map
 - Flask app: `server/api_server/api_server_start.py` exposes routes like `/device/<mac>`, `/devices`, `/devices/export/{csv,json}`, `/devices/import`, `/devices/totals`, `/devices/by-status`, plus `nettools`, `events`, `sessions`, `dbquery`, `metrics`, `sync`.
 - Authorization: all routes expect header `Authorization: Bearer <API_TOKEN>` via `get_setting_value('API_TOKEN')`.
+- All responses need to return `"success":<False:True>`, and if `False` an "error" message needs to be returned, e.g. `{"success": False, "error": f"No stored open ports for Device"}`
 ## Conventions & helpers to reuse
 - Settings: add/modify via `ccd()` in `server/initialise.py` or per-plugin manifest. Never hardcode ports or secrets; use `get_setting_value()`.
@@ -85,7 +86,7 @@ Backend loop phases (see `server/__main__.py` and `server/plugin.py`): `once`, `
 - Above all, use the simplest possible code that meets the need so it can be easily audited and maintained.
 - Always leave logging enabled. If there is a possibility it will be difficult to debug with current logging, add more logging.
 - Always run the testFailure tool before executing any tests to gather current failure information and avoid redundant runs.
 - Always prioritize using the appropriate tools in the environment first. As an example, if a test is failing use `testFailure` then `runTests`. Never `runTests` first.
 - Docker tests take an extremely long time to run. Avoid changes to docker or tests until you've examined the existing testFailures and runTests results.
 - Environment tools are designed specifically for your use in this project and running them in this order will give you the best results.
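
The response convention added in the hunk above can be sketched as a pair of tiny helpers. This is a hypothetical illustration, not code from the NetAlertX repository; the helper names `ok` and `fail` are my own.

```python
# Hypothetical helpers illustrating the response convention described in
# the instructions diff above: every endpoint returns "success", and on
# failure an "error" message accompanies it.
def ok(payload=None):
    body = {"success": True}
    if payload:
        body.update(payload)
    return body

def fail(message):
    return {"success": False, "error": message}

print(fail("No stored open ports for Device"))
print(ok({"devices": []}))
```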

View File

@@ -47,6 +47,12 @@ jobs:
         id: get_version
         run: echo "version=Dev" >> $GITHUB_OUTPUT
+      # --- debug output
+      - name: Debug version
+        run: |
+          echo "GITHUB_REF: $GITHUB_REF"
+          echo "Version: '${{ steps.get_version.outputs.version }}'"
       # --- Write the timestamped version to .VERSION file
       - name: Create .VERSION file
         run: echo "${{ steps.timestamp.outputs.version }}" > .VERSION

View File

@@ -32,14 +32,34 @@ jobs:
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v3
+      # --- Previous approach Get release version from tag
+      - name: Set up dynamic build ARGs
+        id: getargs
+        run: echo "version=$(cat ./stable/VERSION)" >> $GITHUB_OUTPUT
+      - name: Get release version
+        id: get_version_prev
+        run: echo "::set-output name=version::${GITHUB_REF#refs/tags/}"
+      - name: Create .VERSION file
+        run: echo "${{ steps.get_version.outputs.version }}" >> .VERSION_PREV
       # --- Get release version from tag
       - name: Get release version
         id: get_version
         run: echo "version=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
+      # --- debug output
+      - name: Debug version
+        run: |
+          echo "GITHUB_REF: $GITHUB_REF"
+          echo "Version: '${{ steps.get_version.outputs.version }}'"
+          echo "Version prev: '${{ steps.get_version_prev.outputs.version }}'"
       # --- Write version to .VERSION file
       - name: Create .VERSION file
-        run: echo "${{ steps.get_version.outputs.version }}" > .VERSION
+        run: echo -n "${{ steps.get_version.outputs.version }}" > .VERSION
       # --- Generate Docker metadata and tags
       - name: Docker meta
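
The only functional change to the `Create .VERSION file` step above is `echo` becoming `echo -n`: plain `echo` appends a trailing newline, which makes exact string comparisons against the tag value fail. A minimal sketch of the effect (the version string is a hypothetical example):

```python
# `echo "$v" > .VERSION` writes the value plus "\n";
# `echo -n "$v" > .VERSION` writes the value exactly.
version = "v25.12.1"          # hypothetical tag value
echo_output = version + "\n"  # what plain `echo` produces
echo_n_output = version       # what `echo -n` produces

assert echo_output != version          # trailing newline breaks equality
assert echo_n_output == version
assert echo_output.strip() == version  # a read-side .strip() also works
```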

.gitignore vendored
View File

@@ -11,6 +11,7 @@ nohup.out
 config/*
 .ash_history
 .VERSION
+.VERSION_PREV
 config/pialert.conf
 config/app.conf
 db/*

View File

@@ -1,16 +1,16 @@
 # The NetAlertX Dockerfile has 3 stages:
 #
 # Stage 1. Builder - NetAlertX Requires special tools and packages to build our virtual environment, but
 # which are not needed in future stages. We build the builder and extract the venv for runner to use as
 # a base.
 #
 # Stage 2. Runner builds the bare minimum requirements to create an operational NetAlertX. The primary
 # reason for breaking at this stage is it leaves the system in a proper state for devcontainer operation.
 # This image also provides a break-out point for users who wish to execute the anti-pattern of using a
 # docker container as a VM for experimentation and various development patterns.
 #
 # Stage 3. Hardened removes root, sudoers, folders, permissions, and locks the system down into a read-only
 # compatible image. While NetAlertX does require some read-write operations, this image can guarantee the
 # code pushed out by the project is the only code which will run on the system after each container restart.
 # It reduces the chance of system hijacking and operates with all modern security protocols in place, as is
 # expected from a security appliance.
@@ -26,10 +26,10 @@ ENV PATH="/opt/venv/bin:$PATH"
 # Install build dependencies
 COPY requirements.txt /tmp/requirements.txt
-RUN apk add --no-cache bash shadow python3 python3-dev gcc musl-dev libffi-dev openssl-dev git \
+RUN apk add --no-cache bash shadow python3 python3-dev gcc musl-dev libffi-dev openssl-dev git rust cargo \
     && python -m venv /opt/venv
 # Create virtual environment owned by root, but readable by everyone else. This makes it easy to copy
 # into hardened stage without worrying about permissions and keeps image size small. Keeping the commands
 # together makes for a slightly smaller image size.
 RUN pip install --no-cache-dir -r /tmp/requirements.txt && \
@@ -95,11 +95,11 @@ ENV READ_WRITE_FOLDERS="${NETALERTX_DATA} ${NETALERTX_CONFIG} ${NETALERTX_DB} ${
     ${SYSTEM_SERVICES_ACTIVE_CONFIG}"
 # Python environment
 ENV PYTHONUNBUFFERED=1
 ENV VIRTUAL_ENV=/opt/venv
 ENV VIRTUAL_ENV_BIN=/opt/venv/bin
 ENV PYTHONPATH=${NETALERTX_APP}:${NETALERTX_SERVER}:${NETALERTX_PLUGINS}:${VIRTUAL_ENV}/lib/python3.12/site-packages
 ENV PATH="${SYSTEM_SERVICES}:${VIRTUAL_ENV_BIN}:$PATH"
 # App Environment
 ENV LISTEN_ADDR=0.0.0.0
@@ -110,7 +110,7 @@ ENV VENDORSPATH_NEWEST=${SYSTEM_SERVICES_RUN_TMP}/ieee-oui.txt
 ENV ENVIRONMENT=alpine
 ENV READ_ONLY_USER=readonly READ_ONLY_GROUP=readonly
 ENV NETALERTX_USER=netalertx NETALERTX_GROUP=netalertx
 ENV LANG=C.UTF-8
 RUN apk add --no-cache bash mtr libbsd zip lsblk tzdata curl arp-scan iproute2 iproute2-ss nmap \
@@ -138,6 +138,7 @@ RUN install -d -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} -m 700 ${READ_WRITE_FO
 # Copy version information into the image
 COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} .[V]ERSION ${NETALERTX_APP}/.VERSION
+COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} .[V]ERSION ${NETALERTX_APP}/.VERSION_PREV
 # Copy the virtualenv from the builder stage
 COPY --from=builder --chown=20212:20212 ${VIRTUAL_ENV} ${VIRTUAL_ENV}
@@ -147,12 +148,12 @@ COPY --from=builder --chown=20212:20212 ${VIRTUAL_ENV} ${VIRTUAL_ENV}
 # This is done after the copy of the venv to ensure the venv is in place
 # although it may be quicker to do it before the copy, it keeps the image
 # layers smaller to do it after.
-RUN if [ -f '.VERSION' ]; then \
-        cp '.VERSION' "${NETALERTX_APP}/.VERSION"; \
-    else \
-        echo "DEVELOPMENT 00000000" > "${NETALERTX_APP}/.VERSION"; \
-    fi && \
-    chown 20212:20212 "${NETALERTX_APP}/.VERSION" && \
+RUN for vfile in .VERSION .VERSION_PREV; do \
+        if [ ! -f "${NETALERTX_APP}/${vfile}" ]; then \
+            echo "DEVELOPMENT 00000000" > "${NETALERTX_APP}/${vfile}"; \
+        fi; \
+        chown 20212:20212 "${NETALERTX_APP}/${vfile}"; \
+    done && \
     apk add --no-cache libcap && \
     setcap cap_net_raw+ep /bin/busybox && \
     setcap cap_net_raw,cap_net_admin+eip /usr/bin/nmap && \
@@ -180,7 +181,7 @@ ENV UMASK=0077
 # Create readonly user and group with no shell access.
 # Readonly user marks folders that are created by NetAlertX, but should not be modified.
 # AI may claim this is stupid, but it's actually least possible permissions as
 # read-only user cannot login, cannot sudo, has no write permission, and cannot even
 # read the files it owns. The read-only user is ownership-as-a-lock hardening pattern.
 RUN addgroup -g 20212 "${READ_ONLY_GROUP}" && \

View File

@@ -239,29 +239,7 @@ services:
 4. Start the container and verify everything works as expected.
 5. Stop the container.
-6. Perform a one-off migration to the latest `netalertx` image and `20211` user:
-   > [!NOTE]
-   > The example below assumes your `/config` and `/db` folders are stored in `local_data_dir`.
-   > Replace this path with your actual configuration directory. `netalertx` is the container name, which might differ from your setup.
-   ```sh
-   docker run -it --rm --name netalertx --user "0" \
-     -v /local_data_dir/config:/data/config \
-     -v /local_data_dir/db:/data/db \
-     --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
-     ghcr.io/jokob-sk/netalertx:latest
-   ```
-   ..or alternatively execute:
-   ```bash
-   sudo chown -R 20211:20211 /local_data_dir
-   sudo chmod -R a+rwx /local_data_dir/
-   ```
-7. Stop the container
-8. Update the `docker-compose.yml` as per example below.
+6. Update the `docker-compose.yml` as per example below.
 ```yaml
 services:
@@ -288,5 +266,33 @@ services:
   - "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
   # 🆕 New "tmpfs" section END 🔼
 ```
-9. Start the container and verify everything works as expected.
+7. Perform a one-off migration to the latest `netalertx` image and `20211` user.
+   > [!NOTE]
+   > The examples below assume your `/config` and `/db` folders are stored in `local_data_dir`.
+   > Replace this path with your actual configuration directory. `netalertx` is the container name, which might differ from your setup.
+
+   **Automated approach**:
+   Run the container with the `--user "0"` parameter. Please note, some systems will require the manual approach below.
+   ```sh
+   docker run -it --rm --name netalertx --user "0" \
+     -v /local_data_dir/config:/data/config \
+     -v /local_data_dir/db:/data/db \
+     --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
+     ghcr.io/jokob-sk/netalertx:latest
+   ```
+   Stop the container and run it as you would normally.
+
+   **Manual approach**:
+   Use the manual approach if the automated approach fails. Execute the commands below:
+   ```bash
+   sudo chown -R 20211:20211 /local_data_dir
+   sudo chmod -R a+rwx /local_data_dir
+   ```
+8. Start the container and verify everything works as expected.

View File

@@ -13,13 +13,13 @@ There is also an in-app Help / FAQ section that should be answering frequently a
 #### 🐳 Docker (Fully supported)
 - The main installation method is as a [docker container - follow these instructions here](./DOCKER_INSTALLATION.md).
 #### 💻 Bare-metal / On-server (Experimental/community supported 🧪)
 - [(Experimental 🧪) On-hardware instructions](./HW_INSTALL.md)
 - Alternative bare-metal install forks:
   - [leiweibau's fork](https://github.com/leiweibau/Pi.Alert/) (maintained)
   - [pucherot's original code](https://github.com/pucherot/Pi.Alert/) (un-maintained)
@@ -63,7 +63,6 @@ There is also an in-app Help / FAQ section that should be answering frequently a
 #### ♻ Misc
-- [Version history (legacy)](./VERSIONS_HISTORY.md)
 - [Reverse proxy (Nginx, Apache, SWAG)](./REVERSE_PROXY.md)
 - [Installing Updates](./UPDATES.md)
 - [Setting up Authelia](./AUTHELIA.md) (DRAFT)
@@ -80,27 +79,27 @@ There is also an in-app Help / FAQ section that should be answering frequently a
 - [Frontend development tips](./FRONTEND_DEVELOPMENT.md)
 - [Webhook secrets](./WEBHOOK_SECRET.md)
 Feel free to suggest or submit new docs via a PR.
 ## 👨‍💻 Development priorities
 Priorities from highest to lowest:
 * 🔼 Fixing core functionality bugs not solvable with workarounds
 * 🔵 New core functionality unlocking other opportunities (e.g.: plugins)
 * 🔵 Refactoring enabling faster implementation of future functionality
 * 🔽 (low) UI functionality & improvements (PRs welcome 😉)
 Design philosophy: Focus on core functionality and leverage existing apps and tools to make NetAlertX integrate into other workflows.
 Examples:
 1. Supporting apprise makes more sense than implementing multiple individual notification gateways
 2. Implementing regular expression support across settings for validation makes more sense than validating one setting with a specific expression.
 UI-specific requests are a low priority as the framework picked by the original developer is not very extensible (and afaik doesn't support components) and has limited mobile support. Also, I argue the value proposition is smaller than working on something else.
 Feel free to submit PRs if interested. Try to **keep the PRs small/on-topic** so they are easier to review and approve.
 That being said, I'd reconsider if more people and/or recurring sponsors file a request 😉.
@@ -112,8 +111,8 @@ Please be as detailed as possible with **workarounds** you considered and why a
 If you submit a PR please:
 1. Check that your changes are backward compatible with existing installations and with a blank setup.
 2. Existing features should always be preserved.
 3. Keep the PR small, on-topic, and don't change code that is not necessary for the PR to work.
 4. New features code should ideally be re-usable for different purposes, not for a very narrow use case.
 5. New functionality should ideally be implemented via the Plugins system, if possible.
@@ -131,13 +130,13 @@ Suggested test cases:
 Some additional context:
 * Permanent settings/config is stored in the `app.conf` file
 * Currently temporary (session?) settings are stored in the `Parameters` DB table as key-value pairs. This table is wiped during a container rebuild/restart and its values are re-initialized from cookies/session data from the browser.
 ## 🐛 Submitting an issue or bug
 Before submitting a new issue please spend a couple of minutes on research:
 * Check [🛑 Common issues](./DEBUG_TIPS.md#common-issues)
 * Check [💡 Closed issues](https://github.com/jokob-sk/NetAlertX/issues?q=is%3Aissue+is%3Aclosed) if a similar issue was solved in the past.
 * When submitting an issue ❗[enable debug](./DEBUG_TIPS.md)❗

View File

@@ -378,7 +378,7 @@ function localizeTimestamp(input) {
let tz = getSetting("TIMEZONE") || 'Europe/Berlin'; let tz = getSetting("TIMEZONE") || 'Europe/Berlin';
input = String(input || '').trim(); input = String(input || '').trim();
// 1. Unix timestamps (10 or 13 digits) // 1. Unix timestamps (10 or 13 digits)
if (/^\d+$/.test(input)) { if (/^\d+$/.test(input)) {
const ms = input.length === 10 ? parseInt(input, 10) * 1000 : parseInt(input, 10); const ms = input.length === 10 ? parseInt(input, 10) * 1000 : parseInt(input, 10);
return new Intl.DateTimeFormat('default', { return new Intl.DateTimeFormat('default', {
@@ -389,39 +389,59 @@ function localizeTimestamp(input) {
}).format(new Date(ms)); }).format(new Date(ms));
} }
// 2. European DD/MM/YYYY // 2. European DD/MM/YYYY
let match = input.match(/^(\d{1,2})\/(\d{1,2})\/(\d{4})(?:[ ,]+(\d{1,2}:\d{2}(?::\d{2})?))?(.*)$/); let match = input.match(/^(\d{1,2})\/(\d{1,2})\/(\d{4})(?:[ ,]+(\d{1,2}:\d{2}(?::\d{2})?))?$/);
if (match) { if (match) {
let [ , d, m, y, t = "00:00:00", tzPart = "" ] = match; let [, d, m, y, t = "00:00:00", tzPart = ""] = match;
const iso = `${y}-${m.padStart(2,'0')}-${d.padStart(2,'0')}T${t.length===5?t+":00":t}${tzPart}`; const dNum = parseInt(d, 10);
return formatSafe(iso, tz); const mNum = parseInt(m, 10);
if (dNum <= 12 && mNum > 12) {
} else {
const iso = `${y}-${m.padStart(2,'0')}-${d.padStart(2,'0')}T${t.length===5 ? t + ":00" : t}${tzPart}`;
return formatSafe(iso, tz);
}
} }
// 3. US MM/DD/YYYY // 3. US MM/DD/YYYY
match = input.match(/^(\d{1,2})\/(\d{1,2})\/(\d{4})(?:[ ,]+(\d{1,2}:\d{2}(?::\d{2})?))?(.*)$/); match = input.match(/^(\d{1,2})\/(\d{1,2})\/(\d{4})(?:[ ,]+(\d{1,2}:\d{2}(?::\d{2})?))?(.*)$/);
if (match) { if (match) {
let [ , m, d, y, t = "00:00:00", tzPart = "" ] = match; let [, m, d, y, t = "00:00:00", tzPart = ""] = match;
const iso = `${y}-${m.padStart(2,'0')}-${d.padStart(2,'0')}T${t.length===5?t+":00":t}${tzPart}`; const iso = `${y}-${m.padStart(2,'0')}-${d.padStart(2,'0')}T${t.length===5?t+":00":t}${tzPart}`;
return formatSafe(iso, tz); return formatSafe(iso, tz);
} }
// 4. ISO-style (with T, Z, offsets) // 4. ISO YYYY-MM-DD with optional Z/+offset
match = input.match(/^(\d{4}-\d{1,2}-\d{1,2})[ T](\d{1,2}:\d{2}(?::\d{2})?)(Z|[+-]\d{2}:?\d{2})?$/); match = input.match(/^(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])[ T](\d{1,2}:\d{2}(?::\d{2})?)(Z|[+-]\d{2}:?\d{2})?$/);
if (match) { if (match) {
let [ , ymd, time, offset = "" ] = match; let [, y, m, d, time, offset = ""] = match;
// normalize to YYYY-MM-DD
let [y, m, d] = ymd.split('-').map(x => x.padStart(2,'0'));
const iso = `${y}-${m}-${d}T${time.length===5?time+":00":time}${offset}`; const iso = `${y}-${m}-${d}T${time.length===5?time+":00":time}${offset}`;
return formatSafe(iso, tz); return formatSafe(iso, tz);
} }
// 5. RFC2822 / "25 Aug 2025 13:45:22 +0200" // 5. RFC2822 / "25 Aug 2025 13:45:22 +0200"
match = input.match(/^\d{1,2} [A-Za-z]{3,} \d{4}/); match = input.match(/^\d{1,2} [A-Za-z]{3,} \d{4}/);
if (match) { if (match) {
return formatSafe(input, tz); return formatSafe(input, tz);
} }
// 6. Fallback (whatever Date() can parse) // 6. DD-MM-YYYY with optional time
match = input.match(/^(\d{1,2})-(\d{1,2})-(\d{4})(?:[ T](\d{1,2}:\d{2}(?::\d{2})?))?$/);
if (match) {
let [, d, m, y, time = "00:00:00"] = match;
const iso = `${y}-${m.padStart(2,'0')}-${d.padStart(2,'0')}T${time.length===5?time+":00":time}`;
return formatSafe(iso, tz);
}
// 7. Strict YYYY-DD-MM with optional time
match = input.match(/^(\d{4})-(0[1-9]|[12]\d|3[01])-(0[1-9]|1[0-2])(?:[ T](\d{1,2}:\d{2}(?::\d{2})?))?$/);
if (match) {
let [, y, d, m, time = "00:00:00"] = match;
const iso = `${y}-${m}-${d}T${time.length === 5 ? time + ":00" : time}`;
return formatSafe(iso, tz);
}
// 8. Fallback
return formatSafe(input, tz);
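The detection cascade above tries progressively looser patterns before handing the raw string to `Date()`. A minimal standalone sketch of the same idea — the `toIso` helper below is hypothetical (it is not the page's `formatSafe`), and it only condenses two of the branches: the strict ISO pattern and the MM/DD vs DD/MM swap:

```javascript
// Hypothetical condensed sketch of the cascade above: returns a
// normalized ISO-like string, or null to signal "fall back to Date()".
function toIso(input) {
  // Strict ISO YYYY-MM-DD with optional time and Z/offset (pattern 4)
  let m = input.match(/^(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])[ T](\d{1,2}:\d{2}(?::\d{2})?)(Z|[+-]\d{2}:?\d{2})?$/);
  if (m) {
    const [, y, mo, d, t, off = ""] = m;
    return `${y}-${mo}-${d}T${t.length === 5 ? t + ":00" : t}${off}`;
  }
  // Slash-separated date, nominally US MM/DD/YYYY (pattern 3)
  m = input.match(/^(\d{1,2})\/(\d{1,2})\/(\d{4})$/);
  if (m) {
    let [, mo, d, y] = m;
    // If the "month" is impossible (>12) but the "day" fits, it was DD/MM: swap.
    if (parseInt(mo, 10) > 12 && parseInt(d, 10) <= 12) [mo, d] = [d, mo];
    return `${y}-${mo.padStart(2, "0")}-${d.padStart(2, "0")}T00:00:00`;
  }
  return null; // the real code falls through to whatever Date() can parse
}
```

The ordering matters: the strict ISO regex must run before the looser slash pattern, otherwise an unambiguous input could be re-interpreted by a more permissive branch.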
function formatSafe(str, tz) {
@@ -440,6 +460,7 @@ function localizeTimestamp(input) {
}
// ----------------------------------------------------
/**
 * Replaces double quotes within single-quoted strings, then converts all single quotes to double quotes,
@@ -1629,7 +1650,7 @@ async function executeOnce() {
await cacheSettings();
await cacheStrings();
console.log("All AJAX callbacks have completed");
onAllCallsComplete();
} catch (error) {
console.error("Error:", error);


@@ -1,26 +1,26 @@
<?php
require 'php/templates/header.php';
require 'php/templates/modals.php';
?>
<script>
// show spinning icon
showSpinner()
</script>
<!-- Page ------------------------------------------------------------------ -->
<div class="content-wrapper">
<span class="helpIcon">
<a target="_blank" href="https://github.com/jokob-sk/NetAlertX/blob/main/docs/NETWORK_TREE.md">
<i class="fa fa-circle-question"></i>
</a>
</span>
<div id="toggleFilters" class="">
<div class="checkbox icheck col-xs-12">
<label>
<input type="checkbox" name="showOffline" checked>
<div style="margin-left: 10px; display: inline-block; vertical-align: top;">
<?= lang('Network_ShowOffline');?>
<span id="showOfflineNumber">
<!-- placeholder -->
@@ -31,14 +31,14 @@
<div class="checkbox icheck col-xs-12">
<label>
<input type="checkbox" name="showArchived">
<div style="margin-left: 10px; display: inline-block; vertical-align: top;">
<?= lang('Network_ShowArchived');?>
<span id="showArchivedNumber">
<!-- placeholder -->
</span>
</div>
</label>
</div>
</div>
<div id="networkTree" class="drag">
@@ -55,8 +55,8 @@
</div>
<div class="tab-content">
<!-- Placeholder -->
</div>
</section>
<section id="unassigned-devices-wrapper">
<!-- Placeholder -->
</section>
@@ -69,7 +69,7 @@
require 'php/templates/footer.php';
?>
<script src="lib/treeviz/bundle.js"></script>
<script defer>
@@ -78,12 +78,12 @@
// Create Top level tabs (List of network devices), explanation of the terminology below:
//
//          Switch 1 (node)
//         /(p1)    \ (p2)   <----- port numbers
//        /          \
// Smart TV (leaf)   Switch 2 (node (for the PC) and leaf (for Switch 1))
//                     \
//                      PC (leaf) <------- leafs are not included in this SQL query
const rawSql = `
SELECT node_name, node_mac, online, node_type, node_ports_count, parent_mac, node_icon, node_alert
FROM (
@@ -120,7 +120,7 @@
const portLabel = node.node_ports_count ? ` (${node.node_ports_count})` : '';
const icon = atob(node.node_icon);
const id = node.node_mac.replace(/:/g, '_');
html += `
<li class="networkNodeTabHeaders ${i === 0 ? 'active' : ''}">
@@ -137,13 +137,13 @@
renderNetworkTabContent(nodes);
// init selected (first) tab
initTab();
// init selected node highlighting
initSelectedNodeHighlighting()
// Register events on tab change
$('a[data-toggle="tab"]').on('shown.bs.tab', function (e) {
initSelectedNodeHighlighting()
});
}
@@ -205,10 +205,10 @@
<hr/>
<div class="box box-aqua box-body" id="connected">
<h5>
<i class="fa fa-sitemap fa-rotate-270"></i>
${getString('Network_Connected')}
</h5>
<div id="leafs_${id}" class="table-responsive"></div>
</div>
</div>
@@ -234,9 +234,9 @@
return;
}
$container.html(wrapperHtml);
const $table = $(`#${tableId}`);
const columns = [
@@ -298,7 +298,7 @@
title: getString('Device_TableHead_Vendor'),
data: 'devVendor',
width: '20%'
}
].filter(Boolean);
@@ -356,7 +356,7 @@
function loadConnectedDevices(node_mac) {
const sql = `
SELECT devName, devMac, devLastIP, devVendor, devPresentLastScan, devAlertDown, devParentPort,
CASE
WHEN devIsNew = 1 THEN 'New'
WHEN devPresentLastScan = 1 THEN 'On-line'
WHEN devPresentLastScan = 0 AND devAlertDown != 0 THEN 'Down'
@@ -371,7 +371,7 @@
const wrapperHtml = `
<table class="table table-bordered table-striped node-leafs-table " id="table_leafs_${id}" data-node-mac="${node_mac}">
</table>`;
loadDeviceTable({
@@ -414,12 +414,12 @@
$.get(apiUrl, function (data) {
console.log(data);
const parsed = JSON.parse(data);
const allDevices = parsed;
console.log(allDevices);
if (!allDevices || allDevices.length === 0) {
showModalOK(getString('Gen_Warning'), getString('Network_NoDevices'));
@@ -439,7 +439,7 @@
{
$('#showArchivedNumber').text(`(${archivedCount})`);
}
if(offlineCount > 0)
{
$('#showOfflineNumber').text(`(${offlineCount})`);
@@ -501,7 +501,7 @@ var visibleNodesCount = 0;
var parentNodesCount = 0;
var hiddenMacs = []; // hidden children
var hiddenChildren = [];
var deviceListGlobal = null;
// ---------------------------------------------------------------------------
// Recursively get children nodes and build a tree
@@ -521,13 +521,17 @@ function getChildren(node, list, path, visited = [])
// Loop through all items to find children of the current node
for (var i in list) {
    const item = list[i];
    const parentMac = item.devParentMAC || ""; // null-safe
    const nodeMac = node.devMac || ""; // null-safe

    if (parentMac != "" && parentMac.toLowerCase() == nodeMac.toLowerCase() && !hiddenMacs.includes(parentMac)) {
        visibleNodesCount++;
        // Process children recursively, passing a copy of the visited list
        children.push(getChildren(item, list, path + ((path == "") ? "" : '|') + parentMac, visited));
    }
}
// Track leaf and parent node counts
@@ -537,7 +541,7 @@ function getChildren(node, list, path, visited = [])
parentNodesCount++;
}
return {
name: node.devName,
path: path,
mac: node.devMac,
@@ -562,19 +566,32 @@ function getChildren(node, list, path, visited = [])
};
}
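The null-safe guard added in `getChildren` above (defaulting a missing `devParentMAC` to `""` and comparing MACs case-insensitively) can be exercised in isolation. A reduced sketch, without the recursion or the counters — `childrenOf` is a hypothetical helper, not a function from this page:

```javascript
// Reduced sketch of the null-safe parent matching above: devices with a
// missing devParentMAC must never match, comparison is case-insensitive,
// and hidden parent MACs are skipped.
function childrenOf(node, list, hiddenMacs = []) {
  const nodeMac = (node.devMac || "").toLowerCase(); // null-safe
  return list.filter(item => {
    const parentMac = item.devParentMAC || ""; // null-safe
    return parentMac !== "" &&
           parentMac.toLowerCase() === nodeMac &&
           !hiddenMacs.includes(parentMac);
  });
}
```

Without the `|| ""` defaults, a device record with `devParentMAC: null` would throw on `.toLowerCase()` and abort the whole tree build, which is the failure mode the refactor guards against.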
// ---------------------------------------------------------------------------
function getHierarchy()
{
let internetNode = null;

for(i in deviceListGlobal)
{
    if(deviceListGlobal[i].devMac == 'Internet')
    {
        internetNode = deviceListGlobal[i];
        return (getChildren(internetNode, deviceListGlobal, ''))
    }
}
if (!internetNode) {
showModalOk(
getString('Network_Configuration_Error'),
getString('Network_Root_Not_Configured')
);
console.error("getHierarchy(): Internet node not found");
return null;
}
}
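The `getHierarchy()` change above replaces a silent failure with an explicit error path when no device has the MAC `'Internet'` (the tree's root). A minimal standalone sketch of that lookup-with-guard pattern — `findRoot` is hypothetical and stands in for the combination of the loop, the modal, and the `console.error` call:

```javascript
// Sketch of the root-node lookup above: return the device record that
// anchors the tree, or null after surfacing an error when it is missing.
function findRoot(deviceList, rootMac = "Internet") {
  for (const dev of deviceList) {
    if (dev.devMac === rootMac) {
      return dev; // found the root; the real code builds the subtree from it
    }
  }
  // the real code also shows a "Network_Root_Not_Configured" modal here
  console.error("findRoot(): root node not found");
  return null;
}
```

Callers must then handle a `null` return, which is exactly why `initTree` gains the `myHierarchy && myHierarchy.type !== ""` guard further down.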
//---------------------------------------------------------------------------
function toggleSubTree(parentMac, treePath)
{
@@ -593,33 +610,33 @@ function toggleSubTree(parentMac, treePath)
myTree.refresh(updatedTree);
// re-attach any onclick events
attachTreeEvents();
}
// ---------------------------------------------------------------------------
function attachTreeEvents()
{
// toggle subtree functionality
$("div[data-mytreemac]").each(function(){
$(this).attr('onclick', 'toggleSubTree("'+$(this).attr('data-mytreemac')+'","'+ $(this).attr('data-mytreepath')+'")')
});
}
// ---------------------------------------------------------------------------
// Handle network node click - select correct tab in the bottom table
function handleNodeClick(el)
{
isNetworkDevice = $(el).data("devisnetworknodedynamic") == 1;
targetTabMAC = ""
thisDevMac= $(el).data("mac");
if (isNetworkDevice == false)
{
targetTabMAC = $(el).data("parentmac");
} else
{
targetTabMAC = thisDevMac;
}
var targetTab = $(`a[data-mytabmac="${targetTabMAC}"]`);
@@ -628,8 +645,8 @@ function handleNodeClick(el)
// Simulate a click event on the target tab
targetTab.click();
}
if (isNetworkDevice) {
// Smooth scroll to the tab content
@@ -639,7 +656,7 @@ function handleNodeClick(el)
} else {
$("tr.selected").removeClass("selected");
$(`tr[data-mac="${thisDevMac}"]`).addClass("selected");
const tableId = "table_leafs_" + targetTabMAC.replace(/:/g, '_');
const $table = $(`#${tableId}`).DataTable();
@@ -669,10 +686,8 @@ function handleNodeClick(el)
}
}
// ---------------------------------------------------------------------------
var myTree;
var emSize;
var nodeHeight;
// var sizeCoefficient = 1.4
@@ -689,140 +704,139 @@ function emToPx(em, element) {
function initTree(myHierarchy)
{
if(myHierarchy && myHierarchy.type !== "")
{
// calculate the drawing area based on the tree width and available screen size
let baseFontSize = parseFloat($('html').css('font-size'));
let treeAreaHeight = ($(window).height() - 155);
// calculate the font size of the leaf nodes to fit everything into the tree area
leafNodesCount = leafNodesCount == 0 ? 1 : leafNodesCount; // guard against division by zero
emSize = pxToEm((treeAreaHeight/(leafNodesCount)).toFixed(2));
let screenWidthEm = pxToEm($('.networkTable').width()-15);
// init the drawing area size
$("#networkTree").attr('style', `height:${treeAreaHeight}px; width:${emToPx(screenWidthEm)}px`)
// handle canvas and node size if only a few nodes
emSize > 1 ? emSize = 1 : emSize = emSize;
let nodeHeightPx = emToPx(emSize*1);
let nodeWidthPx = emToPx(screenWidthEm / (parentNodesCount));
// handle if only a few nodes
nodeWidthPx > 160 ? nodeWidthPx = 160 : nodeWidthPx = nodeWidthPx;
console.log(Treeviz);
myTree = Treeviz.create({
htmlId: "networkTree",
renderNode: nodeData => {
(!emptyArr.includes(nodeData.data.port )) ? port = nodeData.data.port : port = "";
(port == "" || port == 0 || port == 'None' ) ? portBckgIcon = `<i class="fa fa-wifi"></i>` : portBckgIcon = `<i class="fa fa-ethernet"></i>`;
portHtml = (port == "" || port == 0 || port == 'None' ) ? " &nbsp " : port;
// Build HTML for individual nodes in the network diagram
deviceIcon = (!emptyArr.includes(nodeData.data.icon )) ?
`<div class="netIcon">
${atob(nodeData.data.icon)}
</div>` : "";
devicePort = `<div class="netPort"
style="width:${emSize}em;height:${emSize}em">
${portHtml}</div>
<div class="portBckgIcon"
style="margin-left:-${emSize*0.7}em;">
${portBckgIcon}
</div>`;
collapseExpandIcon = nodeData.data.hiddenChildren ?
"square-plus" : "square-minus";
// generate +/- icon if node has children nodes
collapseExpandHtml = nodeData.data.hasChildren ?
`<div class="netCollapse"
style="font-size:${nodeHeightPx/2}px;top:${Math.floor(nodeHeightPx / 4)}px"
data-mytreepath="${nodeData.data.path}"
data-mytreemac="${nodeData.data.mac}">
<i class="fa fa-${collapseExpandIcon} pointer"></i>
</div>` : "";
selectedNodeMac = $(".nav-tabs-custom .active a").attr('data-mytabmac')
highlightedCss = nodeData.data.mac == selectedNodeMac ?
" highlightedNode " : "";
cssNodeType = nodeData.data.devIsNetworkNodeDynamic ?
" node-network-device " : " node-standard-device ";
networkHardwareIcon = nodeData.data.devIsNetworkNodeDynamic ? `<span class="network-hw-icon">
<i class="fa-solid fa-hard-drive"></i>
</span>` : "";
const badgeConf = getStatusBadgeParts(nodeData.data.presentLastScan, nodeData.data.alertDown, nodeData.data.mac, statusText = '')
return result = `<div
class="node-inner hover-node-info box pointer ${highlightedCss} ${cssNodeType}"
style="height:${nodeHeightPx}px;font-size:${nodeHeightPx-5}px;"
onclick="handleNodeClick(this)"
data-mac="${nodeData.data.mac}"
data-parentMac="${nodeData.data.parentMac}"
data-name="${nodeData.data.name}"
data-ip="${nodeData.data.ip}"
data-mac="${nodeData.data.mac}"
data-vendor="${nodeData.data.vendor}"
data-type="${nodeData.data.type}"
data-devIsNetworkNodeDynamic="${nodeData.data.devIsNetworkNodeDynamic}"
data-lastseen="${nodeData.data.lastseen}"
data-firstseen="${nodeData.data.firstseen}"
data-relationship="${nodeData.data.relType}"
data-status="${nodeData.data.status}"
data-present="${nodeData.data.presentLastScan}"
data-alert="${nodeData.data.alertDown}"
data-icon="${nodeData.data.icon}"
>
<div class="netNodeText">
<strong><span>${devicePort} <span class="${badgeConf.cssText}">${deviceIcon}</span></span>
<span class="spanNetworkTree anonymizeDev" style="width:${nodeWidthPx-50}px">${nodeData.data.name}</span>
${networkHardwareIcon}
</strong>
</div>
</div>
${collapseExpandHtml}`;
},
mainAxisNodeSpacing: 'auto',
// secondaryAxisNodeSpacing: 0.3,
nodeHeight: nodeHeightPx,
nodeWidth: nodeWidthPx,
marginTop: '5',
isHorizontal : true,
hasZoom: true,
hasPan: true,
marginLeft: '10',
marginRight: '10',
idKey: "mac",
hasFlatData: false,
relationnalField: "children",
linkWidth: (nodeData) => 2,
linkColor: (nodeData) => {
relConf = getRelationshipConf(nodeData.data.relType)
return relConf.color;
}
// onNodeClick: (nodeData) => handleNodeClick(nodeData),
});
console.log(deviceListGlobal);
myTree.refresh(myHierarchy);
// hide spinning icon
hideSpinner()
} else
{
console.error("getHierarchy() not returning expected result");
}
}
@@ -839,11 +853,11 @@ function initTab()
selectedTab = "Internet_id"
// the #target from the url
target = getQueryString('mac')
// update cookie if target specified
if(target != "")
{
setCache(key, target.replaceAll(":","_")+'_id') // _id is added so it doesn't conflict with AdminLTE tab behavior
}
@@ -860,12 +874,12 @@ function initTab()
$('a[data-toggle="tab"]').on('shown.bs.tab', function (e) {
setCache(key, $(e.target).attr('id'))
});
}
// ---------------------------------------------------------------------------
function initSelectedNodeHighlighting()
{
var currentNodeMac = $(".networkNodeTabHeaders.active a").data("mytabmac");
@@ -882,7 +896,7 @@ function initSelectedNodeHighlighting()
newSelNode = $("#networkTree div[data-mac='"+currentNodeMac+"']")[0]
console.log(newSelNode)
$(newSelNode).attr('class', $(newSelNode).attr('class') + ' highlightedNode')
}
@@ -913,7 +927,7 @@ function updateLeaf(leafMac, action) {
}
// ---------------------------------------------------------------------------
// showing icons or device names in tabs depending on available screen size
function checkTabsOverflow() {
const $ul = $('.nav-tabs');
const $lis = $ul.find('li');

front/php/templates/language/fr_fr.json Executable file → Normal file

@@ -311,7 +311,7 @@
"Gen_Filter": "Filtrer",
"Gen_Generate": "Générer",
"Gen_InvalidMac": "Adresse MAC invalide.",
"Gen_Invalid_Value": "Une valeur invalide a été renseignée",
"Gen_LockedDB": "Erreur - La base de données est peut-être verrouillée - Vérifier avec les outils de dév via F12 -> Console ou essayer plus tard.",
"Gen_NetworkMask": "Masque réseau",
"Gen_Offline": "Hors ligne",
@@ -762,4 +762,4 @@
"settings_system_label": "Système",
"settings_update_item_warning": "Mettre à jour la valeur ci-dessous. Veillez à bien suivre le même format qu'auparavant. <b>Il n'y a pas de contrôle.</b>",
"test_event_tooltip": "Enregistrer d'abord vos modifications avant de tester votre paramétrage."
}

File diff suppressed because it is too large


@@ -311,7 +311,7 @@
"Gen_Filter": "Фильтр",
"Gen_Generate": "Генерировать",
"Gen_InvalidMac": "Неверный Mac-адрес.",
"Gen_Invalid_Value": "Введено некорректное значение",
"Gen_LockedDB": "ОШИБКА - Возможно, база данных заблокирована. Проверьте инструменты разработчика F12 -> Консоль или повторите попытку позже.",
"Gen_NetworkMask": "Маска сети",
"Gen_Offline": "Оффлайн",
@@ -762,4 +762,4 @@
"settings_system_label": "Система",
"settings_update_item_warning": "Обновить значение ниже. Будьте осторожны, следуя предыдущему формату. <b>Проверка не выполняется.</b>",
"test_event_tooltip": "Сначала сохраните изменения, прежде чем проверять настройки."
}

front/php/templates/language/uk_ua.json Executable file → Normal file

@@ -311,7 +311,7 @@
"Gen_Filter": "Фільтр",
"Gen_Generate": "Генерувати",
"Gen_InvalidMac": "Недійсна Mac-адреса.",
"Gen_Invalid_Value": "Введено недійсне значення",
"Gen_LockedDB": "ПОМИЛКА БД може бути заблоковано перевірте F12 Інструменти розробника -> Консоль або спробуйте пізніше.",
"Gen_NetworkMask": "Маска мережі",
"Gen_Offline": "Офлайн",
@@ -762,4 +762,4 @@
"settings_system_label": "Система",
"settings_update_item_warning": "Оновіть значення нижче. Слідкуйте за попереднім форматом. <b>Перевірка не виконана.</b>",
"test_event_tooltip": "Перш ніж перевіряти налаштування, збережіть зміни."
}


@@ -12,7 +12,6 @@ from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
@@ -98,9 +97,7 @@ def main():
{"devMac": "00:11:22:33:44:57", "devLastIP": "192.168.1.82"},
]
else:
device_handler = DeviceInstance()
devices = (
device_handler.getAll()
if get_setting_value("REFRESH_FQDN")


@@ -11,7 +11,6 @@ from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
@@ -38,15 +37,11 @@ def main():
timeout = get_setting_value('DIGSCAN_RUN_TIMEOUT')
# Initialize the Plugin obj output file
plugin_objects = Plugin_Objects(RESULT_FILE)
# Create a DeviceInstance instance
device_handler = DeviceInstance()
# Retrieve devices
if get_setting_value("REFRESH_FQDN"):

View File

@@ -15,7 +15,6 @@ from plugin_helper import Plugin_Objects  # noqa: E402
from logger import mylog, Logger  # noqa: E402
from helper import get_setting_value  # noqa: E402
from const import logPath  # noqa: E402
-from database import DB  # noqa: E402
from models.device_instance import DeviceInstance  # noqa: E402
import conf  # noqa: E402
from pytz import timezone  # noqa: E402
@@ -41,15 +40,11 @@ def main():
    args = get_setting_value('ICMP_ARGS')
    in_regex = get_setting_value('ICMP_IN_REGEX')
-    # Create a database connection
-    db = DB()  # instance of class DB
-    db.open()
    # Initialize the Plugin obj output file
    plugin_objects = Plugin_Objects(RESULT_FILE)
    # Create a DeviceInstance instance
-    device_handler = DeviceInstance(db)
+    device_handler = DeviceInstance()
    # Retrieve devices
    all_devices = device_handler.getAll()

View File

@@ -12,7 +12,6 @@ from plugin_helper import Plugin_Objects  # noqa: E402
from logger import mylog, Logger  # noqa: E402
from const import logPath  # noqa: E402
from helper import get_setting_value  # noqa: E402
-from database import DB  # noqa: E402
from models.device_instance import DeviceInstance  # noqa: E402
import conf  # noqa: E402
from pytz import timezone  # noqa: E402
@@ -40,15 +39,11 @@ def main():
    # timeout = get_setting_value('NBLOOKUP_RUN_TIMEOUT')
    timeout = 20
-    # Create a database connection
-    db = DB()  # instance of class DB
-    db.open()
    # Initialize the Plugin obj output file
    plugin_objects = Plugin_Objects(RESULT_FILE)
    # Create a DeviceInstance instance
-    device_handler = DeviceInstance(db)
+    device_handler = DeviceInstance()
    # Retrieve devices
    if get_setting_value("REFRESH_FQDN"):

View File

@@ -15,7 +15,6 @@ from plugin_helper import Plugin_Objects  # noqa: E402
from logger import mylog, Logger  # noqa: E402
from helper import get_setting_value  # noqa: E402
from const import logPath  # noqa: E402
-from database import DB  # noqa: E402
from models.device_instance import DeviceInstance  # noqa: E402
import conf  # noqa: E402
from pytz import timezone  # noqa: E402
@@ -39,15 +38,11 @@ def main():
    timeout = get_setting_value('NSLOOKUP_RUN_TIMEOUT')
-    # Create a database connection
-    db = DB()  # instance of class DB
-    db.open()
    # Initialize the Plugin obj output file
    plugin_objects = Plugin_Objects(RESULT_FILE)
    # Create a DeviceInstance instance
-    device_handler = DeviceInstance(db)
+    device_handler = DeviceInstance()
    # Retrieve devices
    if get_setting_value("REFRESH_FQDN"):

View File

@@ -256,13 +256,11 @@ def main():
    start_time = time.time()
    mylog("verbose", [f"[{pluginName}] starting execution"])
-    from database import DB
    from models.device_instance import DeviceInstance
-    db = DB()  # instance of class DB
-    db.open()
    # Create a DeviceInstance instance
-    device_handler = DeviceInstance(db)
+    device_handler = DeviceInstance()
    # Retrieve configuration settings
    # these should be self-explanatory
    omada_sites = []
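The same refactor repeats across all of the plugin diffs above: the three-line `DB()` wiring disappears and `DeviceInstance` is constructed with no arguments. A minimal sketch of the before/after call-site shape — the stand-in classes below are illustrative; the real `DeviceInstance` presumably opens its NetAlertX database connection internally:

```python
class DB:
    """Stand-in for the old explicit database handle each plugin created."""
    def __init__(self):
        self.opened = False

    def open(self):
        self.opened = True


class DeviceInstance:
    """After the refactor the model owns its connection,
    so call sites shrink from three lines to one."""
    def __init__(self, db=None):
        if db is None:  # new style: no handle passed in
            db = DB()
            db.open()
        self.db = db

    def getAll(self):
        return []  # placeholder for the real device query


# Old call site (removed in this diff):
db = DB()
db.open()
legacy_handler = DeviceInstance(db)

# New call site:
device_handler = DeviceInstance()
devices = device_handler.getAll()
```

Centralizing the connection also removes the risk of a plugin forgetting `db.open()` before handing the handle over.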

View File

@@ -6,7 +6,7 @@ A plugin for importing devices from an SNMP-enabled router or switch. Using SNMP
Specify the following settings in the Settings section of NetAlertX:
- `SNMPDSC_routers` - A list of `snmpwalk` commands to execute against IP addresses of routers/switches with SNMP turned on. For example:
-  - `snmpwalk -v 2c -c public -OXsq 192.168.1.1 .1.3.6.1.2.1.3.1.1.2`
+  - `snmpwalk -v 2c -c public -Oxsq 192.168.1.1 .1.3.6.1.2.1.3.1.1.2` (note: lower case `x`)
@@ -14,6 +14,14 @@ Specify the following settings in the Settings section of NetAlertX:
If unsure, please check [snmpwalk examples](https://www.comparitech.com/net-admin/snmpwalk-examples-windows-linux/).
+Supported output formats:
+```
+ipNetToMediaPhysAddress[3][192.168.1.9] 6C:6C:6C:6C:6C:C1
+IP-MIB::ipNetToMediaPhysAddress.17.10.10.3.202 = STRING: f8:81:1a:ef:ef:ef
+mib-2.3.1.1.2.15.1.192.168.1.14 "2C F4 32 18 61 43 "
+```
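A rough sketch of how these three output shapes can be parsed into `(ip, mac)` pairs, following the splitting logic visible in the plugin diff below. The function name and normalization are illustrative, not the plugin's actual API:

```python
def parse_snmp_line(line):
    """Best-effort parse of the three snmpwalk output shapes into (ip, mac).

    Illustrative only; the plugin's real parsing lives in the SNMPDSC script.
    Returns None for lines that match no known shape."""
    line = line.strip()
    # IP-MIB::ipNetToMediaPhysAddress.17.10.10.3.202 = STRING: f8:81:1a:ef:ef:ef
    if "ipNetToMediaPhysAddress" in line and "STRING:" in line and "=" in line:
        left, right = line.split("=", 1)
        mac = right.split("STRING:")[-1].strip().lower()
        ip = ".".join(left.strip().split(".")[-4:])  # OID tail is the IPv4 address
        return ip, mac
    # ipNetToMediaPhysAddress[3][192.168.1.9] 6C:6C:6C:6C:6C:C1
    if line.startswith("ipNetToMediaPhysAddress"):
        parts = line.split()
        ip = parts[0].split("[")[-1][:-1]  # strip the trailing ']'
        return ip, parts[1].lower()
    # mib-2.3.1.1.2.15.1.192.168.1.14 "2C F4 32 18 61 43 "
    if line.count('"') == 2:
        oid, mac_raw, _ = line.split('"')
        ip = ".".join(oid.strip().split(".")[-4:])
        return ip, ":".join(mac_raw.split()).lower()
    return None
```

Each branch mirrors one of the formats above; ordering matters because the `STRING:` variant also contains the `ipNetToMediaPhysAddress` prefix.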
### Setup Cisco IOS ### Setup Cisco IOS
Enable IOS SNMP service and restrict to selected (internal) IP/Subnet. Enable IOS SNMP service and restrict to selected (internal) IP/Subnet.

View File

@@ -30,7 +30,7 @@ RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
def main():
-    mylog('verbose', ['[SNMPDSC] In script '])
+    mylog('verbose', f"[{pluginName}] In script ")
    # init global variables
    global snmpWalkCmds
@@ -57,7 +57,7 @@ def main():
        commands = [snmpWalkCmds]
    for cmd in commands:
-        mylog('verbose', ['[SNMPDSC] Router snmpwalk command: ', cmd])
+        mylog('verbose', [f"[{pluginName}] Router snmpwalk command: ", cmd])
        # split the string, remove white spaces around each item, and exclude any empty strings
        snmpwalkArgs = [arg.strip() for arg in cmd.split(' ') if arg.strip()]
@@ -72,7 +72,7 @@ def main():
            timeout=(timeoutSetting)
        )
-        mylog('verbose', ['[SNMPDSC] output: ', output])
+        mylog('verbose', [f"[{pluginName}] output: ", output])
        lines = output.split('\n')
@@ -80,6 +80,8 @@ def main():
            tmpSplt = line.split('"')
+            # Expected Format:
+            # mib-2.3.1.1.2.15.1.192.168.1.14 "2C F4 32 18 61 43 "
            if len(tmpSplt) == 3:
                ipStr = tmpSplt[0].split('.')[-4:]  # Get the last 4 elements to extract the IP
@@ -89,7 +91,7 @@ def main():
                macAddress = ':'.join(macStr)
                ipAddress = '.'.join(ipStr)
-                mylog('verbose', [f'[SNMPDSC] IP: {ipAddress} MAC: {macAddress}'])
+                mylog('verbose', [f"[{pluginName}] IP: {ipAddress} MAC: {macAddress}"])
                plugin_objects.add_object(
                    primaryId = handleEmpty(macAddress),
@@ -100,8 +102,40 @@ def main():
                    foreignKey = handleEmpty(macAddress)  # Use the primary ID as the foreign key
                )
                else:
-                    mylog('verbose', ['[SNMPDSC] ipStr does not seem to contain a valid IP:', ipStr])
+                    mylog('verbose', [f"[{pluginName}] ipStr does not seem to contain a valid IP:", ipStr])
+            # Expected Format:
+            # IP-MIB::ipNetToMediaPhysAddress.17.10.10.3.202 = STRING: f8:81:1a:ef:ef:ef
+            elif "ipNetToMediaPhysAddress" in line and "=" in line and "STRING:" in line:
+                # Split on "=" → ["IP-MIB::ipNetToMediaPhysAddress.xxx.xxx.xxx.xxx ", " STRING: aa:bb:cc:dd:ee:ff"]
+                left, right = line.split("=", 1)
+                # Extract the MAC (right side)
+                macAddress = right.split("STRING:")[-1].strip()
+                macAddress = normalize_mac(macAddress)
+                # Extract IP address from the left side
+                # tail of the OID: last 4 integers = IPv4 address
+                oid_parts = left.strip().split('.')
+                ip_parts = oid_parts[-4:]
+                ipAddress = ".".join(ip_parts)
+                mylog('verbose', [f"[{pluginName}] (fallback) IP: {ipAddress} MAC: {macAddress}"])
+                plugin_objects.add_object(
+                    primaryId = handleEmpty(macAddress),
+                    secondaryId = handleEmpty(ipAddress),
+                    watched1 = '(unknown)',
+                    watched2 = handleEmpty(snmpwalkArgs[6]),
+                    extra = handleEmpty(line),
+                    foreignKey = handleEmpty(macAddress)
+                )
+                continue
+            # Expected Format:
+            # ipNetToMediaPhysAddress[3][192.168.1.9] 6C:6C:6C:6C:6C:C1
            elif line.startswith('ipNetToMediaPhysAddress'):
                # Format: snmpwalk -OXsq output
                parts = line.split()
@@ -110,7 +144,7 @@ def main():
                ipAddress = parts[0].split('[')[-1][:-1]
                macAddress = normalize_mac(parts[1])
-                mylog('verbose', [f'[SNMPDSC] IP: {ipAddress} MAC: {macAddress}'])
+                mylog('verbose', [f"[{pluginName}] IP: {ipAddress} MAC: {macAddress}"])
                plugin_objects.add_object(
                    primaryId = handleEmpty(macAddress),
@@ -121,7 +155,7 @@ def main():
                    foreignKey = handleEmpty(macAddress)
                )
-    mylog('verbose', ['[SNMPDSC] Entries found: ', len(plugin_objects)])
+    mylog('verbose', [f"[{pluginName}] Entries found: ", len(plugin_objects)])
    plugin_objects.write_result_file()

View File

@@ -13,7 +13,6 @@ from plugin_helper import Plugin_Objects  # noqa: E402
from logger import mylog, Logger  # noqa: E402
from const import logPath  # noqa: E402
from helper import get_setting_value  # noqa: E402
-from database import DB  # noqa: E402
from models.device_instance import DeviceInstance  # noqa: E402
import conf  # noqa: E402
@@ -44,12 +43,8 @@ def main():
    mylog('verbose', [f'[{pluginName}] broadcast_ips value {broadcast_ips}'])
-    # Create a database connection
-    db = DB()  # instance of class DB
-    db.open()
    # Create a DeviceInstance instance
-    device_handler = DeviceInstance(db)
+    device_handler = DeviceInstance()
    # Retrieve devices
    if 'offline' in devices_to_wake:

View File

@@ -14,7 +14,7 @@ if ! awk '$2 == "/" && $4 ~ /ro/ {found=1} END {exit !found}' /proc/mounts; then
══════════════════════════════════════════════════════════════════════════════
⚠️  Warning: Container is running as read-write, not in read-only mode.
-   Please mount the root filesystem as --read-only or use read-only: true
+   Please mount the root filesystem as --read-only or use read_only: true
https://github.com/jokob-sk/NetAlertX/blob/main/docs/docker-troubleshooting/read-only-filesystem.md
══════════════════════════════════════════════════════════════════════════════
EOF
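The corrected hint distinguishes the `docker run` flag from the Compose key, which is spelled with an underscore. A minimal compose fragment under that assumption (service name and image reference are illustrative):

```yaml
services:
  netalertx:
    image: ghcr.io/jokob-sk/netalertx   # illustrative image reference
    read_only: true                     # compose key is read_only; `--read-only` is the docker run flag
```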

View File

@@ -30,3 +30,4 @@ urllib3
httplib2
gunicorn
git+https://github.com/foreign-sub/aiofreepybox.git
+mcp

View File

@@ -3,11 +3,13 @@ import sys
import os
from flask import Flask, request, jsonify, Response
import requests
from models.device_instance import DeviceInstance  # noqa: E402
from flask_cors import CORS
# Register NetAlertX directories
INSTALL_PATH = os.getenv("NETALERTX_APP", "/app")
-sys.path.extend([f"{INSTALL_PATH}/server"])
+sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from logger import mylog  # noqa: E402
from helper import get_setting_value  # noqa: E402
@@ -63,6 +65,12 @@ from .dbquery_endpoint import read_query, write_query, update_query, delete_quer
from .sync_endpoint import handle_sync_post, handle_sync_get  # noqa: E402
from .logs_endpoint import clean_log  # noqa: E402
from models.user_events_queue_instance import UserEventsQueueInstance  # noqa: E402
from models.event_instance import EventInstance  # noqa: E402
# Reuse the is_mac helper from front/plugins (added to sys.path above)
from plugin_helper import is_mac  # noqa: E402
# mcp_endpoint keeps the MCP helper functions; their routes now live in this module
from messaging.in_app import (  # noqa: E402
    write_notification,
    mark_all_notifications_read,
@@ -71,9 +79,17 @@ from messaging.in_app import (  # noqa: E402
    delete_notification,
    mark_notification_as_read
)
from .mcp_endpoint import (  # noqa: E402
    mcp_sse,
    mcp_messages,
    openapi_spec
)
# tools and mcp routes have been moved into this module (api_server_start)
# Flask application
app = Flask(__name__)
CORS(
    app,
    resources={
@@ -87,16 +103,70 @@ CORS(
        r"/dbquery/*": {"origins": "*"},
        r"/messaging/*": {"origins": "*"},
        r"/events/*": {"origins": "*"},
-        r"/logs/*": {"origins": "*"}
+        r"/logs/*": {"origins": "*"},
        r"/api/tools/*": {"origins": "*"},
        r"/auth/*": {"origins": "*"},
        r"/mcp/*": {"origins": "*"}
    },
    supports_credentials=True,
    allow_headers=["Authorization", "Content-Type"],
)
# -------------------------------------------------------------------------------
# MCP bridge variables + helpers (moved from mcp_routes)
# -------------------------------------------------------------------------------
mcp_openapi_spec_cache = None
BACKEND_PORT = get_setting_value("GRAPHQL_PORT")
API_BASE_URL = f"http://localhost:{BACKEND_PORT}"
def get_openapi_spec_local():
global mcp_openapi_spec_cache
if mcp_openapi_spec_cache:
return mcp_openapi_spec_cache
try:
resp = requests.get(f"{API_BASE_URL}/mcp/openapi.json", timeout=10)
resp.raise_for_status()
mcp_openapi_spec_cache = resp.json()
return mcp_openapi_spec_cache
except Exception as e:
mylog('minimal', [f"Error fetching OpenAPI spec: {e}"])
return None
@app.route('/mcp/sse', methods=['GET', 'POST'])
def api_mcp_sse():
if not is_authorized():
return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
return mcp_sse()
@app.route('/api/mcp/messages', methods=['POST'])
def api_mcp_messages():
if not is_authorized():
return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
return mcp_messages()
# -------------------------------------------------------------------
# Custom handler for 404 - Route not found
# -------------------------------------------------------------------
@app.before_request
def log_request_info():
    """Log details of every incoming request."""
    # Log every request at verbose level; noisy paths could be filtered here if needed
    mylog("verbose", [f"[HTTP] {request.method} {request.path} from {request.remote_addr}"])
    # Filter sensitive headers before logging
    safe_headers = {k: v for k, v in request.headers if k.lower() not in ('authorization', 'cookie', 'x-api-key')}
    mylog("debug", [f"[HTTP] Headers: {safe_headers}"])
    if request.method == "POST":
        # Avoid dumping large bodies; log only the length
        data = request.get_data(as_text=True)
        mylog("debug", [f"[HTTP] Body length: {len(data)} chars"])
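The header-scrubbing comprehension in the request logger can be pulled out and tested on its own. A sketch — the standalone function name is ours, not the module's:

```python
# Credential-bearing headers that must never reach the logs
SENSITIVE_HEADERS = ('authorization', 'cookie', 'x-api-key')

def scrub_headers(headers):
    """Drop sensitive headers before logging, mirroring the
    comprehension used in log_request_info."""
    return {k: v for k, v in headers.items() if k.lower() not in SENSITIVE_HEADERS}

safe = scrub_headers({"Authorization": "Bearer t0ken", "Accept": "*/*", "Cookie": "s=1"})
```

Matching on the lower-cased key keeps the filter robust to clients that send `authorization` or `COOKIE`.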
@app.errorhandler(404)
def not_found(error):
    response = {
@@ -145,11 +215,12 @@ def graphql_endpoint():
    return jsonify(response)
# Tools endpoints are registered via `mcp_endpoint.tools_bp` blueprint.
# --------------------------
# Settings Endpoints
# --------------------------
@app.route("/settings/<setKey>", methods=["GET"])
def api_get_setting(setKey):
    if not is_authorized():
@@ -161,8 +232,7 @@ def api_get_setting(setKey):
# --------------------------
# Device Endpoints
# --------------------------
-@app.route('/mcp/sse/device/<mac>', methods=['GET', 'POST'])
@app.route("/device/<mac>", methods=["GET"])
def api_get_device(mac):
    if not is_authorized():
@@ -228,11 +298,45 @@ def api_update_device_column(mac):
    return update_device_column(mac, column_name, column_value)
@app.route('/mcp/sse/device/<mac>/set-alias', methods=['POST'])
@app.route('/device/<mac>/set-alias', methods=['POST'])
def api_device_set_alias(mac):
"""Set the device alias - convenience wrapper around update_device_column."""
if not is_authorized():
return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
data = request.get_json() or {}
alias = data.get('alias')
if not alias:
return jsonify({"success": False, "message": "ERROR: Missing parameters", "error": "alias is required"}), 400
return update_device_column(mac, 'devName', alias)
@app.route('/mcp/sse/device/open_ports', methods=['POST'])
@app.route('/device/open_ports', methods=['POST'])
def api_device_open_ports():
"""Get stored NMAP open ports for a target IP or MAC."""
if not is_authorized():
return jsonify({"success": False, "error": "Unauthorized"}), 401
data = request.get_json(silent=True) or {}
target = data.get('target')
if not target:
return jsonify({"success": False, "error": "Target (IP or MAC) is required"}), 400
device_handler = DeviceInstance()
# Use DeviceInstance method to get stored open ports
open_ports = device_handler.getOpenPorts(target)
if not open_ports:
return jsonify({"success": False, "error": f"No stored open ports for {target}. Run a scan with `/nettools/trigger-scan`"}), 404
return jsonify({"success": True, "target": target, "open_ports": open_ports})
# --------------------------
# Devices Collections
# --------------------------
@app.route("/devices", methods=["GET"])
def api_get_devices():
    if not is_authorized():
@@ -288,6 +392,7 @@ def api_devices_totals():
    return devices_totals()
@app.route('/mcp/sse/devices/by-status', methods=['GET', 'POST'])
@app.route("/devices/by-status", methods=["GET"])
def api_devices_by_status():
    if not is_authorized():
@@ -298,15 +403,88 @@ def api_devices_by_status():
    return devices_by_status(status)
@app.route('/mcp/sse/devices/search', methods=['POST'])
@app.route('/devices/search', methods=['POST'])
def api_devices_search():
"""Device search: accepts 'query' in JSON and maps to device info/search."""
if not is_authorized():
return jsonify({"error": "Unauthorized"}), 401
data = request.get_json(silent=True) or {}
query = data.get('query')
if not query:
return jsonify({"error": "Missing 'query' parameter"}), 400
if is_mac(query):
device_data = get_device_data(query)
if device_data.status_code == 200:
return jsonify({"success": True, "devices": [device_data.get_json()]})
else:
return jsonify({"success": False, "error": "Device not found"}), 404
# Create fresh DB instance for this thread
device_handler = DeviceInstance()
matches = device_handler.search(query)
if not matches:
return jsonify({"success": False, "error": "No devices found"}), 404
return jsonify({"success": True, "devices": matches})
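The search endpoint branches on whether the query looks like a MAC address before deciding between a direct lookup and a fuzzy search. A plausible shape for such a check — the real `plugin_helper.is_mac` may be more or less permissive:

```python
import re

# Colon- or dash-separated MAC: six pairs of hex digits
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}$")

def is_mac(value):
    """True if value looks like a MAC address; tolerates None."""
    return bool(MAC_RE.match(value or ""))

# The endpoint would take the direct-lookup path for the first query
# and the fuzzy-search path for the second:
mac_query = is_mac("00:11:22:33:44:57")   # direct device lookup
name_query = is_mac("office-printer")     # falls through to search()
```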
@app.route('/mcp/sse/devices/latest', methods=['GET'])
@app.route('/devices/latest', methods=['GET'])
def api_devices_latest():
"""Get latest device (most recent) - maps to DeviceInstance.getLatest()."""
if not is_authorized():
return jsonify({"error": "Unauthorized"}), 401
device_handler = DeviceInstance()
latest = device_handler.getLatest()
if not latest:
return jsonify({"message": "No devices found"}), 404
return jsonify([latest])
@app.route('/mcp/sse/devices/network/topology', methods=['GET'])
@app.route('/devices/network/topology', methods=['GET'])
def api_devices_network_topology():
"""Network topology mapping."""
if not is_authorized():
return jsonify({"error": "Unauthorized"}), 401
device_handler = DeviceInstance()
result = device_handler.getNetworkTopology()
return jsonify(result)
# --------------------------
# Net tools
# --------------------------
@app.route('/mcp/sse/nettools/wakeonlan', methods=['POST'])
@app.route("/nettools/wakeonlan", methods=["POST"])
def api_wakeonlan():
    if not is_authorized():
        return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
-    mac = request.json.get("devMac")
+    data = request.json or {}
mac = data.get("devMac")
ip = data.get("devLastIP") or data.get('ip')
if not mac and ip:
device_handler = DeviceInstance()
dev = device_handler.getByIP(ip)
if not dev or not dev.get('devMac'):
return jsonify({"success": False, "message": "ERROR: Device not found", "error": "MAC not resolved"}), 404
mac = dev.get('devMac')
    return wakeonlan(mac)
@@ -367,11 +545,42 @@ def api_internet_info():
    return internet_info()
@app.route('/mcp/sse/nettools/trigger-scan', methods=['POST'])
@app.route("/nettools/trigger-scan", methods=["GET"])
def api_trigger_scan():
if not is_authorized():
return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
data = request.get_json(silent=True) or {}
scan_type = data.get('type', 'ARPSCAN')
# Validate scan type
loaded_plugins = get_setting_value('LOADED_PLUGINS')
if scan_type not in loaded_plugins:
return jsonify({"success": False, "error": f"Invalid scan type. Must be one of: {', '.join(loaded_plugins)}"}), 400
queue = UserEventsQueueInstance()
action = f"run|{scan_type}"
queue.add_event(action)
return jsonify({"success": True, "message": f"Scan triggered for type: {scan_type}"}), 200
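The trigger endpoint does no scanning itself: it validates the plugin name against `LOADED_PLUGINS` and enqueues a `run|<PLUGIN>` action for the backend loop to pick up. A self-contained sketch of that contract — the queue class here is a stub, not the real `UserEventsQueueInstance`:

```python
class StubQueue:
    """Stand-in for UserEventsQueueInstance: just records actions."""
    def __init__(self):
        self.events = []

    def add_event(self, action):
        self.events.append(action)


def trigger_scan(scan_type, loaded_plugins, queue):
    """Validate the scan type against loaded plugins, then enqueue it."""
    if scan_type not in loaded_plugins:
        return {"success": False,
                "error": f"Invalid scan type. Must be one of: {', '.join(loaded_plugins)}"}
    queue.add_event(f"run|{scan_type}")
    return {"success": True, "message": f"Scan triggered for type: {scan_type}"}


q = StubQueue()
ok = trigger_scan("ARPSCAN", ["ARPSCAN", "NMAPDEV"], q)
bad = trigger_scan("BOGUS", ["ARPSCAN", "NMAPDEV"], q)
```

Decoupling via the queue means the HTTP request returns immediately; the scan result shows up later through the normal plugin result pipeline.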
# --------------------------
# MCP Server
# --------------------------
@app.route('/mcp/sse/openapi.json', methods=['GET'])
def api_openapi_spec():
if not is_authorized():
return jsonify({"success": False, "error": "Unauthorized"}), 401
return openapi_spec()
# --------------------------
# DB query
# --------------------------
@app.route("/dbquery/read", methods=["POST"])
def dbquery_read():
    if not is_authorized():
@@ -394,6 +603,7 @@ def dbquery_write():
    data = request.get_json() or {}
    raw_sql_b64 = data.get("rawSql")
    if not raw_sql_b64:
        return jsonify({"success": False, "message": "ERROR: Missing parameters", "error": "rawSql is required"}), 400
    return write_query(raw_sql_b64)
@@ -459,11 +669,13 @@ def api_delete_online_history():
@app.route("/logs", methods=["DELETE"])
def api_clean_log():
    if not is_authorized():
        return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
    file = request.args.get("file")
    if not file:
        return jsonify({"success": False, "message": "ERROR: Missing parameters", "error": "Missing 'file' query parameter"}), 400
    return clean_log(file)
@@ -498,8 +710,6 @@ def api_add_to_execution_queue():
# --------------------------
# Device Events
# --------------------------
@app.route("/events/create/<mac>", methods=["POST"])
def api_create_event(mac):
    if not is_authorized():
@@ -563,6 +773,44 @@ def api_get_events_totals():
    return get_events_totals(period)
@app.route('/mcp/sse/events/recent', methods=['GET', 'POST'])
@app.route('/events/recent', methods=['GET'])
def api_events_default_24h():
return api_events_recent(24) # Reuse handler
@app.route('/mcp/sse/events/last', methods=['GET', 'POST'])
@app.route('/events/last', methods=['GET'])
def get_last_events():
if not is_authorized():
return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
# Create fresh DB instance for this thread
event_handler = EventInstance()
return event_handler.get_last_n(10)
@app.route('/events/<int:hours>', methods=['GET'])
def api_events_recent(hours):
"""Return events from the last <hours> hours using EventInstance."""
if not is_authorized():
return jsonify({"success": False, "error": "Unauthorized"}), 401
# Validate hours input
if hours <= 0:
return jsonify({"success": False, "error": "Hours must be > 0"}), 400
try:
# Create fresh DB instance for this thread
event_handler = EventInstance()
events = event_handler.get_by_hours(hours)
return jsonify({"success": True, "hours": hours, "count": len(events), "events": events}), 200
except Exception as ex:
return jsonify({"success": False, "error": str(ex)}), 500
# --------------------------
# Sessions
# --------------------------
@@ -744,6 +992,23 @@ def sync_endpoint():
        return jsonify({"success": False, "message": "ERROR: No allowed", "error": "Method Not Allowed"}), 405
# --------------------------
# Auth endpoint
# --------------------------
@app.route("/auth", methods=["GET"])
def check_auth():
    if not is_authorized():
        return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
    # Only GET is registered for this route, so Flask rejects other methods before this handler runs
    return jsonify({"success": True, "message": "Authentication check successful"}), 200
# --------------------------
# Background Server Start
# --------------------------
@@ -775,3 +1040,9 @@ def start_server(graphql_port, app_state):
    # Update the state to indicate the server has started
    app_state = updateState("Process: Idle", None, None, None, 1)
if __name__ == "__main__":
# This block is for running the server directly for testing purposes
# In production, start_server is called from api.py
pass


@@ -228,7 +228,8 @@ def devices_totals():
 def devices_by_status(status=None):
     """
-    Return devices filtered by status.
+    Return devices filtered by status. Returns all if no status provided.
+    Possible statuses: my, connected, favorites, new, down, archived
     """
     conn = get_temp_db_connection()


@@ -0,0 +1,204 @@
#!/usr/bin/env python
import threading
from flask import Blueprint, request, jsonify, Response, stream_with_context
from helper import get_setting_value
from helper import mylog
# from .events_endpoint import get_events # will import locally where needed
import requests
import json
import uuid
import queue
# Blueprints
mcp_bp = Blueprint('mcp', __name__)
tools_bp = Blueprint('tools', __name__)
mcp_sessions = {}
mcp_sessions_lock = threading.Lock()
def check_auth():
token = request.headers.get("Authorization")
expected_token = f"Bearer {get_setting_value('API_TOKEN')}"
return token == expected_token
# --------------------------
# Specs
# --------------------------
def openapi_spec():
# Spec matching actual available routes for MCP tools
mylog("verbose", ["[MCP] OpenAPI spec requested"])
spec = {
"openapi": "3.0.0",
"info": {"title": "NetAlertX Tools", "version": "1.1.0"},
"servers": [{"url": "/"}],
"paths": {
"/devices/by-status": {"post": {"operationId": "list_devices"}},
"/device/{mac}": {"post": {"operationId": "get_device_info"}},
"/devices/search": {"post": {"operationId": "search_devices"}},
"/devices/latest": {"get": {"operationId": "get_latest_device"}},
"/nettools/trigger-scan": {"post": {"operationId": "trigger_scan"}},
"/device/open_ports": {"post": {"operationId": "get_open_ports"}},
"/devices/network/topology": {"get": {"operationId": "get_network_topology"}},
"/events/recent": {"get": {"operationId": "get_recent_alerts"}, "post": {"operationId": "get_recent_alerts"}},
"/events/last": {"get": {"operationId": "get_last_events"}, "post": {"operationId": "get_last_events"}},
"/device/{mac}/set-alias": {"post": {"operationId": "set_device_alias"}},
"/nettools/wakeonlan": {"post": {"operationId": "wol_wake_device"}}
}
}
return jsonify(spec)
# --------------------------
# MCP SSE/JSON-RPC Endpoint
# --------------------------
# Sessions for SSE
_openapi_spec_cache = None
API_BASE_URL = f"http://localhost:{get_setting_value('GRAPHQL_PORT')}"
def get_openapi_spec():
global _openapi_spec_cache
if _openapi_spec_cache:
return _openapi_spec_cache
try:
r = requests.get(f"{API_BASE_URL}/mcp/openapi.json", timeout=10)
r.raise_for_status()
_openapi_spec_cache = r.json()
return _openapi_spec_cache
except Exception:
return None
def map_openapi_to_mcp_tools(spec):
tools = []
if not spec or 'paths' not in spec:
return tools
for path, methods in spec['paths'].items():
for method, details in methods.items():
if 'operationId' in details:
tool = {'name': details['operationId'], 'description': details.get('description', ''), 'inputSchema': {'type': 'object', 'properties': {}, 'required': []}}
if 'requestBody' in details:
content = details['requestBody'].get('content', {})
if 'application/json' in content:
schema = content['application/json'].get('schema', {})
tool['inputSchema'] = schema.copy()
if 'parameters' in details:
for param in details['parameters']:
if param.get('in') == 'query':
tool['inputSchema']['properties'][param['name']] = {'type': param.get('schema', {}).get('type', 'string'), 'description': param.get('description', '')}
if param.get('required'):
tool['inputSchema']['required'].append(param['name'])
tools.append(tool)
return tools
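To illustrate what `map_openapi_to_mcp_tools` produces, here is a condensed re-implementation of the same query-parameter mapping run against a tiny spec (the sample spec below is an illustrative subset, not the full `openapi_spec()` output):

```python
def map_tools(spec):
    # Condensed version of map_openapi_to_mcp_tools: one tool per operationId,
    # with query parameters folded into the tool's inputSchema.
    tools = []
    for path, methods in spec.get('paths', {}).items():
        for method, details in methods.items():
            if 'operationId' not in details:
                continue
            schema = {'type': 'object', 'properties': {}, 'required': []}
            for param in details.get('parameters', []):
                if param.get('in') == 'query':
                    schema['properties'][param['name']] = {
                        'type': param.get('schema', {}).get('type', 'string'),
                        'description': param.get('description', ''),
                    }
                    if param.get('required'):
                        schema['required'].append(param['name'])
            tools.append({'name': details['operationId'],
                          'description': details.get('description', ''),
                          'inputSchema': schema})
    return tools

sample_spec = {'paths': {'/events/recent': {'get': {
    'operationId': 'get_recent_alerts',
    'parameters': [{'name': 'hours', 'in': 'query', 'required': True,
                    'schema': {'type': 'integer'}}],
}}}}
tools = map_tools(sample_spec)
assert tools[0]['name'] == 'get_recent_alerts'
assert tools[0]['inputSchema']['required'] == ['hours']
```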
def process_mcp_request(data):
method = data.get('method')
msg_id = data.get('id')
if method == 'initialize':
return {'jsonrpc': '2.0', 'id': msg_id, 'result': {'protocolVersion': '2024-11-05', 'capabilities': {'tools': {}}, 'serverInfo': {'name': 'NetAlertX', 'version': '1.0.0'}}}
if method == 'notifications/initialized':
return None
if method == 'tools/list':
spec = get_openapi_spec()
tools = map_openapi_to_mcp_tools(spec)
return {'jsonrpc': '2.0', 'id': msg_id, 'result': {'tools': tools}}
if method == 'tools/call':
params = data.get('params', {})
tool_name = params.get('name')
tool_args = params.get('arguments', {})
spec = get_openapi_spec()
target_path = None
target_method = None
if spec and 'paths' in spec:
for path, methods in spec['paths'].items():
for m, details in methods.items():
if details.get('operationId') == tool_name:
target_path = path
target_method = m.upper()
break
if target_path:
break
if not target_path:
return {'jsonrpc': '2.0', 'id': msg_id, 'error': {'code': -32601, 'message': f"Tool {tool_name} not found"}}
try:
headers = {'Content-Type': 'application/json'}
if 'Authorization' in request.headers:
headers['Authorization'] = request.headers['Authorization']
url = f"{API_BASE_URL}{target_path}"
if target_method == 'POST':
api_res = requests.post(url, json=tool_args, headers=headers, timeout=30)
else:
api_res = requests.get(url, params=tool_args, headers=headers, timeout=30)
content = []
try:
json_content = api_res.json()
content.append({'type': 'text', 'text': json.dumps(json_content, indent=2)})
except Exception:
content.append({'type': 'text', 'text': api_res.text})
is_error = api_res.status_code >= 400
return {'jsonrpc': '2.0', 'id': msg_id, 'result': {'content': content, 'isError': is_error}}
except Exception as e:
return {'jsonrpc': '2.0', 'id': msg_id, 'result': {'content': [{'type': 'text', 'text': f"Error calling tool: {str(e)}"}], 'isError': True}}
if method == 'ping':
return {'jsonrpc': '2.0', 'id': msg_id, 'result': {}}
if msg_id:
return {'jsonrpc': '2.0', 'id': msg_id, 'error': {'code': -32601, 'message': 'Method not found'}}
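The `tools/call` branch above resolves a tool name back to an HTTP route by scanning the cached spec for a matching `operationId`. The lookup in isolation (the sample spec is an illustrative fragment):

```python
def find_route(spec, tool_name):
    # Mirror of the tools/call lookup: scan every path/method pair until
    # the operationId matches, then return the path and upper-cased verb.
    for path, methods in spec.get('paths', {}).items():
        for method, details in methods.items():
            if details.get('operationId') == tool_name:
                return path, method.upper()
    return None, None

sample_spec = {'paths': {'/devices/search': {'post': {'operationId': 'search_devices'}}}}
assert find_route(sample_spec, 'search_devices') == ('/devices/search', 'POST')
# Unknown tools yield no route, which maps to the -32601 JSON-RPC error above.
assert find_route(sample_spec, 'missing_tool') == (None, None)
```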
def mcp_messages():
session_id = request.args.get('session_id')
if not session_id:
return jsonify({"error": "Missing session_id"}), 400
with mcp_sessions_lock:
if session_id not in mcp_sessions:
return jsonify({"error": "Session not found"}), 404
q = mcp_sessions[session_id]
data = request.json
if not data:
return jsonify({"error": "Invalid JSON"}), 400
response = process_mcp_request(data)
if response:
q.put(response)
return jsonify({"status": "accepted"}), 202
def mcp_sse():
if request.method == 'POST':
try:
data = request.get_json(silent=True)
if data and 'method' in data and 'jsonrpc' in data:
response = process_mcp_request(data)
if response:
return jsonify(response)
else:
return '', 202
except Exception as e:
mylog("none", f'SSE POST processing error: {e}')
return jsonify({'status': 'ok', 'message': 'MCP SSE endpoint active'}), 200
session_id = uuid.uuid4().hex
q = queue.Queue()
with mcp_sessions_lock:
mcp_sessions[session_id] = q
def stream():
yield f"event: endpoint\ndata: /mcp/messages?session_id={session_id}\n\n"
try:
while True:
try:
message = q.get(timeout=20)
yield f"event: message\ndata: {json.dumps(message)}\n\n"
except queue.Empty:
yield ": keep-alive\n\n"
except GeneratorExit:
with mcp_sessions_lock:
if session_id in mcp_sessions:
del mcp_sessions[session_id]
return Response(stream_with_context(stream()), mimetype='text/event-stream')
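The `stream()` generator frames each queued response using Server-Sent Events conventions: an `event:` line, a `data:` line, then a blank line. A standalone sketch of producing and parsing one such frame (no Flask required):

```python
import json

def sse_frame(event, data):
    # One SSE frame: event name line, data line, terminated by a blank line.
    return f"event: {event}\ndata: {data}\n\n"

msg = {'jsonrpc': '2.0', 'id': 1, 'result': {}}
frame = sse_frame('message', json.dumps(msg))

# A client splits frames on the blank line and strips the field prefixes.
lines = frame.strip().split('\n')
assert lines[0] == 'event: message'
assert json.loads(lines[1][len('data: '):]) == msg
```

Lines starting with `:` (like the `: keep-alive` comment above) carry no data and exist only to keep the connection open through proxies.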


@@ -0,0 +1,304 @@
"""MCP bridge routes exposing NetAlertX tool endpoints via JSON-RPC."""
import json
import uuid
import queue
import requests
import threading
import logging
from flask import Blueprint, request, Response, stream_with_context, jsonify
from helper import get_setting_value
mcp_bp = Blueprint('mcp', __name__)
# Store active sessions: session_id -> Queue
sessions = {}
sessions_lock = threading.Lock()
# Cache for OpenAPI spec to avoid fetching on every request
openapi_spec_cache = None
BACKEND_PORT = get_setting_value("GRAPHQL_PORT")
API_BASE_URL = f"http://localhost:{BACKEND_PORT}/api/tools"
def get_openapi_spec():
"""Fetch and cache the tools OpenAPI specification from the local API server."""
global openapi_spec_cache
if openapi_spec_cache:
return openapi_spec_cache
try:
# Fetch from local server
# We use localhost because this code runs on the server
response = requests.get(f"{API_BASE_URL}/openapi.json", timeout=10)
response.raise_for_status()
openapi_spec_cache = response.json()
return openapi_spec_cache
except Exception as e:
print(f"Error fetching OpenAPI spec: {e}")
return None
def map_openapi_to_mcp_tools(spec):
"""Convert OpenAPI paths into MCP tool descriptors."""
tools = []
if not spec or "paths" not in spec:
return tools
for path, methods in spec["paths"].items():
for method, details in methods.items():
if "operationId" in details:
tool = {
"name": details["operationId"],
"description": details.get("description", details.get("summary", "")),
"inputSchema": {
"type": "object",
"properties": {},
"required": []
}
}
# Extract parameters from requestBody if present
if "requestBody" in details:
content = details["requestBody"].get("content", {})
if "application/json" in content:
schema = content["application/json"].get("schema", {})
tool["inputSchema"] = schema.copy()
if "properties" not in tool["inputSchema"]:
tool["inputSchema"]["properties"] = {}
if "required" not in tool["inputSchema"]:
tool["inputSchema"]["required"] = []
# Extract parameters from 'parameters' list (query/path params) - simplistic support
if "parameters" in details:
for param in details["parameters"]:
if param.get("in") == "query":
tool["inputSchema"]["properties"][param["name"]] = {
"type": param.get("schema", {}).get("type", "string"),
"description": param.get("description", "")
}
if param.get("required"):
if "required" not in tool["inputSchema"]:
tool["inputSchema"]["required"] = []
tool["inputSchema"]["required"].append(param["name"])
tools.append(tool)
return tools
def process_mcp_request(data):
"""Handle incoming MCP JSON-RPC requests and route them to tools."""
method = data.get("method")
msg_id = data.get("id")
response = None
if method == "initialize":
response = {
"jsonrpc": "2.0",
"id": msg_id,
"result": {
"protocolVersion": "2024-11-05",
"capabilities": {
"tools": {}
},
"serverInfo": {
"name": "NetAlertX",
"version": "1.0.0"
}
}
}
elif method == "notifications/initialized":
# No response needed for notification
pass
elif method == "tools/list":
spec = get_openapi_spec()
tools = map_openapi_to_mcp_tools(spec)
response = {
"jsonrpc": "2.0",
"id": msg_id,
"result": {
"tools": tools
}
}
elif method == "tools/call":
params = data.get("params", {})
tool_name = params.get("name")
tool_args = params.get("arguments", {})
# Find the endpoint for this tool
spec = get_openapi_spec()
target_path = None
target_method = None
if spec and "paths" in spec:
for path, methods in spec["paths"].items():
for m, details in methods.items():
if details.get("operationId") == tool_name:
target_path = path
target_method = m.upper()
break
if target_path:
break
if target_path:
try:
# Make the request to the local API
# We forward the Authorization header from the incoming request if present
headers = {
"Content-Type": "application/json"
}
if "Authorization" in request.headers:
headers["Authorization"] = request.headers["Authorization"]
url = f"{API_BASE_URL}{target_path}"
if target_method == "POST":
api_res = requests.post(url, json=tool_args, headers=headers, timeout=30)
elif target_method == "GET":
api_res = requests.get(url, params=tool_args, headers=headers, timeout=30)
else:
api_res = None
if api_res:
content = []
try:
json_content = api_res.json()
content.append({
"type": "text",
"text": json.dumps(json_content, indent=2)
})
except (ValueError, json.JSONDecodeError):
content.append({
"type": "text",
"text": api_res.text
})
is_error = api_res.status_code >= 400
response = {
"jsonrpc": "2.0",
"id": msg_id,
"result": {
"content": content,
"isError": is_error
}
}
else:
response = {
"jsonrpc": "2.0",
"id": msg_id,
"error": {"code": -32601, "message": f"Method {target_method} not supported"}
}
except Exception as e:
response = {
"jsonrpc": "2.0",
"id": msg_id,
"result": {
"content": [{"type": "text", "text": f"Error calling tool: {str(e)}"}],
"isError": True
}
}
else:
response = {
"jsonrpc": "2.0",
"id": msg_id,
"error": {"code": -32601, "message": f"Tool {tool_name} not found"}
}
elif method == "ping":
response = {
"jsonrpc": "2.0",
"id": msg_id,
"result": {}
}
else:
# Unknown method
if msg_id: # Only respond if it's a request (has id)
response = {
"jsonrpc": "2.0",
"id": msg_id,
"error": {"code": -32601, "message": "Method not found"}
}
return response
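One subtlety in `process_mcp_request`: JSON-RPC notifications (messages without an `id`) must get no response at all, while unknown *requests* get a `-32601` error. A minimal standalone sketch of that dispatch rule (`dispatch` is an illustrative name):

```python
def dispatch(data):
    # Notifications (no "id") are dropped silently; unknown methods that do
    # carry an id get a JSON-RPC "Method not found" error, as above.
    method = data.get("method")
    msg_id = data.get("id")
    if method == "ping":
        return {"jsonrpc": "2.0", "id": msg_id, "result": {}}
    if msg_id:
        return {"jsonrpc": "2.0", "id": msg_id,
                "error": {"code": -32601, "message": "Method not found"}}
    return None

assert dispatch({"method": "ping", "id": 7})["result"] == {}
assert dispatch({"method": "notifications/initialized"}) is None
assert dispatch({"method": "bogus", "id": 1})["error"]["code"] == -32601
```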
@mcp_bp.route('/sse', methods=['GET', 'POST'])
def handle_sse():
"""Expose an SSE endpoint that streams MCP responses to connected clients."""
if request.method == 'POST':
# Handle verification or keep-alive pings
try:
data = request.get_json(silent=True)
if data and "method" in data and "jsonrpc" in data:
response = process_mcp_request(data)
if response:
return jsonify(response)
else:
# Notification or no response needed
return "", 202
except Exception as e:
# Log but don't fail - malformed requests shouldn't crash the endpoint
logging.getLogger(__name__).debug(f"SSE POST processing error: {e}")
return jsonify({"status": "ok", "message": "MCP SSE endpoint active"}), 200
session_id = uuid.uuid4().hex
q = queue.Queue()
with sessions_lock:
sessions[session_id] = q
def stream():
"""Yield SSE messages for queued MCP responses until the client disconnects."""
# Send the endpoint event
# The client should POST to /api/mcp/messages?session_id=<session_id>
yield f"event: endpoint\ndata: /api/mcp/messages?session_id={session_id}\n\n"
try:
while True:
try:
# Wait for messages
message = q.get(timeout=20) # Keep-alive timeout
yield f"event: message\ndata: {json.dumps(message)}\n\n"
except queue.Empty:
# Send keep-alive comment
yield ": keep-alive\n\n"
except GeneratorExit:
with sessions_lock:
if session_id in sessions:
del sessions[session_id]
return Response(stream_with_context(stream()), mimetype='text/event-stream')
@mcp_bp.route('/messages', methods=['POST'])
def handle_messages():
"""Receive MCP JSON-RPC messages and enqueue responses for an SSE session."""
session_id = request.args.get('session_id')
if not session_id:
return jsonify({"error": "Missing session_id"}), 400
with sessions_lock:
if session_id not in sessions:
return jsonify({"error": "Session not found"}), 404
q = sessions[session_id]
data = request.json
if not data:
return jsonify({"error": "Invalid JSON"}), 400
response = process_mcp_request(data)
if response:
q.put(response)
return jsonify({"status": "accepted"}), 202


@@ -1,83 +1,134 @@
+from front.plugins.plugin_helper import is_mac
 from logger import mylog
+from models.plugin_object_instance import PluginObjectInstance
+from database import get_temp_db_connection
+
+# -------------------------------------------------------------------------------
+# Device object handling (WIP)
+# -------------------------------------------------------------------------------
 class DeviceInstance:
-    def __init__(self, db):
-        self.db = db
-    # Get all
+    # --- helpers --------------------------------------------------------------
+    def _fetchall(self, query, params=()):
+        conn = get_temp_db_connection()
+        rows = conn.execute(query, params).fetchall()
+        conn.close()
+        return [dict(r) for r in rows]
+
+    def _fetchone(self, query, params=()):
+        conn = get_temp_db_connection()
+        row = conn.execute(query, params).fetchone()
+        conn.close()
+        return dict(row) if row else None
+
+    def _execute(self, query, params=()):
+        conn = get_temp_db_connection()
+        cur = conn.cursor()
+        cur.execute(query, params)
+        conn.commit()
+        conn.close()
+
+    # --- public API -----------------------------------------------------------
     def getAll(self):
-        self.db.sql.execute("""
-            SELECT * FROM Devices
-        """)
-        return self.db.sql.fetchall()
-    # Get all with unknown names
+        return self._fetchall("SELECT * FROM Devices")
+
     def getUnknown(self):
-        self.db.sql.execute("""
-            SELECT * FROM Devices WHERE devName in ("(unknown)", "(name not found)", "" )
+        return self._fetchall("""
+            SELECT * FROM Devices
+            WHERE devName IN ("(unknown)", "(name not found)", "")
         """)
-        return self.db.sql.fetchall()
-    # Get specific column value based on devMac
+
     def getValueWithMac(self, column_name, devMac):
-        query = f"SELECT {column_name} FROM Devices WHERE devMac = ?"
-        self.db.sql.execute(query, (devMac,))
-        result = self.db.sql.fetchone()
-        return result[column_name] if result else None
-    # Get all down
+        row = self._fetchone(f"""
+            SELECT {column_name} FROM Devices WHERE devMac = ?
+        """, (devMac,))
+        return row.get(column_name) if row else None
+
     def getDown(self):
-        self.db.sql.execute("""
-            SELECT * FROM Devices WHERE devAlertDown = 1 and devPresentLastScan = 0
+        return self._fetchall("""
+            SELECT * FROM Devices
+            WHERE devAlertDown = 1 AND devPresentLastScan = 0
         """)
-        return self.db.sql.fetchall()
-    # Get all down
+
     def getOffline(self):
-        self.db.sql.execute("""
-            SELECT * FROM Devices WHERE devPresentLastScan = 0
+        return self._fetchall("""
+            SELECT * FROM Devices
+            WHERE devPresentLastScan = 0
         """)
-        return self.db.sql.fetchall()
-    # Get a device by devGUID
+
     def getByGUID(self, devGUID):
-        self.db.sql.execute("SELECT * FROM Devices WHERE devGUID = ?", (devGUID,))
-        result = self.db.sql.fetchone()
-        return dict(result) if result else None
-    # Check if a device exists by devGUID
+        return self._fetchone("""
+            SELECT * FROM Devices WHERE devGUID = ?
+        """, (devGUID,))
+
     def exists(self, devGUID):
-        self.db.sql.execute(
-            "SELECT COUNT(*) AS count FROM Devices WHERE devGUID = ?", (devGUID,)
-        )
-        result = self.db.sql.fetchone()
-        return result["count"] > 0
+        row = self._fetchone("""
+            SELECT COUNT(*) as count FROM Devices WHERE devGUID = ?
+        """, (devGUID,))
+        return row['count'] > 0 if row else False
+
+    def getByIP(self, ip):
+        return self._fetchone("""
+            SELECT * FROM Devices WHERE devLastIP = ?
+        """, (ip,))
+
+    def search(self, query):
+        like = f"%{query}%"
+        return self._fetchall("""
+            SELECT * FROM Devices
+            WHERE devMac LIKE ? OR devName LIKE ? OR devLastIP LIKE ?
+        """, (like, like, like))
+
+    def getLatest(self):
+        return self._fetchone("""
+            SELECT * FROM Devices
+            ORDER BY devFirstConnection DESC LIMIT 1
+        """)
+
+    def getNetworkTopology(self):
+        rows = self._fetchall("""
+            SELECT devName, devMac, devParentMAC, devParentPort, devVendor FROM Devices
+        """)
+        nodes = [{"id": r["devMac"], "name": r["devName"], "vendor": r["devVendor"]} for r in rows]
+        links = [{"source": r["devParentMAC"], "target": r["devMac"], "port": r["devParentPort"]}
+                 for r in rows if r["devParentMAC"]]
+        return {"nodes": nodes, "links": links}
+
-    # Update a specific field for a device
     def updateField(self, devGUID, field, value):
         if not self.exists(devGUID):
-            m = f"[Device] In 'updateField': GUID {devGUID} not found."
-            mylog("none", m)
-            raise ValueError(m)
-        self.db.sql.execute(
-            f"""
-            UPDATE Devices SET {field} = ? WHERE devGUID = ?
-            """,
-            (value, devGUID),
-        )
-        self.db.commitDB()
-    # Delete a device by devGUID
+            msg = f"[Device] updateField: GUID {devGUID} not found"
+            mylog("none", msg)
+            raise ValueError(msg)
+        self._execute(f"UPDATE Devices SET {field}=? WHERE devGUID=?", (value, devGUID))
+
     def delete(self, devGUID):
         if not self.exists(devGUID):
-            m = f"[Device] In 'delete': GUID {devGUID} not found."
-            mylog("none", m)
-            raise ValueError(m)
-        self.db.sql.execute("DELETE FROM Devices WHERE devGUID = ?", (devGUID,))
-        self.db.commitDB()
+            msg = f"[Device] delete: GUID {devGUID} not found"
+            mylog("none", msg)
+            raise ValueError(msg)
+        self._execute("DELETE FROM Devices WHERE devGUID=?", (devGUID,))
+
+    def resolvePrimaryID(self, target):
+        if is_mac(target):
+            return target.lower()
+        dev = self.getByIP(target)
+        return dev['devMac'].lower() if dev else None
+
+    def getOpenPorts(self, target):
+        primary = self.resolvePrimaryID(target)
+        if not primary:
+            return []
+        objs = PluginObjectInstance().getByField(
+            plugPrefix='NMAP',
+            matchedColumn='Object_PrimaryID',
+            matchedKey=primary,
+            returnFields=['Object_SecondaryID', 'Watched_Value2']
+        )
+        ports = []
+        for o in objs:
+            port = int(o.get('Object_SecondaryID') or 0)
+            ports.append({"port": port, "service": o.get('Watched_Value2', '')})
+        return ports
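`getNetworkTopology` derives one node per device and one link per device that has a `devParentMAC`. The same transformation applied to sample rows (field names are from the query above; the values are invented for illustration):

```python
rows = [
    {"devName": "router", "devMac": "aa:aa", "devParentMAC": None,
     "devParentPort": None, "devVendor": "AcmeNet"},
    {"devName": "laptop", "devMac": "bb:bb", "devParentMAC": "aa:aa",
     "devParentPort": 2, "devVendor": "Acme"},
]
nodes = [{"id": r["devMac"], "name": r["devName"], "vendor": r["devVendor"]} for r in rows]
links = [{"source": r["devParentMAC"], "target": r["devMac"], "port": r["devParentPort"]}
         for r in rows if r["devParentMAC"]]

# The root has no parent, so it contributes a node but no link.
assert len(nodes) == 2 and len(links) == 1
assert links[0] == {"source": "aa:aa", "target": "bb:bb", "port": 2}
```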


@@ -0,0 +1,107 @@
from datetime import datetime, timedelta
from logger import mylog
from database import get_temp_db_connection
# -------------------------------------------------------------------------------
# Event handling (Matches table: Events)
# -------------------------------------------------------------------------------
class EventInstance:
def _conn(self):
"""Always return a new DB connection (thread-safe)."""
return get_temp_db_connection()
def _rows_to_list(self, rows):
return [dict(r) for r in rows]
# Get all events
def get_all(self):
conn = self._conn()
rows = conn.execute(
"SELECT * FROM Events ORDER BY eve_DateTime DESC"
).fetchall()
conn.close()
return self._rows_to_list(rows)
# --- Get last n events ---
def get_last_n(self, n=10):
conn = self._conn()
rows = conn.execute("""
SELECT * FROM Events
ORDER BY eve_DateTime DESC
LIMIT ?
""", (n,)).fetchall()
conn.close()
return self._rows_to_list(rows)
# --- Specific helper for last 10 ---
def get_last(self):
return self.get_last_n(10)
# Get events in the last 24h
def get_recent(self):
since = datetime.now() - timedelta(hours=24)
conn = self._conn()
rows = conn.execute("""
SELECT * FROM Events
WHERE eve_DateTime >= ?
ORDER BY eve_DateTime DESC
""", (since,)).fetchall()
conn.close()
return self._rows_to_list(rows)
# Get events from last N hours
def get_by_hours(self, hours: int):
if hours <= 0:
mylog("warn", f"[Events] get_by_hours({hours}) -> invalid value")
return []
since = datetime.now() - timedelta(hours=hours)
conn = self._conn()
rows = conn.execute("""
SELECT * FROM Events
WHERE eve_DateTime >= ?
ORDER BY eve_DateTime DESC
""", (since,)).fetchall()
conn.close()
return self._rows_to_list(rows)
# Get events in a date range
def get_by_range(self, start: datetime, end: datetime):
if end < start:
mylog("error", f"[Events] get_by_range invalid: {start} > {end}")
raise ValueError("Start must not be after end")
conn = self._conn()
rows = conn.execute("""
SELECT * FROM Events
WHERE eve_DateTime BETWEEN ? AND ?
ORDER BY eve_DateTime DESC
""", (start, end)).fetchall()
conn.close()
return self._rows_to_list(rows)
# Insert new event
def add(self, mac, ip, eventType, info="", pendingAlert=True, pairRow=None):
conn = self._conn()
conn.execute("""
INSERT INTO Events (
eve_MAC, eve_IP, eve_DateTime,
eve_EventType, eve_AdditionalInfo,
eve_PendingAlertEmail, eve_PairEventRowid
) VALUES (?,?,?,?,?,?,?)
""", (mac, ip, datetime.now(), eventType, info,
1 if pendingAlert else 0, pairRow))
conn.commit()
conn.close()
# Delete old events
def delete_older_than(self, days: int):
cutoff = datetime.now() - timedelta(days=days)
conn = self._conn()
result = conn.execute("DELETE FROM Events WHERE eve_DateTime < ?", (cutoff,))
conn.commit()
deleted_count = result.rowcount
conn.close()
return deleted_count
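The time-window queries above (`get_recent`, `get_by_hours`, `delete_older_than`) all follow the same pattern: compute a `datetime` cutoff and compare it against `eve_DateTime` in SQL. A self-contained sketch of that pattern using an in-memory SQLite table (the schema is reduced to the one column the cutoff uses):

```python
import sqlite3
from datetime import datetime, timedelta

# In-memory stand-in for the Events table.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE Events (eve_DateTime TIMESTAMP)")
now = datetime.now()
conn.execute("INSERT INTO Events VALUES (?)", (now - timedelta(hours=1),))
conn.execute("INSERT INTO Events VALUES (?)", (now - timedelta(hours=48),))

# Same shape as get_by_hours(24): cutoff computed in Python, filtered in SQL.
since = now - timedelta(hours=24)
rows = conn.execute(
    "SELECT * FROM Events WHERE eve_DateTime >= ? ORDER BY eve_DateTime DESC",
    (since,),
).fetchall()
assert len(rows) == 1  # only the 1-hour-old event survives the cutoff
conn.close()
```

This works because SQLite stores the `datetime` parameters as ISO-8601 strings, which sort chronologically under string comparison.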


@@ -1,70 +1,91 @@
 from logger import mylog
+from database import get_temp_db_connection
 # -------------------------------------------------------------------------------
-# Plugin object handling (WIP)
+# Plugin object handling (THREAD-SAFE REWRITE)
 # -------------------------------------------------------------------------------
 class PluginObjectInstance:
-    def __init__(self, db):
-        self.db = db
-    # Get all plugin objects
+    # -------------- Internal DB helper wrappers --------------------------------
+    def _fetchall(self, query, params=()):
+        conn = get_temp_db_connection()
+        rows = conn.execute(query, params).fetchall()
+        conn.close()
+        return [dict(r) for r in rows]
+
+    def _fetchone(self, query, params=()):
+        conn = get_temp_db_connection()
+        row = conn.execute(query, params).fetchone()
+        conn.close()
+        return dict(row) if row else None
+
+    def _execute(self, query, params=()):
+        conn = get_temp_db_connection()
+        conn.execute(query, params)
+        conn.commit()
+        conn.close()
+
+    # ---------------------------------------------------------------------------
+    # Public API — identical behaviour, now thread-safe + self-contained
+    # ---------------------------------------------------------------------------
     def getAll(self):
-        self.db.sql.execute("""
-            SELECT * FROM Plugins_Objects
-        """)
-        return self.db.sql.fetchall()
-    # Get plugin object by ObjectGUID
+        return self._fetchall("SELECT * FROM Plugins_Objects")
+
     def getByGUID(self, ObjectGUID):
-        self.db.sql.execute(
+        return self._fetchone(
             "SELECT * FROM Plugins_Objects WHERE ObjectGUID = ?", (ObjectGUID,)
         )
-        result = self.db.sql.fetchone()
-        return dict(result) if result else None
-    # Check if a plugin object exists by ObjectGUID
+
     def exists(self, ObjectGUID):
-        self.db.sql.execute(
-            "SELECT COUNT(*) AS count FROM Plugins_Objects WHERE ObjectGUID = ?",
-            (ObjectGUID,),
-        )
-        result = self.db.sql.fetchone()
-        return result["count"] > 0
-    # Get objects by plugin name
+        row = self._fetchone("""
+            SELECT COUNT(*) AS count FROM Plugins_Objects WHERE ObjectGUID = ?
+        """, (ObjectGUID,))
+        return row["count"] > 0 if row else False
+
     def getByPlugin(self, plugin):
-        self.db.sql.execute("SELECT * FROM Plugins_Objects WHERE Plugin = ?", (plugin,))
-        return self.db.sql.fetchall()
+        return self._fetchall(
+            "SELECT * FROM Plugins_Objects WHERE Plugin = ?", (plugin,)
+        )
+
+    def getByField(self, plugPrefix, matchedColumn, matchedKey, returnFields=None):
+        rows = self._fetchall(
+            f"SELECT * FROM Plugins_Objects WHERE Plugin = ? AND {matchedColumn} = ?",
+            (plugPrefix, matchedKey.lower())
+        )
+        if not returnFields:
+            return rows
+        return [{f: row.get(f) for f in returnFields} for row in rows]
+
+    def getByPrimary(self, plugin, primary_id):
+        return self._fetchall("""
+            SELECT * FROM Plugins_Objects
+            WHERE Plugin = ? AND Object_PrimaryID = ?
+        """, (plugin, primary_id))
+
-    # Get objects by status
     def getByStatus(self, status):
-        self.db.sql.execute("SELECT * FROM Plugins_Objects WHERE Status = ?", (status,))
-        return self.db.sql.fetchall()
-    # Update a specific field for a plugin object
+        return self._fetchall("""
+            SELECT * FROM Plugins_Objects WHERE Status = ?
+        """, (status,))
+
     def updateField(self, ObjectGUID, field, value):
         if not self.exists(ObjectGUID):
-            m = f"[PluginObject] In 'updateField': GUID {ObjectGUID} not found."
-            mylog("none", m)
-            raise ValueError(m)
-        self.db.sql.execute(
-            f"""
-            UPDATE Plugins_Objects SET {field} = ? WHERE ObjectGUID = ?
-            """,
-            (value, ObjectGUID),
-        )
-        self.db.commitDB()
-    # Delete a plugin object by ObjectGUID
+            msg = f"[PluginObject] updateField: GUID {ObjectGUID} not found."
+            mylog("none", msg)
+            raise ValueError(msg)
+        self._execute(
+            f"UPDATE Plugins_Objects SET {field}=? WHERE ObjectGUID=?",
+            (value, ObjectGUID)
+        )
+
     def delete(self, ObjectGUID):
         if not self.exists(ObjectGUID):
-            m = f"[PluginObject] In 'delete': GUID {ObjectGUID} not found."
-            mylog("none", m)
-            raise ValueError(m)
-        self.db.sql.execute(
-            "DELETE FROM Plugins_Objects WHERE ObjectGUID = ?", (ObjectGUID,)
-        )
-        self.db.commitDB()
+            msg = f"[PluginObject] delete: GUID {ObjectGUID} not found."
+            mylog("none", msg)
+            raise ValueError(msg)
+        self._execute("DELETE FROM Plugins_Objects WHERE ObjectGUID=?", (ObjectGUID,))
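`getByField` optionally projects each row down to the requested `returnFields`; the projection is a per-row dict comprehension. In isolation (the sample row values are invented for illustration):

```python
rows = [
    {"Plugin": "NMAP", "Object_PrimaryID": "aa:bb", "Object_SecondaryID": "22",
     "Watched_Value2": "ssh"},
]
returnFields = ["Object_SecondaryID", "Watched_Value2"]

# Same projection getByField applies when returnFields is given.
projected = [{f: row.get(f) for f in returnFields} for row in rows]
assert projected == [{"Object_SecondaryID": "22", "Watched_Value2": "ssh"}]
```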


@@ -650,7 +650,7 @@ def update_devices_names(pm):
     sql = pm.db.sql
     resolver = NameResolver(pm.db)
-    device_handler = DeviceInstance(pm.db)
+    device_handler = DeviceInstance()
     nameNotFound = "(name not found)"


@@ -42,13 +42,13 @@ class UpdateFieldAction(Action):
         # currently unused
         if isinstance(obj, dict) and "ObjectGUID" in obj:
             mylog("debug", f"[WF] Updating Object '{obj}' ")
-            plugin_instance = PluginObjectInstance(self.db)
+            plugin_instance = PluginObjectInstance()
             plugin_instance.updateField(obj["ObjectGUID"], self.field, self.value)
             processed = True
         elif isinstance(obj, dict) and "devGUID" in obj:
             mylog("debug", f"[WF] Updating Device '{obj}' ")
-            device_instance = DeviceInstance(self.db)
+            device_instance = DeviceInstance()
             device_instance.updateField(obj["devGUID"], self.field, self.value)
             processed = True
@@ -79,13 +79,13 @@ class DeleteObjectAction(Action):
         # currently unused
         if isinstance(obj, dict) and "ObjectGUID" in obj:
             mylog("debug", f"[WF] Updating Object '{obj}' ")
-            plugin_instance = PluginObjectInstance(self.db)
+            plugin_instance = PluginObjectInstance()
             plugin_instance.delete(obj["ObjectGUID"])
             processed = True
         elif isinstance(obj, dict) and "devGUID" in obj:
             mylog("debug", f"[WF] Updating Device '{obj}' ")
-            device_instance = DeviceInstance(self.db)
+            device_instance = DeviceInstance()
             device_instance.delete(obj["devGUID"])
             processed = True


@@ -0,0 +1,66 @@
# tests/test_auth.py
import sys
import os
import pytest
# Register NetAlertX directories
INSTALL_PATH = os.getenv("NETALERTX_APP", "/app")
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from helper import get_setting_value # noqa: E402
from api_server.api_server_start import app # noqa: E402
@pytest.fixture(scope="session")
def api_token():
"""Load API token from system settings (same as other tests)."""
return get_setting_value("API_TOKEN")
@pytest.fixture
def client():
"""Flask test client."""
with app.test_client() as client:
yield client
def auth_headers(token):
return {"Authorization": f"Bearer {token}"}
# -------------------------
# AUTH ENDPOINT TESTS
# -------------------------
def test_auth_ok(client, api_token):
"""Valid token should allow access."""
resp = client.get("/auth", headers=auth_headers(api_token))
assert resp.status_code == 200
data = resp.get_json()
assert data is not None
assert data.get("success") is True
assert "successful" in data.get("message", "").lower()
def test_auth_missing_token(client):
"""Missing token should be forbidden."""
resp = client.get("/auth")
assert resp.status_code == 403
data = resp.get_json()
assert data is not None
assert data.get("success") is False
assert "not authorized" in data.get("message", "").lower()
def test_auth_invalid_token(client):
"""Invalid bearer token should be forbidden."""
resp = client.get("/auth", headers=auth_headers("INVALID-TOKEN"))
assert resp.status_code == 403
data = resp.get_json()
assert data is not None
assert data.get("success") is False
assert "not authorized" in data.get("message", "").lower()


@@ -0,0 +1,306 @@
import sys
import os
import pytest
from unittest.mock import patch, MagicMock
from datetime import datetime
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from helper import get_setting_value # noqa: E402
from api_server.api_server_start import app # noqa: E402
@pytest.fixture(scope="session")
def api_token():
return get_setting_value("API_TOKEN")
@pytest.fixture
def client():
with app.test_client() as client:
yield client
def auth_headers(token):
return {"Authorization": f"Bearer {token}"}
# --- Device Search Tests ---
@patch('models.device_instance.get_temp_db_connection')
def test_get_device_info_ip_partial(mock_db_conn, client, api_token):
"""Test device search with partial IP search."""
# Mock database connection - DeviceInstance._fetchall calls conn.execute().fetchall()
mock_conn = MagicMock()
mock_execute_result = MagicMock()
mock_execute_result.fetchall.return_value = [
{"devName": "Test Device", "devMac": "AA:BB:CC:DD:EE:FF", "devLastIP": "192.168.1.50"}
]
mock_conn.execute.return_value = mock_execute_result
mock_db_conn.return_value = mock_conn
payload = {"query": ".50"}
response = client.post('/devices/search',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
assert len(data["devices"]) == 1
assert data["devices"][0]["devLastIP"] == "192.168.1.50"
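# Hedged sketch (standalone, no API call): the MagicMock chaining the tests in
# this file rely on. Configuring conn.execute.return_value.fetchall.return_value
# makes conn.execute(...).fetchall() yield the canned rows, whatever SQL is passed.
def test_sketch_mock_fetchall_chain():
    from unittest.mock import MagicMock
    conn = MagicMock()
    conn.execute.return_value.fetchall.return_value = [{"devLastIP": "192.168.1.50"}]
    rows = conn.execute("SELECT * FROM Devices").fetchall()
    assert rows[0]["devLastIP"] == "192.168.1.50"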
# --- Trigger Scan Tests ---
@patch('api_server.api_server_start.UserEventsQueueInstance')
def test_trigger_scan_ARPSCAN(mock_queue_class, client, api_token):
"""Test trigger_scan with ARPSCAN type."""
mock_queue = MagicMock()
mock_queue_class.return_value = mock_queue
payload = {"type": "ARPSCAN"}
response = client.post('/mcp/sse/nettools/trigger-scan',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
mock_queue.add_event.assert_called_once()
call_args = mock_queue.add_event.call_args[0]
assert "run|ARPSCAN" in call_args[0]
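# Hedged sketch (standalone, hypothetical queue): how call_args exposes the
# positional arguments of a MagicMock call, mirroring the "run|ARPSCAN"
# inspection above without involving the real UserEventsQueueInstance.
def test_sketch_call_args_inspection():
    from unittest.mock import MagicMock
    queue = MagicMock()
    queue.add_event("run|ARPSCAN|now")
    queue.add_event.assert_called_once()
    # call_args[0] is the tuple of positional args; [0] is the first of them
    assert "run|ARPSCAN" in queue.add_event.call_args[0][0]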
@patch('api_server.api_server_start.UserEventsQueueInstance')
def test_trigger_scan_invalid_type(mock_queue_class, client, api_token):
"""Test trigger_scan with invalid scan type."""
mock_queue = MagicMock()
mock_queue_class.return_value = mock_queue
payload = {"type": "invalid_type", "target": "192.168.1.0/24"}
response = client.post('/mcp/sse/nettools/trigger-scan',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 400
data = response.get_json()
assert data["success"] is False
# --- get_open_ports Tests ---
@patch('models.plugin_object_instance.get_temp_db_connection')
@patch('models.device_instance.get_temp_db_connection')
def test_get_open_ports_ip(mock_device_db_conn, mock_plugin_db_conn, client, api_token):
"""Test get_open_ports with an IP address."""
# Mock database connections for both device lookup and plugin objects
mock_conn = MagicMock()
mock_execute_result = MagicMock()
# Mock for PluginObjectInstance.getByField (returns port data)
mock_execute_result.fetchall.return_value = [
{"Object_SecondaryID": "22", "Watched_Value2": "ssh"},
{"Object_SecondaryID": "80", "Watched_Value2": "http"}
]
# Mock for DeviceInstance.getByIP (returns device with MAC)
mock_execute_result.fetchone.return_value = {"devMac": "AA:BB:CC:DD:EE:FF"}
mock_conn.execute.return_value = mock_execute_result
mock_plugin_db_conn.return_value = mock_conn
mock_device_db_conn.return_value = mock_conn
payload = {"target": "192.168.1.1"}
response = client.post('/device/open_ports',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
assert len(data["open_ports"]) == 2
assert data["open_ports"][0]["port"] == 22
assert data["open_ports"][1]["service"] == "http"
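# Note on stacked @patch decorators: they apply bottom-up, so the decorator
# CLOSEST to the function supplies the FIRST mock argument. Standalone
# illustration (patching stdlib names purely for demonstration):
def test_sketch_patch_decorator_order():
    import os
    from unittest.mock import patch

    @patch('os.getcwd')   # outermost -> second mock argument
    @patch('os.getenv')   # innermost -> first mock argument
    def demo(mock_getenv, mock_getcwd):
        assert os.getenv is mock_getenv
        assert os.getcwd is mock_getcwd

    demo()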
@patch('models.plugin_object_instance.get_temp_db_connection')
def test_get_open_ports_mac_resolve(mock_plugin_db_conn, client, api_token):
"""Test get_open_ports with a MAC address that resolves to an IP."""
# Mock database connection for MAC-based open ports query
mock_conn = MagicMock()
mock_execute_result = MagicMock()
mock_execute_result.fetchall.return_value = [
{"Object_SecondaryID": "80", "Watched_Value2": "http"}
]
mock_conn.execute.return_value = mock_execute_result
mock_plugin_db_conn.return_value = mock_conn
payload = {"target": "AA:BB:CC:DD:EE:FF"}
response = client.post('/device/open_ports',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
assert "target" in data
assert len(data["open_ports"]) == 1
assert data["open_ports"][0]["port"] == 80
# --- get_network_topology Tests ---
@patch('models.device_instance.get_temp_db_connection')
def test_get_network_topology(mock_db_conn, client, api_token):
"""Test get_network_topology."""
# Mock database connection for topology query
mock_conn = MagicMock()
mock_execute_result = MagicMock()
mock_execute_result.fetchall.return_value = [
{"devName": "Router", "devMac": "AA:AA:AA:AA:AA:AA", "devParentMAC": None, "devParentPort": None, "devVendor": "VendorA"},
{"devName": "Device1", "devMac": "BB:BB:BB:BB:BB:BB", "devParentMAC": "AA:AA:AA:AA:AA:AA", "devParentPort": "eth1", "devVendor": "VendorB"}
]
mock_conn.execute.return_value = mock_execute_result
mock_db_conn.return_value = mock_conn
response = client.get('/devices/network/topology',
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert len(data["nodes"]) == 2
assert len(data["links"]) == 1
assert data["links"][0]["source"] == "AA:AA:AA:AA:AA:AA"
assert data["links"][0]["target"] == "BB:BB:BB:BB:BB:BB"
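# Hedged sketch (hypothetical helper, not the server code): one way the
# nodes/links shape asserted above could fall out of device rows, linking each
# device to its devParentMAC when one is set.
def test_sketch_topology_links_from_rows():
    rows = [
        {"devMac": "AA:AA:AA:AA:AA:AA", "devParentMAC": None},
        {"devMac": "BB:BB:BB:BB:BB:BB", "devParentMAC": "AA:AA:AA:AA:AA:AA"},
    ]
    nodes = [r["devMac"] for r in rows]
    links = [{"source": r["devParentMAC"], "target": r["devMac"]}
             for r in rows if r["devParentMAC"]]
    assert len(nodes) == 2 and len(links) == 1
    assert links[0]["source"] == "AA:AA:AA:AA:AA:AA"
    assert links[0]["target"] == "BB:BB:BB:BB:BB:BB"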
# --- get_recent_alerts Tests ---
@patch('models.event_instance.get_temp_db_connection')
def test_get_recent_alerts(mock_db_conn, client, api_token):
"""Test get_recent_alerts."""
# Mock database connection for events query
mock_conn = MagicMock()
mock_execute_result = MagicMock()
now = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
mock_execute_result.fetchall.return_value = [
{"eve_DateTime": now, "eve_EventType": "New Device", "eve_MAC": "AA:BB:CC:DD:EE:FF"}
]
mock_conn.execute.return_value = mock_execute_result
mock_db_conn.return_value = mock_conn
response = client.get('/events/recent',
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
assert data["hours"] == 24
# --- Device Alias Tests ---
@patch('api_server.api_server_start.update_device_column')
def test_set_device_alias(mock_update_col, client, api_token):
"""Test set_device_alias."""
mock_update_col.return_value = {"success": True, "message": "Device alias updated"}
payload = {"alias": "New Device Name"}
response = client.post('/device/AA:BB:CC:DD:EE:FF/set-alias',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
mock_update_col.assert_called_once_with("AA:BB:CC:DD:EE:FF", "devName", "New Device Name")
@patch('api_server.api_server_start.update_device_column')
def test_set_device_alias_not_found(mock_update_col, client, api_token):
"""Test set_device_alias when device is not found."""
mock_update_col.return_value = {"success": False, "error": "Device not found"}
payload = {"alias": "New Device Name"}
response = client.post('/device/FF:FF:FF:FF:FF:FF/set-alias',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is False
assert "Device not found" in data["error"]
# --- Wake-on-LAN Tests ---
@patch('api_server.api_server_start.wakeonlan')
def test_wol_wake_device(mock_wakeonlan, client, api_token):
"""Test wol_wake_device."""
mock_wakeonlan.return_value = {"success": True, "message": "WOL packet sent to AA:BB:CC:DD:EE:FF"}
payload = {"devMac": "AA:BB:CC:DD:EE:FF"}
response = client.post('/nettools/wakeonlan',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
assert "AA:BB:CC:DD:EE:FF" in data["message"]
def test_wol_wake_device_invalid_mac(client, api_token):
"""Test wol_wake_device with invalid MAC."""
payload = {"devMac": "invalid-mac"}
response = client.post('/nettools/wakeonlan',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 400
data = response.get_json()
assert data["success"] is False
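# Hedged sketch: a hypothetical MAC validator of the kind the endpoint
# presumably applies to devMac before rejecting with 400 (the real server-side
# rule may differ, e.g. it might also accept dash-separated forms).
def test_sketch_mac_validation():
    import re

    def is_valid_mac(mac):
        return re.fullmatch(r"(?i)([0-9A-F]{2}:){5}[0-9A-F]{2}", mac) is not None

    assert is_valid_mac("AA:BB:CC:DD:EE:FF")
    assert not is_valid_mac("invalid-mac")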
# --- Latest Device Tests ---
@patch('models.device_instance.get_temp_db_connection')
def test_get_latest_device(mock_db_conn, client, api_token):
"""Test get_latest_device endpoint."""
# Mock database connection for latest device query
mock_conn = MagicMock()
mock_execute_result = MagicMock()
mock_execute_result.fetchone.return_value = {
"devName": "Latest Device",
"devMac": "AA:BB:CC:DD:EE:FF",
"devLastIP": "192.168.1.100",
"devFirstConnection": "2025-12-07 10:30:00"
}
mock_conn.execute.return_value = mock_execute_result
mock_db_conn.return_value = mock_conn
response = client.get('/devices/latest',
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert len(data) == 1
assert data[0]["devName"] == "Latest Device"
assert data[0]["devMac"] == "AA:BB:CC:DD:EE:FF"
# --- OpenAPI Spec Tests ---
def test_openapi_spec(client, api_token):
"""Test openapi_spec endpoint contains MCP tool paths."""
response = client.get('/mcp/sse/openapi.json', headers=auth_headers(api_token))
assert response.status_code == 200
spec = response.get_json()
# Check for MCP tool endpoints in the spec with correct paths
assert "/nettools/trigger-scan" in spec["paths"]
assert "/device/open_ports" in spec["paths"]
assert "/devices/network/topology" in spec["paths"]
assert "/events/recent" in spec["paths"]
assert "/device/{mac}/set-alias" in spec["paths"]
assert "/nettools/wakeonlan" in spec["paths"]
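# Hedged sketch (standalone, dummy spec): the per-path asserts above could be
# collapsed into a set difference, which reports every missing path at once
# instead of failing on the first.
def test_sketch_required_paths_set():
    spec = {"paths": {"/nettools/trigger-scan": {}, "/events/recent": {}}}
    required = {"/nettools/trigger-scan", "/events/recent", "/device/open_ports"}
    missing = required - set(spec["paths"])
    assert missing == {"/device/open_ports"}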