Compare commits

..

30 Commits

Author SHA1 Message Date
jokob-sk
1bd6723ab9 DOCS: pihole DB troubleshooting permissions #1330
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-07 21:53:46 +11:00
jokob-sk
73c8965637 Merge branch 'main' of https://github.com/jokob-sk/NetAlertX 2025-12-07 21:41:09 +11:00
jokob-sk
dc7ff8317c DOCS: pihole DB troubleshooting permissions #1330
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-07 21:40:40 +11:00
Jokob @NetAlertX
cd1ce2a3d8 Merge pull request #1333 from KihtrakRaknas/patch-2
Remove dev branch from docker compose file
2025-12-07 09:27:38 +00:00
Karthik Sankar
c6de72467e Update Docker image tag in documentation
Remove -dev
2025-12-07 03:39:47 -05:00
Jokob @NetAlertX
6ee9064676 Merge pull request #1332 from adamoutler/patch-7
Change copy command to install with permissions
2025-12-07 00:02:45 +00:00
Adam Outler
2c75285148 Coderabit nitpick. 2025-12-06 13:05:47 +00:00
Adam Outler
ecb5c1455b Add missing field to initial db 2025-12-06 13:01:47 +00:00
Adam Outler
17f495c444 Change copy command to install with permissions 2025-12-06 13:01:10 +00:00
jokob-sk
e7f25560c8 DOCS: new icon
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-06 12:21:19 +11:00
jokob-sk
fc4d32ebe7 Merge branch 'main' of https://github.com/jokob-sk/NetAlertX 2025-12-06 11:58:57 +11:00
jokob-sk
b47325d06a DOCS: SYNOLOGY permissions guide #1310
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-06 11:58:36 +11:00
jokob-sk
436ac6de49 FE: network tree mobile screens work #1209
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-06 11:58:08 +11:00
Jokob @NetAlertX
c1bd611e57 Fix formatting of migration instructions
2025-12-04 22:44:20 +11:00
Jokob @NetAlertX
edde2596b5 Fix typo and add writable paths check in migration guide
Corrected a typo in the instructions and added a new step for checking writable paths.
2025-12-04 22:43:02 +11:00
jokob-sk
da9d37c718 DOCS: SYNOLOGY permissions guide #1310
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-04 16:11:25 +11:00
jokob-sk
5bcb727305 DOCS: SYNOLOGY permissions guide #1310
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-04 16:09:38 +11:00
jokob-sk
2dc688b16c DOCS: SYNOLOGY permissions guide #1310
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-04 16:03:11 +11:00
jokob-sk
0ac9fd79b3 DOCS: SYNOLOGY permissions guide #1310
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-04 15:59:02 +11:00
jokob-sk
3d17dc47b5 BE: ensure /db - better error #1327
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-04 10:22:34 +11:00
jokob-sk
ef2e7886c4 BE: ensure /db - reorder scripts #1327
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-04 09:57:46 +11:00
jokob-sk
c8f3a84b92 BE: ensure /db and /config dirs - reorder scripts #1327
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-03 20:56:42 +11:00
jokob-sk
9688fee2d2 BE: ensure /db and /config dirs #1327
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-03 20:18:39 +11:00
jokob-sk
2dcd9eda19 BE: re-implement APP_CONF_OVERRIDE support
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-03 19:35:45 +11:00
jokob-sk
24187495e1 BE: debug - removal of GRAPHQL PORT conflict check
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-03 18:46:42 +11:00
jokob-sk
c27d25d4ab DOCS: ip flipping docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-03 18:06:59 +11:00
jokob-sk
93a2dad2eb DOCS: pihole guide docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-03 17:59:30 +11:00
jokob-sk
b235863644 Merge branch 'main' of https://github.com/jokob-sk/NetAlertX
2025-12-03 13:03:05 +11:00
jokob-sk
f387f8c5b6 DOCS: installation docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-03 13:02:36 +11:00
jokob-sk
d93a3981fa DOCS: migration docs
Signed-off-by: jokob-sk <jokob.sk@gmail.com>
2025-12-01 19:32:55 +11:00
36 changed files with 797 additions and 1937 deletions

View File

@@ -26,7 +26,7 @@ ENV PATH="/opt/venv/bin:$PATH"
# Install build dependencies
COPY requirements.txt /tmp/requirements.txt
RUN apk add --no-cache bash shadow python3 python3-dev gcc musl-dev libffi-dev openssl-dev git rust cargo \
RUN apk add --no-cache bash shadow python3 python3-dev gcc musl-dev libffi-dev openssl-dev git \
&& python -m venv /opt/venv
# Create virtual environment owned by root, but readable by everyone else. This makes it easy to copy

View File

@@ -112,3 +112,11 @@ Slowness can be caused by:
> See [Performance Tips](./PERFORMANCE.md) for detailed optimization steps.
#### IP flipping
With `ARPSCAN` scans some devices might flip IP addresses after each scan triggering false notifications. This is because some devices respond to broadcast calls and thus different IPs after scans are logged.
See how to prevent IP flipping in the [ARPSCAN plugin guide](/front/plugins/arp_scan/README.md).
Alternatively adjust your [notification settings](./NOTIFICATIONS.md) to prevent false positives by filtering out events or devices.

View File

@@ -17,7 +17,7 @@ services:
netalertx:
#use an environment variable to set host networking mode if needed
container_name: netalertx # The name shown when you run docker container ls
image: ghcr.io/jokob-sk/netalertx-dev:latest
image: ghcr.io/jokob-sk/netalertx:latest
network_mode: ${NETALERTX_NETWORK_MODE:-host} # Use host networking for ARP scanning and other services
read_only: true # Make the container filesystem read-only

View File

@@ -61,20 +61,38 @@ See alternative [docker-compose examples](https://github.com/jokob-sk/NetAlertX/
| Required | Path | Description |
| :------------- | :------------- | :-------------|
| ✅ | `:/data` | Folder which will contain the `/db/app.db`, `/config/app.conf` & `/config/devices.csv` ([read about devices.csv](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEVICES_BULK_EDITING.md)) files |
| ✅ | `/etc/localtime:/etc/localtime:ro` | Ensuring the timezone is teh same as on teh server. |
| ✅ | `:/data` | Folder which needs to contain `/db` and `/config` sub-folders. |
| ✅ | `/etc/localtime:/etc/localtime:ro` | Ensuring the timezone is the same as on the server. |
| | `:/tmp/log` | Logs folder useful for debugging if you have issues setting up the container |
| | `:/tmp/api` | The [API endpoint](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md) containing static (but regularly updated) json and other files. Path configurable via `NETALERTX_API` environment variable. |
| | `:/app/front/plugins/<plugin>/ignore_plugin` | Map a file `ignore_plugin` to ignore a plugin. Plugins can be soft-disabled via settings. More in the [Plugin docs](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md). |
| | `:/etc/resolv.conf` | Use a custom `resolv.conf` file for [better name resolution](https://github.com/jokob-sk/NetAlertX/blob/main/docs/REVERSE_DNS.md). |
> Use separate `db` and `config` directories, do not nest them.
### Folder structure
Use separate `db` and `config` directories, do not nest them:
```
data
├── config
└── db
```
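
If you are creating these folders from scratch, a minimal sketch (assuming your data lives under a `/local_data_dir` placeholder, as in the permissions example below):

```bash
# Create the parent folder with separate, non-nested config and db sub-folders
mkdir -p /local_data_dir/config /local_data_dir/db
```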
### Permissions
If you are facing permissions issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the `/local_data_dir/db` and `/local_data_dir/config` folders (replace `local_data_dir` with the location of your `/db` and `/config` folders).
```bash
sudo chown -R 20211:20211 /local_data_dir
sudo chmod -R a+rwx /local_data_dir
```
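
To confirm the ownership change took effect, you can list the numeric owner and group of the sub-folders (a quick check, using the same `/local_data_dir` placeholder):

```bash
# Both sub-folders should now be owned by 20211:20211
ls -ln /local_data_dir
```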
### Initial setup
- If missing, the app generates default `app.conf` and `app.db` files on the first run.
- The preferred way is to manage the configuration via the Settings section in the UI; if the UI is inaccessible, you can modify [app.conf](https://github.com/jokob-sk/NetAlertX/tree/main/back) in the `/data/config/` folder directly
#### Setting up scanners
You have to specify which network(s) should be scanned. This is done by entering subnets that are accessible from the host. If you use the default `ARPSCAN` plugin, you have to specify at least one valid subnet and interface in the `SCAN_SUBNETS` setting. See the documentation on [How to set up multiple SUBNETS, VLANs and what are limitations](https://github.com/jokob-sk/NetAlertX/blob/main/docs/SUBNETS.md) for troubleshooting and more advanced scenarios.
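
As a hedged illustration only (the subnet and interface below are placeholders; see the linked SUBNETS docs for the authoritative syntax), a `SCAN_SUBNETS` entry in `app.conf` might look like:

```
SCAN_SUBNETS=['192.168.1.0/24 --interface=eth0']
```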

View File

@@ -278,8 +278,9 @@ Run the container with the `--user "0"` parameter. Please note, some systems wil
```sh
docker run -it --rm --name netalertx --user "0" \
-v /local_data_dir/config:/data/config \
-v /local_data_dir/db:/data/db \
-v /local_data_dir/config:/app/config \
-v /local_data_dir/db:/app/db \
-v /local_data_dir:/data \
--tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
ghcr.io/jokob-sk/netalertx:latest
```
@@ -295,4 +296,6 @@ sudo chown -R 20211:20211 /local_data_dir
sudo chmod -R a+rwx /local_data_dir
```
8. Start the container and verify everything works as expected.
9. Check [Permissions -> Writable paths](https://jokob-sk.github.io/NetAlertX/FILE_PERMISSIONS/#writable-paths) to see which directories to mount if you'd like to access the API or log files.
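
For example, to expose the log and API folders mentioned in step 9, the `docker run` command could be extended with two extra mounts (host paths are illustrative; container paths follow the volume table in the installation docs):

```sh
docker run -it --rm --name netalertx \
  -v /local_data_dir:/data \
  -v /local_data_dir/log:/tmp/log \
  -v /local_data_dir/api:/tmp/api \
  ghcr.io/jokob-sk/netalertx:latest
```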

View File

@@ -1,8 +1,29 @@
# Integration with PiHole
NetAlertX comes with 2 plugins suitable for integrating with your existing PiHole instance. One plugin is using a direct SQLite DB connection, the other leverages the DHCP.leases file generated by PiHole. You can combine both approaches and also supplement it with other [plugins](/docs/PLUGINS.md).
NetAlertX comes with 3 plugins suitable for integrating with your existing PiHole instance. The first uses the v6 API, the second uses a direct SQLite DB connection, and the third leverages the `DHCP.leases` file generated by PiHole. You can combine multiple approaches and also supplement scans with other [plugins](/docs/PLUGINS.md).
## Approach 1: `DHCPLSS` Plugin - Import devices from the PiHole DHCP leases file
## Approach 1: `PIHOLEAPI` Plugin - Import devices directly from PiHole v6 API
![PIHOLEAPI sample settings](./img/PIHOLE_GUIDE/PIHOLEAPI_settings.png)
To use this approach, make sure the Web UI password is set in **Pi-hole**.
| Setting | Description | Recommended value |
| :------------- | :------------- | :-------------|
| `PIHOLEAPI_URL` | Your Pi-hole base URL including port. | `http://192.168.1.82:9880/` |
| `PIHOLEAPI_RUN_SCHD` | If you run multiple device scanner plugins, align the schedules of all plugins to the same value. | `*/5 * * * *` |
| `PIHOLEAPI_PASSWORD` | The Web UI admin password, stored base64-encoded (en-/decoding is handled by the app). | `passw0rd` |
| `PIHOLEAPI_SSL_VERIFY` | Whether to verify HTTPS certificates. Disable only for self-signed certificates. | `False` |
| `PIHOLEAPI_API_MAXCLIENTS` | Maximum number of devices to request from Pi-hole. Defaults are usually fine. | `500` |
| `PIHOLEAPI_FAKE_MAC` | Generate FAKE MAC from IP. | `False` |
Check the [PiHole API plugin readme](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/pihole_api_scan/) for details and troubleshooting.
### docker-compose changes
No changes needed.
## Approach 2: `DHCPLSS` Plugin - Import devices from the PiHole DHCP leases file
![DHCPLSS sample settings](./img/PIHOLE_GUIDE/DHCPLSS_pihole_settings.png)
@@ -23,12 +44,12 @@ Check the [DHCPLSS plugin readme](https://github.com/jokob-sk/NetAlertX/tree/mai
| `:/etc/pihole/dhcp.leases` | PiHole's `dhcp.leases` file. Required if you want to use PiHole `dhcp.leases` file. This has to be matched with a corresponding `DHCPLSS_paths_to_check` setting entry (the path in the container must contain `pihole`) |
## Approach 2: `PIHOLE` Plugin - Import devices directly from the PiHole database
## Approach 3: `PIHOLE` Plugin - Import devices directly from the PiHole database
![PIHOLE sample settings](./img/PIHOLE_GUIDE/PIHOLE_settings.png)
| Setting | Description | Recommended value |
| :------------- | :------------- | :-------------|
| :------------- | :------------- | :-------------|
| `PIHOLE_RUN` | When the plugin should run. | `schedule` |
| `PIHOLE_RUN_SCHD` | If you run multiple device scanner plugins, align the schedules of all plugins to the same value. | `*/5 * * * *` |
| `PIHOLE_DB_PATH` | You need to map the value in this setting in the `docker-compose.yml` file. | `/etc/pihole/pihole-FTL.db` |

View File

@@ -53,7 +53,6 @@ You can configure a custom **/etc/resolv.conf** file in **docker-compose.yml** a
#### docker-compose.yml:
```yaml
version: "3"
services:
netalertx:
container_name: netalertx

View File

@@ -9,21 +9,23 @@ The folders you are creating below will contain the configuration and the databa
1. Create a parent folder named `netalertx`
2. Create a `db` sub-folder
![Folder structure](./img/SYNOLOGY/01_Create_folder_structure.png)
![Folder structure](./img/SYNOLOGY/02_Create_folder_structure_db.png)
![Folder structure](./img/SYNOLOGY/03_Create_folder_structure_db.png)
![Folder structure](./img/SYNOLOGY/01_Create_folder_structure.png)
![Folder structure](./img/SYNOLOGY/02_Create_folder_structure_db.png)
![Folder structure](./img/SYNOLOGY/03_Create_folder_structure_db.png)
3. Create a `config` sub-folder
![Folder structure](./img/SYNOLOGY/04_Create_folder_structure_config.png)
![Folder structure](./img/SYNOLOGY/04_Create_folder_structure_config.png)
4. Note down the folders' locations:
![Getting the location](./img/SYNOLOGY/05_Access_folder_properties.png)
![Getting the location](./img/SYNOLOGY/06_Note_location.png)
![Getting the location](./img/SYNOLOGY/05_Access_folder_properties.png)
![Getting the location](./img/SYNOLOGY/06_Note_location.png)
5. Open **Container manager** -> **Project** and click **Create**.
6. Fill in the details:
## Creating the Project
1. Open **Container manager** -> **Project** and click **Create**.
2. Fill in the details:
- Project name: `netalertx`
- Path: `/app_storage/netalertx` (will differ from yours)
@@ -31,7 +33,6 @@ The folders you are creating below will contain the configuration and the databa
```yaml
version: "3"
services:
netalertx:
container_name: netalertx
@@ -57,27 +58,32 @@ services:
- PORT=20211
```
![Project settings](./img/SYNOLOGY/07_Create_project.png)
![Project settings](./img/SYNOLOGY/07_Create_project.png)
7. Replace the paths to your volume and comment out unnecessary line(s):
3. Replace the paths to your volume and comment out unnecessary line(s).
- This is only an example, your paths will differ.
> This is only an example, your paths will differ.
```yaml
volumes:
volumes:
- /volume1/app_storage/netalertx:/data
```
![Adjusting docker-compose](./img/SYNOLOGY/08_Adjust_docker_compose_volumes.png)
![Adjusting docker-compose](./img/SYNOLOGY/08_Adjust_docker_compose_volumes.png)
8. (optional) Change the port number from `20211` to an unused port if this port is already used.
9. Build the project:
4. (optional) Change the port number from `20211` to an unused port if this port is already used.
5. Build the project:
![Build](./img/SYNOLOGY/09_Run_and_build.png)
![Build](./img/SYNOLOGY/09_Run_and_build.png)
10. Navigate to `<Synology URL>:20211` (or your custom port).
11. Read the [Subnets](./SUBNETS.md) and [Plugins](/docs/PLUGINS.md) docs to complete your setup.
## Solving permission issues
See also the [Permission overview guide](./FILE_PERMISSIONS.md).
### Configuring the permissions via SSH
> [!TIP]
> If you are facing permissions issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the `/local_data_dir/db` and `/local_data_dir/config` folders (replace `local_data_dir` with the location of your `/db` and `/config` folders).
@@ -86,3 +92,31 @@ services:
>
> `sudo chmod -R a+rwx /local_data_dir`
>
### Configuring the permissions via the Synology UI
You can also execute the above bash commands via the UI by creating a one-off scheduled task.
1. Control panel -> Task Scheduler
2. Create -> Scheduled Task -> User-defined Script
![User-defined Script](./img/SYNOLOGY/11_permissions_create_scheduled_task.png)
3. Give your task a name.
![User-defined task_general](./img/SYNOLOGY/12_permissions_task_general.png)
4. Specify a one-off execution time (e.g. 5 minutes from now).
![task_schedule](./img/SYNOLOGY/13_permissions_task_schedule.png)
5. Paste the commands from the above SSH section and replace `/local_data_dir` with the parent folder of your `/db` and `/config` folders.
![task_settings](./img/SYNOLOGY/14_permissions_task_settings.png)
6. Wait until the execution time passes and verify the new ownership.
![permissions_after](./img/SYNOLOGY/15_permissions_after.png)
In case of issues, double-check the [Permission overview guide](./FILE_PERMISSIONS.md).

(Binary image files not shown: several screenshots were added or updated.)

View File

@@ -1825,10 +1825,21 @@ input[readonly] {
#networkTree
{
margin-left: 16px;
/* border: solid;
border-color:#606060; */
position: relative;
width: 100%;
max-width: 100%;
overflow: hidden;
}
#networkTree .node-inner {
font-size: clamp(12px, 1rem, 18px);
}
#networkTree .netNodeText strong,
#networkTree .spanNetworkTree {
font-size: inherit;
}
#networkTree .netIcon
{
width: 25px;


View File

@@ -0,0 +1,452 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="200"
height="200"
viewBox="0 0 52.916667 52.916668"
version="1.1"
id="svg5"
inkscape:version="1.1.2 (b8e25be833, 2022-02-05)"
sodipodi:docname="netalertx_red_docs_copy3_blue.svg"
inkscape:export-filename="C:\Users\jokob\netalertx_red_docs_d_1.png"
inkscape:export-xdpi="96"
inkscape:export-ydpi="96"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<sodipodi:namedview
id="namedview7"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="true"
inkscape:document-units="mm"
showgrid="false"
inkscape:zoom="2.8284271"
inkscape:cx="132.40575"
inkscape:cy="118.44039"
inkscape:window-width="3377"
inkscape:window-height="1417"
inkscape:window-x="55"
inkscape:window-y="-8"
inkscape:window-maximized="1"
inkscape:current-layer="g48055"
units="px"
width="50px" />
<defs
id="defs2">
<inkscape:path-effect
effect="powermask"
id="path-effect51283"
is_visible="true"
lpeversion="1"
uri="#mask-powermask-path-effect51283"
invert="false"
hide_mask="false"
background="true"
background_color="#ffffffff" />
<inkscape:path-effect
effect="powermask"
id="path-effect51278"
is_visible="true"
lpeversion="1"
uri="#mask-powermask-path-effect51278"
invert="false"
hide_mask="false"
background="true"
background_color="#ffffffff" />
<inkscape:path-effect
effect="powermask"
id="path-effect51273"
is_visible="true"
lpeversion="1"
uri="#mask-powermask-path-effect51273"
invert="false"
hide_mask="false"
background="true"
background_color="#ffffffff" />
<inkscape:path-effect
effect="powermask"
id="path-effect48754"
is_visible="true"
lpeversion="1"
uri="#mask-powermask-path-effect48754"
invert="false"
hide_mask="false"
background="true"
background_color="#ffffffff" />
<clipPath
clipPathUnits="userSpaceOnUse"
id="clipPath48972">
<path
style="fill:#000000;stroke-width:0.280643"
id="path48974"
width="56.128242"
height="56.128246"
x="-18.924671"
y="-56.198174"
transform="rotate(45.438374)"
mask="none"
sodipodi:type="rect" />
</clipPath>
<mask
maskUnits="userSpaceOnUse"
id="mask49405">
<text
xml:space="preserve"
style="font-size:60.8695px;line-height:1.25;font-family:Amiri;-inkscape-font-specification:Amiri;display:inline;stroke-width:1.52174"
x="66.930733"
y="78.642288"
id="text49409"
transform="scale(1.4861626,0.67287388)"><tspan
sodipodi:role="line"
id="tspan49407"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:'Tw Cen MT';-inkscape-font-specification:'Tw Cen MT';fill:#ffffff;stroke-width:1.52174"
x="66.930733"
y="78.642288">A</tspan></text>
</mask>
<clipPath
clipPathUnits="userSpaceOnUse"
id="clipPath50306">
<circle
style="mix-blend-mode:normal;fill:#d40000;stroke-width:0.176318"
id="circle50308"
cy="26.458334"
cx="26.458334"
r="26.458334"
clip-path="url(#clipPath48972)"
transform="matrix(1.0038771,0,0.00391255,1.0073928,-0.04603368,-0.1228191)" />
</clipPath>
<clipPath
clipPathUnits="userSpaceOnUse"
id="clipPath48972-7">
<path
style="fill:#000000;stroke-width:0.280643"
id="path48974-5"
width="56.128242"
height="56.128246"
x="-18.924671"
y="-56.198174"
transform="rotate(45.438374)"
mask="none"
sodipodi:type="rect" />
</clipPath>
<clipPath
clipPathUnits="userSpaceOnUse"
id="clipPath50306-6">
<circle
style="mix-blend-mode:normal;fill:#d40000;stroke-width:0.176318"
id="circle50308-5"
cy="26.458334"
cx="26.458334"
r="26.458334"
clip-path="url(#clipPath48972)"
transform="matrix(1.0038771,0,0.00391255,1.0073928,-0.04603368,-0.1228191)" />
</clipPath>
<mask
maskUnits="userSpaceOnUse"
id="mask-powermask-path-effect51273">
<path
id="mask-powermask-path-effect51273_box"
style="fill:#ffffff;fill-opacity:1"
d="m 71.788348,33.677177 h 2.00083 v 2.173766 h -2.00083 z" />
<path
style="fill:#000000"
id="path51263"
sodipodi:type="arc"
sodipodi:cx="66.211845"
sodipodi:cy="37.490814"
sodipodi:rx="3.9464016"
sodipodi:ry="1.4616301"
sodipodi:start="0"
sodipodi:end="0.031086059"
sodipodi:open="true"
sodipodi:arc-type="arc"
d="m 70.158247,37.490814 a 3.9464016,1.4616301 0 0 1 -0.0019,0.04543" />
</mask>
<mask
maskUnits="userSpaceOnUse"
id="mask-powermask-path-effect51278">
<path
style="fill:#000000"
id="path51267"
sodipodi:type="arc"
sodipodi:cx="66.211845"
sodipodi:cy="37.490814"
sodipodi:rx="3.9464016"
sodipodi:ry="1.4616301"
sodipodi:start="0"
sodipodi:end="0.031086059"
sodipodi:open="true"
sodipodi:arc-type="arc" />
</mask>
<mask
maskUnits="userSpaceOnUse"
id="mask-powermask-path-effect51283">
<path
style="fill:#000000"
id="path51271"
sodipodi:type="arc"
sodipodi:cx="66.211845"
sodipodi:cy="37.490814"
sodipodi:rx="3.9464016"
sodipodi:ry="1.4616301"
sodipodi:start="0"
sodipodi:end="0.031086059"
sodipodi:open="true"
sodipodi:arc-type="arc" />
</mask>
<filter
id="mask-powermask-path-effect51273_inverse"
inkscape:label="filtermask-powermask-path-effect51273"
style="color-interpolation-filters:sRGB"
height="100"
width="100"
x="-50"
y="-50">
<feColorMatrix
id="mask-powermask-path-effect51273_primitive1"
values="1"
type="saturate"
result="fbSourceGraphic" />
<feColorMatrix
id="mask-powermask-path-effect51273_primitive2"
values="-1 0 0 0 1 0 -1 0 0 1 0 0 -1 0 1 0 0 0 1 0 "
in="fbSourceGraphic" />
</filter>
<clipPath
clipPathUnits="userSpaceOnUse"
id="clipPath1481">
<rect
style="fill:#ffffff;stroke-width:0.227484"
id="rect1483"
width="26.653997"
height="52.852543"
x="62.86179"
y="-0.46772188" />
</clipPath>
<clipPath
clipPathUnits="userSpaceOnUse"
id="clipPath1481-1">
<rect
style="fill:#ffffff;stroke-width:0.227484"
id="rect1483-0"
width="26.653997"
height="52.852543"
x="62.86179"
y="-0.46772188" />
</clipPath>
</defs>
<g
inkscape:groupmode="layer"
id="layer3"
inkscape:label="Red 1"
style="display:none">
<circle
style="fill:#ff2a2a;stroke-width:0.176318"
id="path31-8"
cy="26.458334"
cx="26.458334"
r="26.458334" />
</g>
<g
inkscape:groupmode="layer"
id="layer2"
inkscape:label="A - Layer 2"
style="display:none">
<rect
style="fill:#ffffff;stroke-width:0.328992"
id="rect48998"
width="26.0966"
height="6.0620313"
x="13.255443"
y="41.262722" />
</g>
<g
inkscape:groupmode="layer"
id="g48055"
inkscape:label="Red top"
style="display:none;mix-blend-mode:normal">
<circle
style="mix-blend-mode:normal;fill:#d40000;stroke-width:0.176318"
id="circle48752"
cy="26.458334"
cx="26.458334"
r="26.458334"
clip-path="url(#clipPath48972)"
transform="matrix(1.0038771,0,0.00391255,1.0073928,-0.04603368,-0.1228191)" />
<ellipse
style="display:inline;mix-blend-mode:normal;fill:#000000;stroke-width:0.43638"
id="path50080"
clip-path="url(#clipPath50306)"
ry="13.739323"
rx="16.735666"
cy="22.874514"
cx="26.36149"
transform="translate(0,0.09980904)" />
<path
style="fill:#000000"
id="path51325"
sodipodi:type="arc"
sodipodi:cx="16.772207"
sodipodi:cy="26.090099"
sodipodi:rx="4.1291056"
sodipodi:ry="7.6004772"
sodipodi:start="0"
sodipodi:end="0.031086059"
sodipodi:arc-type="slice"
d="m 20.901313,26.090099 a 4.1291056,7.6004772 0 0 1 -0.002,0.236231 l -4.127111,-0.236231 z" />
<path
style="fill:#d40000"
id="path51717"
sodipodi:type="arc"
sodipodi:cx="26.441042"
sodipodi:cy="-26.531424"
sodipodi:rx="10.418671"
sodipodi:ry="9.5820541"
sodipodi:start="0.82219863"
sodipodi:end="2.3054129"
sodipodi:arc-type="slice"
d="m 33.532115,-19.511189 a 10.418671,9.5820541 0 0 1 -14.074736,0.09049 l 6.983663,-7.110726 z"
transform="matrix(1,0,0.0048047,-0.99998846,0,0)" />
<path
style="fill:#ffffff;stroke-width:0.276214"
d="M 145.28835,50.354872 C 127.01317,34.62734 98.057144,30.012421 73.710372,38.947003 c -6.518003,2.391924 -14.288822,6.834002 -19.265958,11.01311 -1.198654,1.006465 -2.270358,1.829935 -2.381565,1.829935 -0.111206,0 -5.210052,-5.102002 -11.33077,-11.337781 L 29.603503,29.114489 30.822139,27.851613 c 0.670251,-0.69458 2.51592,-2.384634 4.101489,-3.755674 C 50.725112,10.43241 69.462577,2.3767456 90.736164,0.10085492 95.380582,-0.39601422 106.33043,-0.31105699 111.03786,0.25837091 133.04363,2.9202648 151.46536,11.26468 167.83762,25.986722 l 3.30701,2.97369 -2.29392,2.320103 c -1.26165,1.276057 -6.58213,6.517685 -11.82329,11.648065 l -9.52936,9.327957 z"
id="path52311"
transform="scale(0.26458333)" />
<path
style="fill:#ffffff;stroke-width:0.276214"
d="M 86.538548,86.634546 74.145111,73.25799 74.899337,72.758689 c 4.93766,-3.268754 10.138703,-6.508578 16.602198,-7.437693 5.484021,-0.788317 12.228205,-0.984814 16.377135,-0.09119 6.77689,1.459652 11.87156,4.340971 17.02452,7.792011 l 0.97468,0.652765 -1.37124,1.269268 c -0.86863,0.804036 -6.82647,6.676301 -13.34742,13.259175 L 99.423152,99.796276 Z"
id="path52350"
transform="scale(0.26458333)"
inkscape:export-filename="C:\Users\jokob\path52350.png"
inkscape:export-xdpi="96"
inkscape:export-ydpi="96"
sodipodi:nodetypes="ccsssscsscc" />
</g>
<g
inkscape:label="Black"
inkscape:groupmode="layer"
id="layer1"
style="display:inline">
<ellipse
style="fill:#000000;stroke-width:0.176146"
id="path31"
cy="26.51001"
cx="26.458334"
rx="26.458"
ry="26.406658" />
<circle
style="display:inline;fill:#ffffff;stroke-width:0.176318"
id="path31-89"
mask="url(#mask49405)"
transform="translate(-99.990036,0.02979629)"
r="26.458334"
cy="26.458334"
cx="126.45834" />
<path
style="opacity:0.98;fill:#5f5fd3;fill-opacity:1;stroke-width:3.81317;stroke-linecap:round;stroke-miterlimit:0.4"
d="M 50.734917,51.5385 C 50.317784,51.008202 45.376222,45.855755 39.753667,40.088624 L 29.530842,29.602927 32.157037,27.108298 C 37.014258,22.494413 44.043654,17.26825 51.002109,13.097503 60.785219,7.2337198 74.185013,2.5922331 86.866814,0.67450934 92.65309,-0.20048258 104.71024,-0.37258331 110.80487,0.33282367 133.37755,2.9454414 150.98136,11.201829 167.87245,27.098183 l 2.76303,2.600302 -11.44673,11.421726 -11.44672,11.421723 -2.63001,-2.20425 C 135.80913,42.540775 123.7472,37.357565 110.13188,35.306142 105.25895,34.571936 94.151456,34.473316 89.625785,35.124073 76.006414,37.082441 65.655848,41.542025 54.928431,50.073566 c -1.679878,1.336011 -3.139997,2.429113 -3.244707,2.429113 -0.104711,0 -0.531674,-0.433881 -0.948807,-0.964179 z"
id="path117144"
transform="scale(0.26458333)" />
<path
style="opacity:0.98;fill:#5f5fd3;fill-opacity:1;stroke-width:3.81317;stroke-linecap:round;stroke-miterlimit:0.4"
d="m 86.479201,86.655988 -12.859682,-12.863304 1.72756,-1.259375 c 5.937867,-4.328648 15.716974,-7.877579 22.763988,-8.261269 5.344243,-0.290978 12.593953,1.304433 19.011433,4.183761 2.41258,1.08245 8.21218,4.752269 8.21218,5.196429 0,0.224653 -16.50779,16.711429 -23.16256,23.133076 l -2.833236,2.733985 z"
id="path117183"
transform="scale(0.26458333)" />
<path
style="opacity:0.98;fill:#5f5fd3;fill-opacity:1;stroke-width:3.81317;stroke-linecap:round;stroke-miterlimit:0.4"
d="m 151.14408,181.37289 -2.63396,-2.65165 H 99.719219 50.928317 l -2.558625,2.54155 c -2.367982,2.35218 -2.618861,2.50924 -3.367071,2.10794 -1.632484,-0.87558 -7.984339,-5.82527 -11.691442,-9.11058 l -3.811927,-3.3782 34.882231,-35.14801 c 19.185224,-19.3314 34.980859,-35.144 35.101403,-35.13912 0.120544,0.005 16.129074,15.83285 35.574514,35.17326 l 35.35534,35.16438 -2.12132,1.95782 c -4.15184,3.83183 -13.51513,11.13426 -14.27653,11.13426 -0.13027,0 -1.42214,-1.19324 -2.87081,-2.65165 z M 112.69455,143.27811 99.52528,130.10884 86.35601,143.27811 73.18674,156.44738 h 26.33854 26.33854 z"
id="path117222"
transform="scale(0.26458333)" />
<path
style="opacity:0.98;fill:#5f5fd3;fill-opacity:1;stroke-width:3.81317;stroke-linecap:round;stroke-miterlimit:0.4"
d="m 43.323744,182.02493 c -3.122315,-2.21745 -8.886633,-6.91043 -11.466851,-9.33566 l -2.129855,-2.00191 31.417516,-31.60357 c 17.279634,-17.38196 32.982312,-33.19165 34.894842,-35.13266 l 3.477325,-3.5291 35.271229,35.28278 35.27123,35.28278 -2.29809,2.12417 c -3.23874,2.99361 -8.21439,6.9674 -11.21429,8.95625 l -2.55224,1.69205 -2.04396,-1.77268 c -1.12418,-0.97498 -2.34704,-2.10872 -2.71748,-2.51941 l -0.67351,-0.74673 H 99.52196 50.484308 l -2.199537,2.47487 c -1.209746,1.36118 -2.306828,2.46959 -2.437961,2.46312 -0.131132,-0.006 -1.266512,-0.7419 -2.523066,-1.6343 z m 82.364486,-25.84451 c 0,-0.14683 -5.88666,-6.15201 -13.08147,-13.34485 L 99.52528,129.75768 86.443805,142.83557 c -7.194812,7.19284 -13.081476,13.19802 -13.081476,13.34485 0,0.14683 11.773328,0.26696 26.162951,0.26696 14.38962,0 26.16295,-0.12013 26.16295,-0.26696 z"
id="path117261"
transform="scale(0.26458333)" />
</g>
<g
inkscape:groupmode="layer"
id="layer6"
inkscape:label="Circle"
style="display:none">
<path
style="fill:#000000"
id="path50026"
sodipodi:type="arc"
sodipodi:cx="71.071762"
sodipodi:cy="34.677177"
sodipodi:rx="1.7174155"
sodipodi:ry="5.5907354"
sodipodi:start="0"
sodipodi:end="0.031086059"
sodipodi:open="true"
sodipodi:arc-type="arc"
mask="url(#mask-powermask-path-effect51273)"
d="m 72.789178,34.677177 a 1.7174155,5.5907354 0 0 1 -8.3e-4,0.173766"
inkscape:path-effect="#path-effect51273" />
<path
style="fill:#ffffff;stroke-width:0.276214"
d="m 151.08883,181.46994 -2.76213,-2.60427 -48.802077,-0.009 -48.802075,-0.009 -2.292573,2.48592 c -1.260915,1.36726 -2.431589,2.48592 -2.601499,2.48592 -0.869396,0 -9.118995,-6.36599 -13.713669,-10.58246 l -2.688104,-2.46684 34.973647,-35.11455 c 19.235503,-19.313 34.922993,-35.39075 35.029879,-35.39075 0.106889,0 16.231201,16.10588 35.663001,35.45326 l 35.33055,35.17705 -2.48592,2.35505 c -3.08951,2.92687 -7.41515,6.40509 -11.09719,8.92319 -1.54594,1.05725 -2.85105,1.91728 -2.90024,1.9112 -0.0492,-0.006 -1.33242,-1.183 -2.8516,-2.61535 z m -38.4631,-38.32188 -13.050732,-13.05073 -13.050727,13.05073 -13.050725,13.05072 h 26.101452 26.101452 z"
id="path52389"
transform="scale(0.26458333)"
inkscape:export-filename="C:\Users\jokob\path52389.png"
inkscape:export-xdpi="96"
inkscape:export-ydpi="96"
sodipodi:nodetypes="ccccssscssscsscccccccccc" />
<path
style="fill:#d40000;stroke-width:0.276214"
d="M 86.416478,86.793237 C 73.427951,73.815968 73.387119,73.801376 73.387119,73.801376 c 3.874197,-3.341721 11.025508,-6.981646 17.312424,-8.529335 2.339787,-0.576001 4.881362,-1.25628 8.810591,-1.259564 4.438736,-0.0037 8.292516,0.857843 13.253396,2.535104 4.59135,1.552325 7.8315,3.224336 11.49958,5.934101 l 1.61476,1.192897 -2.31005,2.336325 c -1.27053,1.284978 -7.22284,7.16236 -13.22736,13.060849 L 99.423152,99.796276 C 95.128284,95.409033 87.282899,87.658907 86.416478,86.793237 Z"
id="path52465"
transform="scale(0.26458333)"
sodipodi:nodetypes="sssssscsscs" />
<path
style="fill:#d40000;stroke-width:0.074168"
d="M 38.412677,13.39572 C 34.322163,9.945267 28.437517,8.4874766 22.684204,9.4993379 19.419721,10.073478 16.752307,11.410793 13.835187,13.872492 l -0.14691,0.126732 -0.587936,-0.661605 c -0.268568,-0.30222 -1.619514,-1.65761 -2.963235,-3.048642 L 7.7265561,7.8632145 7.9975963,7.5868118 C 9.8344314,5.713635 13.005888,3.476019 15.380049,2.3878744 20.659765,-0.03196726 26.24205,-0.73479764 31.856076,0.42838695 36.599757,1.4112419 40.746004,3.5106537 44.46876,7.1557672 l 0.709881,0.6950753 -0.663694,0.69037 C 44.080041,8.9935983 42.672626,10.391271 41.3963,11.655819 L 39.075708,13.955 Z"
id="path52504"
inkscape:export-filename="C:\Users\jokob\path52504.png"
inkscape:export-xdpi="96"
inkscape:export-ydpi="96"
sodipodi:nodetypes="ssscsccsssscsscs" />
<path
style="opacity:0.98;fill:#5f5fd3;fill-opacity:1;stroke-width:3.81317;stroke-linecap:round;stroke-miterlimit:0.4"
d="M 86.655143,86.478376 73.973101,73.792663 75.700647,72.517799 c 3.888483,-2.869556 11.979097,-6.234087 17.887709,-7.438714 6.781224,-1.382532 16.632394,0.1812 23.791374,3.776537 2.53147,1.271345 7.60139,4.47823 7.60139,4.808126 0,0.217537 -18.217,18.402022 -23.34018,23.298518 l -2.303755,2.201823 z"
id="path117417"
transform="scale(0.26458333)" />
<path
style="opacity:0.98;fill:#5f5fd3;fill-opacity:1;stroke-width:3.81317;stroke-linecap:round;stroke-miterlimit:0.4"
d="M 86.653362,86.476595 74.004328,73.8239 l 1.78137,-1.307646 c 4.058289,-2.979059 11.996346,-6.266814 18.081148,-7.488783 5.742499,-1.153228 13.433334,-0.173122 20.711924,2.639491 2.64803,1.02326 7.63077,3.765523 9.69377,5.334995 l 0.88241,0.67131 -6.36248,6.41376 c -3.49937,3.527567 -9.3162,9.255172 -12.92628,12.728011 l -6.563793,6.314253 z"
id="path117456"
transform="scale(0.26458333)" />
<path
style="opacity:0.98;fill:#5f5fd3;fill-opacity:1;stroke-width:3.81317;stroke-linecap:round;stroke-miterlimit:0.4"
d="M 40.755089,40.913849 29.891381,29.698485 32.789887,26.931909 C 38.664423,21.324762 48.374309,14.517657 56.038213,10.633695 66.085649,5.5417911 79.271822,1.6347929 90.224457,0.50447904 c 5.29419,-0.54636158 20.003853,-0.24145692 24.614013,0.51020386 16.55879,2.6998184 30.27274,8.3744041 42.56518,17.6127021 3.66685,2.755798 10.38919,8.484428 12.02678,10.248962 l 0.78546,0.846346 -11.22765,11.223531 -11.22764,11.223531 -2.46252,-1.977749 C 130.84681,38.585569 112.25268,33.14502 92.666988,34.792406 78.082451,36.019136 67.49078,40.200159 55.292129,49.545997 c -1.868753,1.431721 -3.459743,2.598649 -3.535534,2.593173 -0.07579,-0.0055 -5.026468,-5.056871 -11.001506,-11.225321 z"
id="path117495"
transform="scale(0.26458333)" />
</g>
<g
inkscape:groupmode="layer"
id="layer4"
inkscape:label="half circle"
style="display:inline">
<path
style="opacity:0.98;fill:#5f5fd3;fill-opacity:1;stroke-width:3.81317;stroke-linecap:round;stroke-miterlimit:0.4"
d="M 50.729651,51.530407 C 50.309622,50.995658 45.365237,45.839438 39.74213,40.072138 L 29.518298,29.586142 32.436819,26.865215 C 37.858508,21.810591 46.002106,15.887672 52.91436,11.971698 62.082793,6.7775379 75.058024,2.4602175 86.866814,0.67450934 92.666822,-0.20255914 104.7089,-0.37259245 110.83899,0.33602379 133.4335,2.9478667 150.81881,11.108766 167.8709,27.107589 l 2.76147,2.590896 -11.424,11.400559 -11.42399,11.400559 -2.65118,-2.175966 C 132.57167,40.013706 117.00056,34.697228 99.348504,34.691269 c -17.588857,-0.0059 -30.84176,4.583432 -44.420073,15.382297 -1.679878,1.336011 -3.139997,2.429113 -3.244707,2.429113 -0.104711,0 -0.534044,-0.437522 -0.954073,-0.972272 z"
id="path117300"
transform="scale(0.26458333)" />
<path
style="opacity:0.98;fill:#5f5fd3;fill-opacity:1;stroke-width:3.81317;stroke-linecap:round;stroke-miterlimit:0.4"
d="m 86.479787,86.656574 -12.860268,-12.86389 1.72756,-1.257012 c 5.92724,-4.312793 15.575223,-7.833372 22.587211,-8.242144 5.50807,-0.321098 12.64715,1.227498 19.18821,4.162273 2.41292,1.082605 8.21218,4.752294 8.21218,5.196553 0,0.223831 -14.54007,14.745171 -22.63164,22.602487 l -3.362985,3.26562 z"
id="path117339"
transform="scale(0.26458333)" />
<path
style="opacity:0.98;fill:#5f5fd3;fill-opacity:1;stroke-width:3.81317;stroke-linecap:round;stroke-miterlimit:0.4"
d="m 43.323744,182.02493 c -3.01377,-2.14036 -8.648648,-6.71423 -11.329522,-9.19625 l -2.145795,-1.98662 2.929747,-3.04309 c 1.611361,-1.6737 17.298698,-17.50163 34.860748,-35.17319 l 31.931002,-32.13009 35.244626,35.24359 35.24463,35.2436 -2.29809,2.12652 c -3.22978,2.98865 -8.20792,6.96547 -11.21429,8.95861 l -2.55224,1.69205 -2.04396,-1.77268 c -1.12418,-0.97498 -2.34704,-2.10872 -2.71748,-2.51941 l -0.67351,-0.74673 H 99.52196 50.484308 l -2.199537,2.47487 c -1.209746,1.36118 -2.306828,2.46959 -2.437961,2.46312 -0.131132,-0.006 -1.266512,-0.7419 -2.523066,-1.6343 z m 82.364486,-25.84451 c 0,-0.14683 -5.88666,-6.15201 -13.08147,-13.34485 L 99.52528,129.75768 86.443805,142.83557 c -7.194812,7.19284 -13.081476,13.19802 -13.081476,13.34485 0,0.14683 11.773328,0.26696 26.162951,0.26696 14.38962,0 26.16295,-0.12013 26.16295,-0.26696 z"
id="path117378"
transform="scale(0.26458333)" />
</g>
</svg>


View File

@@ -445,8 +445,11 @@
$('#showOfflineNumber').text(`(${offlineCount})`);
}
// Now apply UI filter based on toggles
// Now apply UI filter based on toggles (always keep root)
const filteredDevices = allDevices.filter(device => {
const isRoot = (device.devMac || '').toLowerCase() === 'internet';
if (isRoot) return true;
if (!showArchived && parseInt(device.devIsArchived) === 1) return false;
if (!showOffline && parseInt(device.devPresentLastScan) === 0) return false;
return true;
@@ -569,6 +572,11 @@ function getChildren(node, list, path, visited = [])
// ---------------------------------------------------------------------------
function getHierarchy()
{
// reset counters before rebuilding the hierarchy
leafNodesCount = 0;
visibleNodesCount = 0;
parentNodesCount = 0;
let internetNode = null;
for(i in deviceListGlobal)
@@ -709,18 +717,23 @@ function initTree(myHierarchy)
// calculate the drawing area based on the tree width and available screen size
let baseFontSize = parseFloat($('html').css('font-size'));
let treeAreaHeight = ($(window).height() - 155); ;
let minNodeWidth = 60 // min safe node width not breaking the tree
// calculate the font size of the leaf nodes to fit everything into the tree area
leafNodesCount == 0 ? 1 : leafNodesCount;
emSize = pxToEm((treeAreaHeight/(leafNodesCount)).toFixed(2));
let screenWidthEm = pxToEm($('.networkTable').width()-15);
// let screenWidthEm = pxToEm($('.networkTable').width()-15);
let minTreeWidthPx = parentNodesCount * minNodeWidth;
let actualWidthPx = $('.networkTable').width() - 15;
// init the drawing area size
$("#networkTree").attr('style', `height:${treeAreaHeight}px; width:${emToPx(screenWidthEm)}px`)
let finalWidthPx = Math.max(actualWidthPx, minTreeWidthPx);
// handle canvas and node size if only a few nodes
// override original value
let screenWidthEm = pxToEm(finalWidthPx);
// handle canvas and node size if only a few nodes
emSize > 1 ? emSize = 1 : emSize = emSize;
let nodeHeightPx = emToPx(emSize*1);
@@ -728,6 +741,12 @@ function initTree(myHierarchy)
// handle if only a few nodes
nodeWidthPx > 160 ? nodeWidthPx = 160 : nodeWidthPx = nodeWidthPx;
if (nodeWidthPx < minNodeWidth) nodeWidthPx = minNodeWidth; // minimum safe width
console.log("Calculated nodeWidthPx =", nodeWidthPx, "emSize =", emSize , " screenWidthEm:", screenWidthEm, " emToPx(screenWidthEm):" , emToPx(screenWidthEm));
// init the drawing area size
$("#networkTree").attr('style', `height:${treeAreaHeight}px; width:${emToPx(screenWidthEm)}px`)
console.log(Treeviz);

View File

@@ -1,7 +1,6 @@
## Overview
A plugin allowing for importing devices from the PiHole database. This is an import plugin using an SQLite database as a source.
A plugin allowing for importing devices from the PiHole database. This is an import plugin using an SQLite database as a source.
### Usage
@@ -9,3 +8,44 @@ A plugin allowing for importing devices from the PiHole database. This is an imp
- `PIHOLE_RUN` is used to enable the import by setting it e.g. to `schedule` or `once` (pre-set to `disabled`)
- `PIHOLE_RUN_SCHD` is to configure how often the plugin is executed if `PIHOLE_RUN` is set to `schedule` (pre-set to every 30 min)
- `PIHOLE_DB_PATH` setting must match the location of your PiHole database (pre-set to `/etc/pihole/pihole-FTL.db`)
## Troubleshooting
### Permission problems:
NetAlertX cannot read Pi-hole DB (`/etc/pihole/pihole-FTL.db`) due to permissions:
```
[Plugins] ⚠ ERROR: ATTACH DATABASE failed with SQL ERROR: unable to open database: /etc/pihole/pihole-FTL.db
```
#### Solution:
1. **Mount full Pi-hole directory (read-only):**
```yaml
volumes:
- /etc/pihole:/etc/pihole:ro
```
2. **Add NetAlertX to Pi-hole group:**
```yaml
group_add:
- 1001 # check with `getent group pihole`
```
**Verify:**
```bash
docker exec -it netalertx id
# groups=1001,... ✅ pihole group included
```
#### Notes:
* Avoid mounting single DB files.
* Keep mount read-only (`:ro`) to protect Pi-hole data.
* Use `group_add` instead of chmod (a GID lookup sketch follows below).
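
A minimal sketch for looking up the GID to use in `group_add`, run on the Docker host (this assumes the group is literally named `pihole`):

```bash
# Print only the numeric GID of the pihole group
getent group pihole | cut -d: -f3
```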

View File

View File

@@ -7,8 +7,8 @@ if [ ! -f "${NETALERTX_CONFIG}/app.conf" ]; then
>&2 echo "ERROR: Failed to create config directory ${NETALERTX_CONFIG}"
exit 1
}
cp /app/back/app.conf "${NETALERTX_CONFIG}/app.conf" || {
>&2 echo "ERROR: Failed to copy default config to ${NETALERTX_CONFIG}/app.conf"
install -m 600 -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} /app/back/app.conf "${NETALERTX_CONFIG}/app.conf" || {
>&2 echo "ERROR: Failed to deploy default config to ${NETALERTX_CONFIG}/app.conf"
exit 2
}
RESET=$(printf '\033[0m')
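
The switch from `cp` to `install` bundles the copy, ownership, and mode changes into a single step; it is roughly equivalent to the following sequence (a sketch only, using the literal UID/GID documented elsewhere in this compare):

```sh
# Approximately what `install -m 600 -o <user> -g <group> src dst` replaces:
cp /app/back/app.conf "${NETALERTX_CONFIG}/app.conf"
chown 20211:20211 "${NETALERTX_CONFIG}/app.conf"
chmod 600 "${NETALERTX_CONFIG}/app.conf"
```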

View File

@@ -1,32 +1,57 @@
#!/bin/sh
# This script checks if the database file exists, and if not, creates it with the initial schema.
# It is intended to be run at the first start of the application.
# Ensures the database exists, or creates a new one on first run.
# Intended to run only at initial startup.
# If ALWAYS_FRESH_INSTALL is true, remove the database to force a rebuild.
if [ "${ALWAYS_FRESH_INSTALL}" = "true" ]; then
if [ -f "${NETALERTX_DB_FILE}" ]; then
# Provide feedback to the user.
>&2 echo "INFO: ALWAYS_FRESH_INSTALL is true. Removing existing database to force a fresh installation."
rm -f "${NETALERTX_DB_FILE}" "${NETALERTX_DB_FILE}-shm" "${NETALERTX_DB_FILE}-wal"
set -eu
YELLOW=$(printf '\033[1;33m')
CYAN=$(printf '\033[1;36m')
RED=$(printf '\033[1;31m')
RESET=$(printf '\033[0m')
# Ensure DB folder exists
if [ ! -d "${NETALERTX_DB}" ]; then
if ! mkdir -p "${NETALERTX_DB}"; then
>&2 printf "%s" "${RED}"
>&2 cat <<EOF
══════════════════════════════════════════════════════════════════════════════
❌ Error creating DB folder in: ${NETALERTX_DB}
A database directory is required for proper operation; however, there appear to be
insufficient permissions on this mount, or it is otherwise inaccessible.
More info: https://github.com/jokob-sk/NetAlertX/blob/main/docs/FILE_PERMISSIONS.md
══════════════════════════════════════════════════════════════════════════════
EOF
>&2 printf "%s" "${RESET}"
exit 1
fi
# Otherwise, if the db exists, exit.
elif [ -f "${NETALERTX_DB_FILE}" ]; then
chmod 700 "${NETALERTX_DB}" 2>/dev/null || true
fi
# Fresh rebuild requested
if [ "${ALWAYS_FRESH_INSTALL:-false}" = "true" ] && [ -f "${NETALERTX_DB_FILE}" ]; then
>&2 echo "INFO: ALWAYS_FRESH_INSTALL enabled — removing existing database."
rm -f "${NETALERTX_DB_FILE}" "${NETALERTX_DB_FILE}-shm" "${NETALERTX_DB_FILE}-wal"
fi
# If file exists now, nothing to do
if [ -f "${NETALERTX_DB_FILE}" ]; then
exit 0
fi
CYAN=$(printf '\033[1;36m')
RESET=$(printf '\033[0m')
>&2 printf "%s" "${CYAN}"
>&2 cat <<EOF
══════════════════════════════════════════════════════════════════════════════
🆕 First run detected. Building initial database schema in ${NETALERTX_DB_FILE}.
🆕 First run detected — building initial database at: ${NETALERTX_DB_FILE}
Do not interrupt this step. Once complete, consider backing up the fresh
database before onboarding sensitive networks.
Do not interrupt this step. When complete, consider backing up the fresh
DB before onboarding sensitive or critical networks.
══════════════════════════════════════════════════════════════════════════════
EOF
>&2 printf "%s" "${RESET}"
# Write all text to db file until we see "end-of-database-schema"
sqlite3 "${NETALERTX_DB_FILE}" <<'end-of-database-schema'
CREATE TABLE Events (eve_MAC STRING (50) NOT NULL COLLATE NOCASE, eve_IP STRING (50) NOT NULL COLLATE NOCASE, eve_DateTime DATETIME NOT NULL, eve_EventType STRING (30) NOT NULL COLLATE NOCASE, eve_AdditionalInfo STRING (250) DEFAULT (''), eve_PendingAlertEmail BOOLEAN NOT NULL CHECK (eve_PendingAlertEmail IN (0, 1)) DEFAULT (1), eve_PairEventRowid INTEGER);
@@ -72,8 +97,9 @@ CREATE TABLE Devices (
devSite TEXT,
devSSID TEXT,
devSyncHubNode TEXT,
devSourcePlugin TEXT
, "devCustomProps" TEXT);
devSourcePlugin TEXT,
devFQDN TEXT,
"devCustomProps" TEXT);
CREATE TABLE IF NOT EXISTS "Settings" (
"setKey" TEXT,
"setName" TEXT,
@@ -91,7 +117,7 @@ CREATE TABLE IF NOT EXISTS "Parameters" (
);
CREATE TABLE Plugins_Objects(
"Index" INTEGER,
Plugin TEXT NOT NULL,
Plugin TEXT NOT NULL,
Object_PrimaryID TEXT NOT NULL,
Object_SecondaryID TEXT NOT NULL,
DateTimeCreated TEXT NOT NULL,
@@ -164,7 +190,7 @@ CREATE TABLE Plugins_Language_Strings(
Extra TEXT NOT NULL,
PRIMARY KEY("Index" AUTOINCREMENT)
);
CREATE TABLE CurrentScan (
CREATE TABLE CurrentScan (
cur_MAC STRING(50) NOT NULL COLLATE NOCASE,
cur_IP STRING(50) NOT NULL COLLATE NOCASE,
cur_Vendor STRING(250),
@@ -191,11 +217,11 @@ CREATE TABLE IF NOT EXISTS "AppEvents" (
"ObjectPrimaryID" TEXT,
"ObjectSecondaryID" TEXT,
"ObjectForeignKey" TEXT,
"ObjectIndex" TEXT,
"ObjectIsNew" BOOLEAN,
"ObjectIsArchived" BOOLEAN,
"ObjectIndex" TEXT,
"ObjectIsNew" BOOLEAN,
"ObjectIsArchived" BOOLEAN,
"ObjectStatusColumn" TEXT,
"ObjectStatus" TEXT,
"ObjectStatus" TEXT,
"AppEventType" TEXT,
"Helper1" TEXT,
"Helper2" TEXT,
@@ -233,21 +259,21 @@ CREATE INDEX IDX_dev_Favorite ON Devices (devFavorite);
CREATE INDEX IDX_dev_LastIP ON Devices (devLastIP);
CREATE INDEX IDX_dev_NewDevice ON Devices (devIsNew);
CREATE INDEX IDX_dev_Archived ON Devices (devIsArchived);
CREATE VIEW Events_Devices AS
SELECT *
FROM Events
CREATE VIEW Events_Devices AS
SELECT *
FROM Events
LEFT JOIN Devices ON eve_MAC = devMac
/* Events_Devices(eve_MAC,eve_IP,eve_DateTime,eve_EventType,eve_AdditionalInfo,eve_PendingAlertEmail,eve_PairEventRowid,devMac,devName,devOwner,devType,devVendor,devFavorite,devGroup,devComments,devFirstConnection,devLastConnection,devLastIP,devStaticIP,devScan,devLogEvents,devAlertEvents,devAlertDown,devSkipRepeated,devLastNotification,devPresentLastScan,devIsNew,devLocation,devIsArchived,devParentMAC,devParentPort,devIcon,devGUID,devSite,devSSID,devSyncHubNode,devSourcePlugin,devCustomProps) */;
CREATE VIEW LatestEventsPerMAC AS
WITH RankedEvents AS (
SELECT
SELECT
e.*,
ROW_NUMBER() OVER (PARTITION BY e.eve_MAC ORDER BY e.eve_DateTime DESC) AS row_num
FROM Events AS e
)
SELECT
e.*,
d.*,
SELECT
e.*,
d.*,
c.*
FROM RankedEvents AS e
LEFT JOIN Devices AS d ON e.eve_MAC = d.devMac
@@ -286,11 +312,11 @@ CREATE VIEW Convert_Events_to_Sessions AS SELECT EVE1.eve_MAC,
CREATE TRIGGER "trg_insert_devices"
AFTER INSERT ON "Devices"
WHEN NOT EXISTS (
SELECT 1 FROM AppEvents
WHERE AppEventProcessed = 0
SELECT 1 FROM AppEvents
WHERE AppEventProcessed = 0
AND ObjectType = 'Devices'
AND ObjectGUID = NEW.devGUID
AND ObjectStatus = CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END
AND ObjectStatus = CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END
AND AppEventType = 'insert'
)
BEGIN
@@ -311,18 +337,18 @@ CREATE TRIGGER "trg_insert_devices"
"AppEventType"
)
VALUES (
lower(
hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || '4' ||
substr(hex( randomblob(2)), 2) || '-' ||
hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || '4' ||
substr(hex( randomblob(2)), 2) || '-' ||
substr('AB89', 1 + (abs(random()) % 4) , 1) ||
substr(hex(randomblob(2)), 2) || '-' ||
substr(hex(randomblob(2)), 2) || '-' ||
hex(randomblob(6))
)
,
DATETIME('now'),
FALSE,
'Devices',
,
DATETIME('now'),
FALSE,
'Devices',
NEW.devGUID, -- ObjectGUID
NEW.devMac, -- ObjectPrimaryID
NEW.devLastIP, -- ObjectSecondaryID
@@ -338,11 +364,11 @@ CREATE TRIGGER "trg_insert_devices"
CREATE TRIGGER "trg_update_devices"
AFTER UPDATE ON "Devices"
WHEN NOT EXISTS (
SELECT 1 FROM AppEvents
WHERE AppEventProcessed = 0
SELECT 1 FROM AppEvents
WHERE AppEventProcessed = 0
AND ObjectType = 'Devices'
AND ObjectGUID = NEW.devGUID
AND ObjectStatus = CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END
AND ObjectStatus = CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END
AND AppEventType = 'update'
)
BEGIN
@@ -363,18 +389,18 @@ CREATE TRIGGER "trg_update_devices"
"AppEventType"
)
VALUES (
lower(
hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || '4' ||
substr(hex( randomblob(2)), 2) || '-' ||
hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || '4' ||
substr(hex( randomblob(2)), 2) || '-' ||
substr('AB89', 1 + (abs(random()) % 4) , 1) ||
substr(hex(randomblob(2)), 2) || '-' ||
substr(hex(randomblob(2)), 2) || '-' ||
hex(randomblob(6))
)
,
DATETIME('now'),
FALSE,
'Devices',
,
DATETIME('now'),
FALSE,
'Devices',
NEW.devGUID, -- ObjectGUID
NEW.devMac, -- ObjectPrimaryID
NEW.devLastIP, -- ObjectSecondaryID
@@ -390,11 +416,11 @@ CREATE TRIGGER "trg_update_devices"
CREATE TRIGGER "trg_delete_devices"
AFTER DELETE ON "Devices"
WHEN NOT EXISTS (
SELECT 1 FROM AppEvents
WHERE AppEventProcessed = 0
SELECT 1 FROM AppEvents
WHERE AppEventProcessed = 0
AND ObjectType = 'Devices'
AND ObjectGUID = OLD.devGUID
AND ObjectStatus = CASE WHEN OLD.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END
AND ObjectStatus = CASE WHEN OLD.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END
AND AppEventType = 'delete'
)
BEGIN
@@ -415,18 +441,18 @@ CREATE TRIGGER "trg_delete_devices"
"AppEventType"
)
VALUES (
lower(
hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || '4' ||
substr(hex( randomblob(2)), 2) || '-' ||
hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || '4' ||
substr(hex( randomblob(2)), 2) || '-' ||
substr('AB89', 1 + (abs(random()) % 4) , 1) ||
substr(hex(randomblob(2)), 2) || '-' ||
substr(hex(randomblob(2)), 2) || '-' ||
hex(randomblob(6))
)
,
DATETIME('now'),
FALSE,
'Devices',
,
DATETIME('now'),
FALSE,
'Devices',
OLD.devGUID, -- ObjectGUID
OLD.devMac, -- ObjectPrimaryID
OLD.devLastIP, -- ObjectSecondaryID

View File

@@ -0,0 +1,35 @@
#!/bin/sh
# override-config.sh - Handles APP_CONF_OVERRIDE environment variable
OVERRIDE_FILE="${NETALERTX_CONFIG}/app_conf_override.json"
# Ensure config directory exists
mkdir -p "$(dirname "$NETALERTX_CONFIG")" || {
>&2 echo "ERROR: Failed to create config directory $(dirname "$NETALERTX_CONFIG")"
exit 1
}
# Remove old override file if it exists
rm -f "$OVERRIDE_FILE"
# Check if APP_CONF_OVERRIDE is set
if [ -z "$APP_CONF_OVERRIDE" ]; then
>&2 echo "APP_CONF_OVERRIDE is not set. Skipping override config file creation."
else
# Save the APP_CONF_OVERRIDE env variable as a JSON file
echo "$APP_CONF_OVERRIDE" > "$OVERRIDE_FILE" || {
>&2 echo "ERROR: Failed to write override config to $OVERRIDE_FILE"
exit 2
}
RESET=$(printf '\033[0m')
>&2 cat <<EOF
══════════════════════════════════════════════════════════════════════════════
📝 APP_CONF_OVERRIDE detected. Configuration written to $OVERRIDE_FILE.
Make sure the JSON content is correct before starting the application.
══════════════════════════════════════════════════════════════════════════════
EOF
>&2 printf "%s" "${RESET}"
fi
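
A hedged usage sketch for the script above: pass the variable as a JSON string when starting the container (the setting key shown is illustrative; any valid settings JSON would be written to `app_conf_override.json`):

```sh
docker run -e APP_CONF_OVERRIDE='{"GRAPHQL_PORT":"20212"}' \
  -v /local_data_dir:/data \
  ghcr.io/jokob-sk/netalertx:latest
```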

View File

@@ -5,22 +5,22 @@
# Define ports from ENV variables, applying defaults
PORT_APP=${PORT:-20211}
PORT_GQL=${APP_CONF_OVERRIDE:-${GRAPHQL_PORT:-20212}}
# PORT_GQL=${APP_CONF_OVERRIDE:-${GRAPHQL_PORT:-20212}}
# Check if ports are configured to be the same
if [ "$PORT_APP" -eq "$PORT_GQL" ]; then
cat <<EOF
══════════════════════════════════════════════════════════════════════════════
⚠️ Configuration Warning: Both ports are set to ${PORT_APP}.
# # Check if ports are configured to be the same
# if [ "$PORT_APP" -eq "$PORT_GQL" ]; then
# cat <<EOF
# ══════════════════════════════════════════════════════════════════════════════
# ⚠️ Configuration Warning: Both ports are set to ${PORT_APP}.
The Application port (\$PORT) and the GraphQL API port
(\$APP_CONF_OVERRIDE or \$GRAPHQL_PORT) are configured to use the
same port. This will cause a conflict.
# The Application port (\$PORT) and the GraphQL API port
# (\$APP_CONF_OVERRIDE or \$GRAPHQL_PORT) are configured to use the
# same port. This will cause a conflict.
https://github.com/jokob-sk/NetAlertX/blob/main/docs/docker-troubleshooting/port-conflicts.md
══════════════════════════════════════════════════════════════════════════════
EOF
fi
# https://github.com/jokob-sk/NetAlertX/blob/main/docs/docker-troubleshooting/port-conflicts.md
# ══════════════════════════════════════════════════════════════════════════════
# EOF
# fi
# Check for netstat (usually provided by busybox)
if ! command -v netstat >/dev/null 2>&1; then
@@ -53,17 +53,17 @@ if echo "$LISTENING_PORTS" | grep -q ":${PORT_APP}$"; then
EOF
fi
# Check GraphQL Port
# We add a check to avoid double-warning if ports are identical AND in use
if [ "$PORT_APP" -ne "$PORT_GQL" ] && echo "$LISTENING_PORTS" | grep -q ":${PORT_GQL}$"; then
cat <<EOF
══════════════════════════════════════════════════════════════════════════════
⚠️ Port Warning: GraphQL API port ${PORT_GQL} is already in use.
# # Check GraphQL Port
# # We add a check to avoid double-warning if ports are identical AND in use
# if [ "$PORT_APP" -ne "$PORT_GQL" ] && echo "$LISTENING_PORTS" | grep -q ":${PORT_GQL}$"; then
# cat <<EOF
# ══════════════════════════════════════════════════════════════════════════════
# ⚠️ Port Warning: GraphQL API port ${PORT_GQL} is already in use.
The GraphQL API (defined by \$APP_CONF_OVERRIDE or \$GRAPHQL_PORT)
may fail to start.
# The GraphQL API (defined by \$APP_CONF_OVERRIDE or \$GRAPHQL_PORT)
# may fail to start.
https://github.com/jokob-sk/NetAlertX/blob/main/docs/docker-troubleshooting/port-conflicts.md
══════════════════════════════════════════════════════════════════════════════
EOF
fi
# https://github.com/jokob-sk/NetAlertX/blob/main/docs/docker-troubleshooting/port-conflicts.md
# ══════════════════════════════════════════════════════════════════════════════
# EOF
# fi
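
The checks disabled above grepped netstat output for listening sockets. For reference, the same probe can be written without netstat by attempting to bind the port; this is a sketch, assuming it runs in the same network namespace as the container:

import socket

def port_in_use(port, host="0.0.0.0"):
    # A failed bind usually means another process is already listening.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True

# Defaults mirror PORT_APP (20211) and the GraphQL port (20212) above.
print(port_in_use(20211), port_in_use(20212))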


@@ -30,4 +30,3 @@ urllib3
httplib2
gunicorn
git+https://github.com/foreign-sub/aiofreepybox.git
mcp


@@ -2,19 +2,12 @@ import threading
import sys
import os
from flask import Flask, request, jsonify, Response, stream_with_context
import json
import uuid
import queue
import requests
import logging
from datetime import datetime, timedelta
from models.device_instance import DeviceInstance # noqa: E402
from flask import Flask, request, jsonify, Response
from flask_cors import CORS
# Register NetAlertX directories
INSTALL_PATH = os.getenv("NETALERTX_APP", "/app")
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
sys.path.extend([f"{INSTALL_PATH}/server"])
from logger import mylog # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
@@ -70,9 +63,6 @@ from .dbquery_endpoint import read_query, write_query, update_query, delete_query
from .sync_endpoint import handle_sync_post, handle_sync_get # noqa: E402 [flake8 lint suppression]
from .logs_endpoint import clean_log # noqa: E402 [flake8 lint suppression]
from models.user_events_queue_instance import UserEventsQueueInstance # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from models.plugin_object_instance import PluginObjectInstance # noqa: E402 [flake8 lint suppression]
from plugin_helper import is_mac # noqa: E402 [flake8 lint suppression]
from messaging.in_app import ( # noqa: E402 [flake8 lint suppression]
write_notification,
mark_all_notifications_read,
@@ -81,14 +71,9 @@ from messaging.in_app import ( # noqa: E402 [flake8 lint suppression]
delete_notification,
mark_notification_as_read
)
from .tools_routes import openapi_spec as tools_openapi_spec # noqa: E402 [flake8 lint suppression]
# tools and mcp routes have been moved into this module (api_server_start)
# Flask application
app = Flask(__name__)
# Register Blueprints
# No separate blueprints for tools or mcp - routes are registered below
CORS(
app,
resources={
@@ -103,220 +88,16 @@ CORS(
r"/messaging/*": {"origins": "*"},
r"/events/*": {"origins": "*"},
r"/logs/*": {"origins": "*"},
r"/api/tools/*": {"origins": "*"}
r"/auth/*": {"origins": "*"}
},
supports_credentials=True,
allow_headers=["Authorization", "Content-Type"],
)
# -----------------------------------------------
# DB model instances for helper usage
# -----------------------------------------------
db_helper = DB()
db_helper.open()
device_handler = DeviceInstance(db_helper)
plugin_object_handler = PluginObjectInstance(db_helper)
# -------------------------------------------------------------------------------
# MCP bridge variables + helpers (moved from mcp_routes)
# -------------------------------------------------------------------------------
mcp_sessions = {}
mcp_sessions_lock = threading.Lock()
mcp_openapi_spec_cache = None
BACKEND_PORT = get_setting_value("GRAPHQL_PORT")
API_BASE_URL = f"http://localhost:{BACKEND_PORT}/api/tools"
def get_openapi_spec_local():
global mcp_openapi_spec_cache
if mcp_openapi_spec_cache:
return mcp_openapi_spec_cache
try:
resp = requests.get(f"{API_BASE_URL}/openapi.json", timeout=10)
resp.raise_for_status()
mcp_openapi_spec_cache = resp.json()
return mcp_openapi_spec_cache
except Exception as e:
mylog('minimal', [f"Error fetching OpenAPI spec: {e}"])
return None
def map_openapi_to_mcp_tools(spec):
tools = []
if not spec or 'paths' not in spec:
return tools
for path, methods in spec['paths'].items():
for method, details in methods.items():
if 'operationId' in details:
tool = {
'name': details['operationId'],
'description': details.get('description', details.get('summary', '')),
'inputSchema': {'type': 'object', 'properties': {}, 'required': []},
}
if 'requestBody' in details:
content = details['requestBody'].get('content', {})
if 'application/json' in content:
schema = content['application/json'].get('schema', {})
tool['inputSchema'] = schema.copy()
if 'properties' not in tool['inputSchema']:
tool['inputSchema']['properties'] = {}
if 'parameters' in details:
for param in details['parameters']:
if param.get('in') == 'query':
tool['inputSchema']['properties'][param['name']] = {
'type': param.get('schema', {}).get('type', 'string'),
'description': param.get('description', ''),
}
if param.get('required'):
tool['inputSchema'].setdefault('required', []).append(param['name'])
tools.append(tool)
return tools
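# Illustration (not part of the module): a minimal round trip through
# map_openapi_to_mcp_tools. The spec fragment below is fabricated; with no
# requestBody or parameters, the tool keeps the default empty inputSchema.
def _example_map_openapi_to_mcp_tools():
    sample_spec = {
        'paths': {
            '/set_device_alias': {
                'post': {
                    'operationId': 'set_device_alias',
                    'summary': 'Set Device Alias',
                }
            }
        }
    }
    # -> [{'name': 'set_device_alias', 'description': 'Set Device Alias',
    #      'inputSchema': {'type': 'object', 'properties': {}, 'required': []}}]
    return map_openapi_to_mcp_tools(sample_spec)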
def process_mcp_request(data):
method = data.get('method')
msg_id = data.get('id')
response = None
if method == 'initialize':
response = {
'jsonrpc': '2.0',
'id': msg_id,
'result': {
'protocolVersion': '2024-11-05',
'capabilities': {'tools': {}},
'serverInfo': {'name': 'NetAlertX', 'version': '1.0.0'},
},
}
elif method == 'notifications/initialized':
pass
elif method == 'tools/list':
spec = get_openapi_spec_local()
tools = map_openapi_to_mcp_tools(spec)
response = {'jsonrpc': '2.0', 'id': msg_id, 'result': {'tools': tools}}
elif method == 'tools/call':
params = data.get('params', {})
tool_name = params.get('name')
tool_args = params.get('arguments', {})
spec = get_openapi_spec_local()
target_path = None
target_method = None
if spec and 'paths' in spec:
for path, methods in spec['paths'].items():
for m, details in methods.items():
if details.get('operationId') == tool_name:
target_path = path
target_method = m.upper()
break
if target_path:
break
if target_path:
try:
headers = {'Content-Type': 'application/json'}
if 'Authorization' in request.headers:
headers['Authorization'] = request.headers['Authorization']
url = f"{API_BASE_URL}{target_path}"
if target_method == 'POST':
api_res = requests.post(url, json=tool_args, headers=headers, timeout=30)
elif target_method == 'GET':
api_res = requests.get(url, params=tool_args, headers=headers, timeout=30)
else:
api_res = None
if api_res:
content = []
try:
json_content = api_res.json()
content.append({'type': 'text', 'text': json.dumps(json_content, indent=2)})
except Exception:
content.append({'type': 'text', 'text': api_res.text})
is_error = api_res.status_code >= 400
response = {'jsonrpc': '2.0', 'id': msg_id, 'result': {'content': content, 'isError': is_error}}
else:
response = {'jsonrpc': '2.0', 'id': msg_id, 'error': {'code': -32601, 'message': f"Method {target_method} not supported"}}
except Exception as e:
response = {'jsonrpc': '2.0', 'id': msg_id, 'result': {'content': [{'type': 'text', 'text': f"Error calling tool: {str(e)}"}], 'isError': True}}
else:
response = {'jsonrpc': '2.0', 'id': msg_id, 'error': {'code': -32601, 'message': f"Tool {tool_name} not found"}}
elif method == 'ping':
response = {'jsonrpc': '2.0', 'id': msg_id, 'result': {}}
else:
if msg_id:
response = {'jsonrpc': '2.0', 'id': msg_id, 'error': {'code': -32601, 'message': 'Method not found'}}
return response
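# Illustration (fabricated payload): the JSON-RPC initialize handshake as
# process_mcp_request handles it above.
def _example_initialize_handshake():
    req = {'jsonrpc': '2.0', 'id': 1, 'method': 'initialize', 'params': {}}
    # -> {'jsonrpc': '2.0', 'id': 1, 'result': {'protocolVersion': '2024-11-05',
    #     'capabilities': {'tools': {}},
    #     'serverInfo': {'name': 'NetAlertX', 'version': '1.0.0'}}}
    return process_mcp_request(req)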
@app.route('/api/mcp/sse', methods=['GET', 'POST'])
def api_mcp_sse():
if request.method == 'POST':
try:
data = request.get_json(silent=True)
if data and 'method' in data and 'jsonrpc' in data:
response = process_mcp_request(data)
if response:
return jsonify(response)
else:
return '', 202
except Exception as e:
logging.getLogger(__name__).debug(f'SSE POST processing error: {e}')
return jsonify({'status': 'ok', 'message': 'MCP SSE endpoint active'}), 200
session_id = uuid.uuid4().hex
q = queue.Queue()
with mcp_sessions_lock:
mcp_sessions[session_id] = q
def stream():
yield f"event: endpoint\ndata: /api/mcp/messages?session_id={session_id}\n\n"
try:
while True:
try:
message = q.get(timeout=20)
yield f"event: message\ndata: {json.dumps(message)}\n\n"
except queue.Empty:
yield ": keep-alive\n\n"
except GeneratorExit:
with mcp_sessions_lock:
if session_id in mcp_sessions:
del mcp_sessions[session_id]
return Response(stream_with_context(stream()), mimetype='text/event-stream')
@app.route('/api/mcp/messages', methods=['POST'])
def api_mcp_messages():
session_id = request.args.get('session_id')
if not session_id:
return jsonify({"error": "Missing session_id"}), 400
with mcp_sessions_lock:
if session_id not in mcp_sessions:
return jsonify({"error": "Session not found"}), 404
q = mcp_sessions[session_id]
data = request.json
if not data:
return jsonify({"error": "Invalid JSON"}), 400
response = process_mcp_request(data)
if response:
q.put(response)
return jsonify({"status": "accepted"}), 202
# -------------------------------------------------------------------
# Custom handler for 404 - Route not found
# -------------------------------------------------------------------
@app.before_request
def log_request_info():
"""Log details of every incoming request."""
# Filter out noisy requests if needed, but user asked for drastic logging
mylog("verbose", [f"[HTTP] {request.method} {request.path} from {request.remote_addr}"])
# Filter sensitive headers before logging
safe_headers = {k: v for k, v in request.headers if k.lower() not in ('authorization', 'cookie', 'x-api-key')}
mylog("debug", [f"[HTTP] Headers: {safe_headers}"])
if request.method == "POST":
# Be careful with large bodies, but log first 1000 chars
data = request.get_data(as_text=True)
mylog("debug", [f"[HTTP] Body length: {len(data)} chars"])
@app.errorhandler(404)
def not_found(error):
response = {
@@ -365,183 +146,6 @@ def graphql_endpoint():
return jsonify(response)
# --------------------------
# Tools endpoints (moved from tools_routes)
# --------------------------
@app.route('/api/tools/trigger_scan', methods=['POST'])
def api_trigger_scan():
if not is_authorized():
return jsonify({"error": "Unauthorized"}), 401
data = request.get_json() or {}
scan_type = data.get('scan_type', 'nmap_fast')
# Map requested scan type to plugin prefix
plugin_prefix = None
if scan_type in ['nmap_fast', 'nmap_deep']:
plugin_prefix = 'NMAPDEV'
elif scan_type == 'arp':
plugin_prefix = 'ARPSCAN'
else:
return jsonify({"error": "Invalid scan_type. Must be 'arp', 'nmap_fast', or 'nmap_deep'"}), 400
queue_instance = UserEventsQueueInstance()
action = f"run|{plugin_prefix}"
success, message = queue_instance.add_event(action)
if success:
return jsonify({"success": True, "message": f"Triggered plugin {plugin_prefix} via ad-hoc queue."})
else:
return jsonify({"success": False, "error": message}), 500
@app.route('/api/tools/list_devices', methods=['POST'])
def api_tools_list_devices():
if not is_authorized():
return jsonify({"error": "Unauthorized"}), 401
return get_all_devices()
@app.route('/api/tools/get_device_info', methods=['POST'])
def api_tools_get_device_info():
if not is_authorized():
return jsonify({"error": "Unauthorized"}), 401
data = request.get_json(silent=True) or {}
query = data.get('query')
if not query:
return jsonify({"error": "Missing 'query' parameter"}), 400
# if MAC -> device endpoint
if is_mac(query):
return get_device_data(query)
# search by name or IP
matches = device_handler.search(query)
if not matches:
return jsonify({"message": "No devices found"}), 404
return jsonify(matches)
@app.route('/api/tools/get_latest_device', methods=['POST'])
def api_tools_get_latest_device():
if not is_authorized():
return jsonify({"error": "Unauthorized"}), 401
latest = device_handler.getLatest()
if not latest:
return jsonify({"message": "No devices found"}), 404
return jsonify([latest])
@app.route('/api/tools/get_open_ports', methods=['POST'])
def api_tools_get_open_ports():
if not is_authorized():
return jsonify({"error": "Unauthorized"}), 401
data = request.get_json(silent=True) or {}
target = data.get('target')
if not target:
return jsonify({"error": "Target is required"}), 400
# If MAC is provided, use plugin objects to get port entries
if is_mac(target):
entries = plugin_object_handler.getByPrimary('NMAP', target.lower())
open_ports = []
for e in entries:
try:
port = int(e.get('Object_SecondaryID', 0))
except (ValueError, TypeError):
continue
service = e.get('Watched_Value2', 'unknown')
open_ports.append({"port": port, "service": service})
return jsonify({"success": True, "target": target, "open_ports": open_ports, "raw": entries})
# If IP provided, try to resolve to MAC and proceed
# Use device handler to resolve IP
device = device_handler.getByIP(target)
if device and device.get('devMac'):
mac = device.get('devMac')
entries = plugin_object_handler.getByPrimary('NMAP', mac.lower())
open_ports = []
for e in entries:
try:
port = int(e.get('Object_SecondaryID', 0))
except (ValueError, TypeError):
continue
service = e.get('Watched_Value2', 'unknown')
open_ports.append({"port": port, "service": service})
return jsonify({"success": True, "target": target, "open_ports": open_ports, "raw": entries})
# No plugin data found; as fallback use nettools nmap_scan (may run subprocess)
# Note: Prefer plugin data (NMAP) when available
res = nmap_scan(target, 'fast')
return res
@app.route('/api/tools/get_network_topology', methods=['GET'])
def api_tools_get_network_topology():
if not is_authorized():
return jsonify({"error": "Unauthorized"}), 401
topo = device_handler.getNetworkTopology()
return jsonify(topo)
@app.route('/api/tools/get_recent_alerts', methods=['POST'])
def api_tools_get_recent_alerts():
if not is_authorized():
return jsonify({"error": "Unauthorized"}), 401
data = request.get_json(silent=True) or {}
hours = int(data.get('hours', 24))
# Reuse get_events() - which returns a Flask response with JSON containing 'events'
res = get_events()
events_json = res.get_json() if hasattr(res, 'get_json') else None
events = events_json.get('events', []) if events_json else []
cutoff = datetime.now() - timedelta(hours=hours)
filtered = [e for e in events if 'eve_DateTime' in e and datetime.strptime(e['eve_DateTime'], '%Y-%m-%d %H:%M:%S') > cutoff]
return jsonify(filtered)
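# Illustration (fabricated event): the cutoff filter above keeps only events
# whose eve_DateTime parses to a timestamp inside the lookback window.
def _example_recent_alerts_filter(hours=24):
    events = [{'eve_DateTime': '2025-12-07 10:00:00', 'eve_EventType': 'New Device'}]
    cutoff = datetime.now() - timedelta(hours=hours)
    return [e for e in events
            if 'eve_DateTime' in e
            and datetime.strptime(e['eve_DateTime'], '%Y-%m-%d %H:%M:%S') > cutoff]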
@app.route('/api/tools/set_device_alias', methods=['POST'])
def api_tools_set_device_alias():
if not is_authorized():
return jsonify({"error": "Unauthorized"}), 401
data = request.get_json(silent=True) or {}
mac = data.get('mac')
alias = data.get('alias')
if not mac or not alias:
return jsonify({"error": "MAC and Alias are required"}), 400
return update_device_column(mac, 'devName', alias)
@app.route('/api/tools/wol_wake_device', methods=['POST'])
def api_tools_wol_wake_device():
if not is_authorized():
return jsonify({"error": "Unauthorized"}), 401
data = request.get_json(silent=True) or {}
mac = data.get('mac')
ip = data.get('ip')
if not mac and not ip:
return jsonify({"error": "MAC or IP is required"}), 400
# Resolve IP to MAC if needed
if not mac and ip:
device = device_handler.getByIP(ip)
if not device or not device.get('devMac'):
return jsonify({"error": f"Could not resolve MAC for IP {ip}"}), 404
mac = device.get('devMac')
# Validate mac using is_mac helper
if not is_mac(mac):
return jsonify({"success": False, "error": f"Invalid MAC: {mac}"}), 400
return wakeonlan(mac)
@app.route('/api/tools/openapi.json', methods=['GET'])
def api_tools_openapi_spec():
# Minimal OpenAPI spec for tools
spec = {
"openapi": "3.0.0",
"info": {"title": "NetAlertX Tools", "version": "1.1.0"},
"servers": [{"url": "/api/tools"}],
"paths": {}
}
return jsonify(spec)
# --------------------------
# Settings Endpoints
# --------------------------
@@ -1189,9 +793,3 @@ def start_server(graphql_port, app_state):
# Update the state to indicate the server has started
app_state = updateState("Process: Idle", None, None, None, 1)
if __name__ == "__main__":
# This block is for running the server directly for testing purposes
# In production, start_server is called from api.py
pass


@@ -1,304 +0,0 @@
"""MCP bridge routes exposing NetAlertX tool endpoints via JSON-RPC."""
import json
import uuid
import queue
import requests
import threading
import logging
from flask import Blueprint, request, Response, stream_with_context, jsonify
from helper import get_setting_value
mcp_bp = Blueprint('mcp', __name__)
# Store active sessions: session_id -> Queue
sessions = {}
sessions_lock = threading.Lock()
# Cache for OpenAPI spec to avoid fetching on every request
openapi_spec_cache = None
BACKEND_PORT = get_setting_value("GRAPHQL_PORT")
API_BASE_URL = f"http://localhost:{BACKEND_PORT}/api/tools"
def get_openapi_spec():
"""Fetch and cache the tools OpenAPI specification from the local API server."""
global openapi_spec_cache
if openapi_spec_cache:
return openapi_spec_cache
try:
# Fetch from local server
# We use localhost because this code runs on the server
response = requests.get(f"{API_BASE_URL}/openapi.json", timeout=10)
response.raise_for_status()
openapi_spec_cache = response.json()
return openapi_spec_cache
except Exception as e:
print(f"Error fetching OpenAPI spec: {e}")
return None
def map_openapi_to_mcp_tools(spec):
"""Convert OpenAPI paths into MCP tool descriptors."""
tools = []
if not spec or "paths" not in spec:
return tools
for path, methods in spec["paths"].items():
for method, details in methods.items():
if "operationId" in details:
tool = {
"name": details["operationId"],
"description": details.get("description", details.get("summary", "")),
"inputSchema": {
"type": "object",
"properties": {},
"required": []
}
}
# Extract parameters from requestBody if present
if "requestBody" in details:
content = details["requestBody"].get("content", {})
if "application/json" in content:
schema = content["application/json"].get("schema", {})
tool["inputSchema"] = schema.copy()
if "properties" not in tool["inputSchema"]:
tool["inputSchema"]["properties"] = {}
if "required" not in tool["inputSchema"]:
tool["inputSchema"]["required"] = []
# Extract parameters from 'parameters' list (query/path params) - simplistic support
if "parameters" in details:
for param in details["parameters"]:
if param.get("in") == "query":
tool["inputSchema"]["properties"][param["name"]] = {
"type": param.get("schema", {}).get("type", "string"),
"description": param.get("description", "")
}
if param.get("required"):
if "required" not in tool["inputSchema"]:
tool["inputSchema"]["required"] = []
tool["inputSchema"]["required"].append(param["name"])
tools.append(tool)
return tools
def process_mcp_request(data):
"""Handle incoming MCP JSON-RPC requests and route them to tools."""
method = data.get("method")
msg_id = data.get("id")
response = None
if method == "initialize":
response = {
"jsonrpc": "2.0",
"id": msg_id,
"result": {
"protocolVersion": "2024-11-05",
"capabilities": {
"tools": {}
},
"serverInfo": {
"name": "NetAlertX",
"version": "1.0.0"
}
}
}
elif method == "notifications/initialized":
# No response needed for notification
pass
elif method == "tools/list":
spec = get_openapi_spec()
tools = map_openapi_to_mcp_tools(spec)
response = {
"jsonrpc": "2.0",
"id": msg_id,
"result": {
"tools": tools
}
}
elif method == "tools/call":
params = data.get("params", {})
tool_name = params.get("name")
tool_args = params.get("arguments", {})
# Find the endpoint for this tool
spec = get_openapi_spec()
target_path = None
target_method = None
if spec and "paths" in spec:
for path, methods in spec["paths"].items():
for m, details in methods.items():
if details.get("operationId") == tool_name:
target_path = path
target_method = m.upper()
break
if target_path:
break
if target_path:
try:
# Make the request to the local API
# We forward the Authorization header from the incoming request if present
headers = {
"Content-Type": "application/json"
}
if "Authorization" in request.headers:
headers["Authorization"] = request.headers["Authorization"]
url = f"{API_BASE_URL}{target_path}"
if target_method == "POST":
api_res = requests.post(url, json=tool_args, headers=headers, timeout=30)
elif target_method == "GET":
api_res = requests.get(url, params=tool_args, headers=headers, timeout=30)
else:
api_res = None
if api_res:
content = []
try:
json_content = api_res.json()
content.append({
"type": "text",
"text": json.dumps(json_content, indent=2)
})
except (ValueError, json.JSONDecodeError):
content.append({
"type": "text",
"text": api_res.text
})
is_error = api_res.status_code >= 400
response = {
"jsonrpc": "2.0",
"id": msg_id,
"result": {
"content": content,
"isError": is_error
}
}
else:
response = {
"jsonrpc": "2.0",
"id": msg_id,
"error": {"code": -32601, "message": f"Method {target_method} not supported"}
}
except Exception as e:
response = {
"jsonrpc": "2.0",
"id": msg_id,
"result": {
"content": [{"type": "text", "text": f"Error calling tool: {str(e)}"}],
"isError": True
}
}
else:
response = {
"jsonrpc": "2.0",
"id": msg_id,
"error": {"code": -32601, "message": f"Tool {tool_name} not found"}
}
elif method == "ping":
response = {
"jsonrpc": "2.0",
"id": msg_id,
"result": {}
}
else:
# Unknown method
if msg_id: # Only respond if it's a request (has id)
response = {
"jsonrpc": "2.0",
"id": msg_id,
"error": {"code": -32601, "message": "Method not found"}
}
return response
@mcp_bp.route('/sse', methods=['GET', 'POST'])
def handle_sse():
"""Expose an SSE endpoint that streams MCP responses to connected clients."""
if request.method == 'POST':
# Handle verification or keep-alive pings
try:
data = request.get_json(silent=True)
if data and "method" in data and "jsonrpc" in data:
response = process_mcp_request(data)
if response:
return jsonify(response)
else:
# Notification or no response needed
return "", 202
except Exception as e:
# Log but don't fail - malformed requests shouldn't crash the endpoint
logging.getLogger(__name__).debug(f"SSE POST processing error: {e}")
return jsonify({"status": "ok", "message": "MCP SSE endpoint active"}), 200
session_id = uuid.uuid4().hex
q = queue.Queue()
with sessions_lock:
sessions[session_id] = q
def stream():
"""Yield SSE messages for queued MCP responses until the client disconnects."""
# Send the endpoint event
# The client should POST to /api/mcp/messages?session_id=<session_id>
yield f"event: endpoint\ndata: /api/mcp/messages?session_id={session_id}\n\n"
try:
while True:
try:
# Wait for messages
message = q.get(timeout=20) # Keep-alive timeout
yield f"event: message\ndata: {json.dumps(message)}\n\n"
except queue.Empty:
# Send keep-alive comment
yield ": keep-alive\n\n"
except GeneratorExit:
with sessions_lock:
if session_id in sessions:
del sessions[session_id]
return Response(stream_with_context(stream()), mimetype='text/event-stream')
@mcp_bp.route('/messages', methods=['POST'])
def handle_messages():
"""Receive MCP JSON-RPC messages and enqueue responses for an SSE session."""
session_id = request.args.get('session_id')
if not session_id:
return jsonify({"error": "Missing session_id"}), 400
with sessions_lock:
if session_id not in sessions:
return jsonify({"error": "Session not found"}), 404
q = sessions[session_id]
data = request.json
if not data:
return jsonify({"error": "Invalid JSON"}), 400
response = process_mcp_request(data)
if response:
q.put(response)
return jsonify({"status": "accepted"}), 202


@@ -1,686 +0,0 @@
import subprocess
import re
from datetime import datetime, timedelta
from flask import Blueprint, request, jsonify
import sqlite3
from helper import get_setting_value
from database import get_temp_db_connection
tools_bp = Blueprint('tools', __name__)
def check_auth():
"""Check API_TOKEN authorization."""
token = request.headers.get("Authorization")
expected_token = f"Bearer {get_setting_value('API_TOKEN')}"
return token == expected_token
@tools_bp.route('/trigger_scan', methods=['POST'])
def trigger_scan():
"""
Forces NetAlertX to run a specific scan type immediately.
Arguments: scan_type (Enum: arp, nmap_fast, nmap_deep), target (optional IP/CIDR)
"""
if not check_auth():
return jsonify({"error": "Unauthorized"}), 401
data = request.get_json()
scan_type = data.get('scan_type', 'nmap_fast')
target = data.get('target')
# Validate scan_type
if scan_type not in ['arp', 'nmap_fast', 'nmap_deep']:
return jsonify({"error": "Invalid scan_type. Must be 'arp', 'nmap_fast', or 'nmap_deep'"}), 400
# Determine command
cmd = []
if scan_type == 'arp':
# ARP scan usually requires sudo or root, assuming container runs as root or has caps
cmd = ["arp-scan", "--localnet", "--interface=eth0"] # Defaulting to eth0, might need detection
if target:
cmd = ["arp-scan", target]
elif scan_type == 'nmap_fast':
cmd = ["nmap", "-F"]
if target:
cmd.append(target)
else:
# Default to local subnet if possible, or error if not easily determined
# For now, let's require target for nmap if not easily deducible,
# or try to get it from settings.
# NetAlertX usually knows its subnet.
# Let's try to get the scan subnet from settings if not provided.
scan_subnets = get_setting_value("SCAN_SUBNETS")
if scan_subnets:
# Take the first one for now
cmd.append(scan_subnets.split(',')[0].strip())
else:
return jsonify({"error": "Target is required and no default SCAN_SUBNETS found"}), 400
elif scan_type == 'nmap_deep':
cmd = ["nmap", "-A", "-T4"]
if target:
cmd.append(target)
else:
scan_subnets = get_setting_value("SCAN_SUBNETS")
if scan_subnets:
cmd.append(scan_subnets.split(',')[0].strip())
else:
return jsonify({"error": "Target is required and no default SCAN_SUBNETS found"}), 400
try:
# Run the command
result = subprocess.run(
cmd,
capture_output=True,
text=True,
check=True
)
return jsonify({
"success": True,
"scan_type": scan_type,
"command": " ".join(cmd),
"output": result.stdout.strip().split('\n')
})
except subprocess.CalledProcessError as e:
return jsonify({
"success": False,
"error": "Scan failed",
"details": e.stderr.strip()
}), 500
except Exception as e:
return jsonify({"error": str(e)}), 500
@tools_bp.route('/list_devices', methods=['POST'])
def list_devices():
"""List all devices."""
if not check_auth():
return jsonify({"error": "Unauthorized"}), 401
conn = get_temp_db_connection()
conn.row_factory = sqlite3.Row
cur = conn.cursor()
try:
cur.execute("SELECT devName, devMac, devLastIP as devIP, devVendor, devFirstConnection, devLastConnection FROM Devices ORDER BY devFirstConnection DESC")
rows = cur.fetchall()
devices = [dict(row) for row in rows]
return jsonify(devices)
except Exception as e:
return jsonify({"error": str(e)}), 500
finally:
conn.close()
@tools_bp.route('/get_device_info', methods=['POST'])
def get_device_info():
"""Get detailed info for a specific device."""
if not check_auth():
return jsonify({"error": "Unauthorized"}), 401
data = request.get_json()
if not data or 'query' not in data:
return jsonify({"error": "Missing 'query' parameter"}), 400
query = data['query']
conn = get_temp_db_connection()
conn.row_factory = sqlite3.Row
cur = conn.cursor()
try:
# Search by MAC, Name, or partial IP
sql = "SELECT * FROM Devices WHERE devMac LIKE ? OR devName LIKE ? OR devLastIP LIKE ?"
cur.execute(sql, (f"%{query}%", f"%{query}%", f"%{query}%"))
rows = cur.fetchall()
if not rows:
return jsonify({"message": "No devices found"}), 404
devices = [dict(row) for row in rows]
return jsonify(devices)
except Exception as e:
return jsonify({"error": str(e)}), 500
finally:
conn.close()
@tools_bp.route('/get_latest_device', methods=['POST'])
def get_latest_device():
"""Get full details of the most recently discovered device."""
if not check_auth():
return jsonify({"error": "Unauthorized"}), 401
conn = get_temp_db_connection()
conn.row_factory = sqlite3.Row
cur = conn.cursor()
try:
# Get the device with the most recent devFirstConnection
cur.execute("SELECT * FROM Devices ORDER BY devFirstConnection DESC LIMIT 1")
row = cur.fetchone()
if not row:
return jsonify({"message": "No devices found"}), 404
# Return as a list to be consistent with other endpoints
return jsonify([dict(row)])
except Exception as e:
return jsonify({"error": str(e)}), 500
finally:
conn.close()
@tools_bp.route('/get_open_ports', methods=['POST'])
def get_open_ports():
"""
Specific query for the port-scan results of a target.
Arguments: target (IP or MAC)
"""
if not check_auth():
return jsonify({"error": "Unauthorized"}), 401
data = request.get_json()
target = data.get('target')
if not target:
return jsonify({"error": "Target is required"}), 400
# If MAC is provided, try to resolve to IP
if re.match(r"^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$", target):
conn = get_temp_db_connection()
conn.row_factory = sqlite3.Row
cur = conn.cursor()
try:
cur.execute("SELECT devLastIP FROM Devices WHERE devMac = ?", (target,))
row = cur.fetchone()
if row and row['devLastIP']:
target = row['devLastIP']
else:
return jsonify({"error": f"Could not resolve IP for MAC {target}"}), 404
finally:
conn.close()
try:
# Run nmap -F for fast port scan
cmd = ["nmap", "-F", target]
result = subprocess.run(
cmd,
capture_output=True,
text=True,
check=True,
timeout=120
)
# Parse output for open ports
open_ports = []
for line in result.stdout.split('\n'):
if '/tcp' in line and 'open' in line:
parts = line.split('/')
port = parts[0].strip()
service = line.split()[2] if len(line.split()) > 2 else "unknown"
open_ports.append({"port": int(port), "service": service})
return jsonify({
"success": True,
"target": target,
"open_ports": open_ports,
"raw_output": result.stdout.strip().split('\n')
})
except subprocess.CalledProcessError as e:
return jsonify({"success": False, "error": "Port scan failed", "details": e.stderr.strip()}), 500
except Exception as e:
return jsonify({"error": str(e)}), 500
@tools_bp.route('/get_network_topology', methods=['GET'])
def get_network_topology():
"""
Returns the "Parent/Child" relationships.
"""
if not check_auth():
return jsonify({"error": "Unauthorized"}), 401
conn = get_temp_db_connection()
conn.row_factory = sqlite3.Row
cur = conn.cursor()
try:
cur.execute("SELECT devName, devMac, devParentMAC, devParentPort, devVendor FROM Devices")
rows = cur.fetchall()
nodes = []
links = []
for row in rows:
nodes.append({
"id": row['devMac'],
"name": row['devName'],
"vendor": row['devVendor']
})
if row['devParentMAC']:
links.append({
"source": row['devParentMAC'],
"target": row['devMac'],
"port": row['devParentPort']
})
return jsonify({
"nodes": nodes,
"links": links
})
except Exception as e:
return jsonify({"error": str(e)}), 500
finally:
conn.close()
@tools_bp.route('/get_recent_alerts', methods=['POST'])
def get_recent_alerts():
"""
Fetches the last N system alerts.
Arguments: hours (lookback period, default 24)
"""
if not check_auth():
return jsonify({"error": "Unauthorized"}), 401
data = request.get_json()
hours = data.get('hours', 24)
conn = get_temp_db_connection()
conn.row_factory = sqlite3.Row
cur = conn.cursor()
try:
# Calculate cutoff time
cutoff = datetime.now() - timedelta(hours=int(hours))
cutoff_str = cutoff.strftime('%Y-%m-%d %H:%M:%S')
cur.execute("""
SELECT eve_DateTime, eve_EventType, eve_MAC, eve_IP, devName
FROM Events
LEFT JOIN Devices ON Events.eve_MAC = Devices.devMac
WHERE eve_DateTime > ?
ORDER BY eve_DateTime DESC
""", (cutoff_str,))
rows = cur.fetchall()
alerts = [dict(row) for row in rows]
return jsonify(alerts)
except Exception as e:
return jsonify({"error": str(e)}), 500
finally:
conn.close()
@tools_bp.route('/set_device_alias', methods=['POST'])
def set_device_alias():
"""
Updates the name (alias) of a device.
Arguments: mac, alias
"""
if not check_auth():
return jsonify({"error": "Unauthorized"}), 401
data = request.get_json()
mac = data.get('mac')
alias = data.get('alias')
if not mac or not alias:
return jsonify({"error": "MAC and Alias are required"}), 400
conn = get_temp_db_connection()
cur = conn.cursor()
try:
cur.execute("UPDATE Devices SET devName = ? WHERE devMac = ?", (alias, mac))
conn.commit()
if cur.rowcount == 0:
return jsonify({"error": "Device not found"}), 404
return jsonify({"success": True, "message": f"Device {mac} renamed to {alias}"})
except Exception as e:
return jsonify({"error": str(e)}), 500
finally:
conn.close()
@tools_bp.route('/wol_wake_device', methods=['POST'])
def wol_wake_device():
"""
Sends a Wake-on-LAN magic packet.
Arguments: mac OR ip
"""
if not check_auth():
return jsonify({"error": "Unauthorized"}), 401
data = request.get_json()
mac = data.get('mac')
ip = data.get('ip')
if not mac and not ip:
return jsonify({"error": "MAC address or IP address is required"}), 400
# Resolve IP to MAC if MAC is missing
if not mac and ip:
conn = get_temp_db_connection()
conn.row_factory = sqlite3.Row
cur = conn.cursor()
try:
# Try to find device by IP (devLastIP)
cur.execute("SELECT devMac FROM Devices WHERE devLastIP = ?", (ip,))
row = cur.fetchone()
if row and row['devMac']:
mac = row['devMac']
else:
return jsonify({"error": f"Could not resolve MAC for IP {ip}"}), 404
except Exception as e:
return jsonify({"error": f"Database error: {str(e)}"}), 500
finally:
conn.close()
# Validate MAC
if not re.match(r"^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$", mac):
return jsonify({"success": False, "error": f"Invalid MAC: {mac}"}), 400
try:
# Using wakeonlan command
result = subprocess.run(
["wakeonlan", mac], capture_output=True, text=True, check=True, timeout=10
)
return jsonify(
{
"success": True,
"message": f"WOL packet sent to {mac}",
"output": result.stdout.strip(),
}
)
except subprocess.CalledProcessError as e:
return jsonify(
{
"success": False,
"error": "Failed to send WOL packet",
"details": e.stderr.strip(),
}
), 500
@tools_bp.route('/openapi.json', methods=['GET'])
def openapi_spec():
"""Return OpenAPI specification for tools."""
# No auth required for spec to allow easy import, or require it if preferred.
# Open WebUI usually needs to fetch spec without auth first or handles it.
# We'll allow public access to spec for simplicity of import.
spec = {
"openapi": "3.0.0",
"info": {
"title": "NetAlertX Tools",
"description": "API for NetAlertX device management tools",
"version": "1.1.0"
},
"servers": [
{"url": "/api/tools"}
],
"paths": {
"/list_devices": {
"post": {
"summary": "List all devices (Summary)",
"description": (
"Retrieve a SUMMARY list of all devices, sorted by newest first. "
"IMPORTANT: This only provides basic info (Name, IP, Vendor). "
"For FULL details (like custom props, alerts, etc.), you MUST use 'get_device_info' or 'get_latest_device'."
),
"operationId": "list_devices",
"responses": {
"200": {
"description": "List of devices (Summary)",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"type": "object",
"properties": {
"devName": {"type": "string"},
"devMac": {"type": "string"},
"devIP": {"type": "string"},
"devVendor": {"type": "string"},
"devStatus": {"type": "string"},
"devFirstConnection": {"type": "string"},
"devLastConnection": {"type": "string"}
}
}
}
}
}
}
}
}
},
"/get_device_info": {
"post": {
"summary": "Get device info (Full Details)",
"description": (
"Get COMPREHENSIVE information about a specific device by MAC, Name, or partial IP. "
"Use this to see all available properties, alerts, and metadata not shown in the list."
),
"operationId": "get_device_info",
"requestBody": {
"required": True,
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "MAC address, Device Name, or partial IP to search for"
}
},
"required": ["query"]
}
}
}
},
"responses": {
"200": {
"description": "Device details (Full)",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {"type": "object"}
}
}
}
},
"404": {"description": "Device not found"}
}
}
},
"/get_latest_device": {
"post": {
"summary": "Get latest device (Full Details)",
"description": "Get COMPREHENSIVE information about the most recently discovered device (latest devFirstConnection).",
"operationId": "get_latest_device",
"responses": {
"200": {
"description": "Latest device details (Full)",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {"type": "object"}
}
}
}
},
"404": {"description": "No devices found"}
}
}
},
"/trigger_scan": {
"post": {
"summary": "Trigger Active Scan",
"description": "Forces NetAlertX to run a specific scan type immediately.",
"operationId": "trigger_scan",
"requestBody": {
"required": True,
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"scan_type": {
"type": "string",
"enum": ["arp", "nmap_fast", "nmap_deep"],
"default": "nmap_fast"
},
"target": {
"type": "string",
"description": "IP address or CIDR to scan"
}
}
}
}
}
},
"responses": {
"200": {"description": "Scan started/completed successfully"},
"400": {"description": "Invalid input"}
}
}
},
"/get_open_ports": {
"post": {
"summary": "Get Open Ports",
"description": "Specific query for the port-scan results of a target.",
"operationId": "get_open_ports",
"requestBody": {
"required": True,
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"target": {
"type": "string",
"description": "IP or MAC address"
}
},
"required": ["target"]
}
}
}
},
"responses": {
"200": {"description": "List of open ports"},
"404": {"description": "Target not found"}
}
}
},
"/get_network_topology": {
"get": {
"summary": "Get Network Topology",
"description": "Returns the Parent/Child relationships for network visualization.",
"operationId": "get_network_topology",
"responses": {
"200": {"description": "Graph data (nodes and links)"}
}
}
},
"/get_recent_alerts": {
"post": {
"summary": "Get Recent Alerts",
"description": "Fetches the last N system alerts.",
"operationId": "get_recent_alerts",
"requestBody": {
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"hours": {
"type": "integer",
"default": 24
}
}
}
}
}
},
"responses": {
"200": {"description": "List of alerts"}
}
}
},
"/set_device_alias": {
"post": {
"summary": "Set Device Alias",
"description": "Updates the name (alias) of a device.",
"operationId": "set_device_alias",
"requestBody": {
"required": True,
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"mac": {"type": "string"},
"alias": {"type": "string"}
},
"required": ["mac", "alias"]
}
}
}
},
"responses": {
"200": {"description": "Alias updated"},
"404": {"description": "Device not found"}
}
}
},
"/wol_wake_device": {
"post": {
"summary": "Wake on LAN",
"description": "Sends a Wake-on-LAN magic packet to the target MAC or IP. If IP is provided, it resolves to MAC first.",
"operationId": "wol_wake_device",
"requestBody": {
"required": True,
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"mac": {"type": "string", "description": "Target MAC address"},
"ip": {"type": "string", "description": "Target IP address (resolves to MAC)"}
}
}
}
}
},
"responses": {
"200": {"description": "WOL packet sent"},
"404": {"description": "IP not found"}
}
}
}
},
"components": {
"securitySchemes": {
"bearerAuth": {
"type": "http",
"scheme": "bearer",
"bearerFormat": "JWT"
}
}
},
"security": [
{"bearerAuth": []}
]
}
return jsonify(spec)
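
As the comments in openapi_spec note, the spec itself is served without authentication so clients such as Open WebUI can import it, while the tool calls it describes still expect the bearer token. A short fetch sketch (host and port are assumptions):

import requests

spec = requests.get('http://localhost:20212/api/tools/openapi.json', timeout=10).json()
print(sorted(spec['paths']))  # e.g. /get_device_info, /list_devices, /trigger_scan, ...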


@@ -57,44 +57,6 @@ class DeviceInstance:
result = self.db.sql.fetchone()
return result["count"] > 0
# Get a device by its last IP address
def getByIP(self, ip):
self.db.sql.execute("SELECT * FROM Devices WHERE devLastIP = ?", (ip,))
row = self.db.sql.fetchone()
return dict(row) if row else None
# Search devices by partial mac, name or IP
def search(self, query):
like = f"%{query}%"
self.db.sql.execute(
"SELECT * FROM Devices WHERE devMac LIKE ? OR devName LIKE ? OR devLastIP LIKE ?",
(like, like, like),
)
rows = self.db.sql.fetchall()
return [dict(r) for r in rows]
# Get the most recently discovered device
def getLatest(self):
self.db.sql.execute("SELECT * FROM Devices ORDER BY devFirstConnection DESC LIMIT 1")
row = self.db.sql.fetchone()
return dict(row) if row else None
def getNetworkTopology(self):
"""Returns nodes and links for the current Devices table.
Nodes: {id, name, vendor}
Links: {source, target, port}
"""
self.db.sql.execute("SELECT devName, devMac, devParentMAC, devParentPort, devVendor FROM Devices")
rows = self.db.sql.fetchall()
nodes = []
links = []
for row in rows:
nodes.append({"id": row['devMac'], "name": row['devName'], "vendor": row['devVendor']})
if row['devParentMAC']:
links.append({"source": row['devParentMAC'], "target": row['devMac'], "port": row['devParentPort']})
return {"nodes": nodes, "links": links}
# Update a specific field for a device
def updateField(self, devGUID, field, value):
if not self.exists(devGUID):


@@ -37,15 +37,6 @@ class PluginObjectInstance:
self.db.sql.execute("SELECT * FROM Plugins_Objects WHERE Plugin = ?", (plugin,))
return self.db.sql.fetchall()
# Get plugin objects by primary ID and plugin name
def getByPrimary(self, plugin, primary_id):
self.db.sql.execute(
"SELECT * FROM Plugins_Objects WHERE Plugin = ? AND Object_PrimaryID = ?",
(plugin, primary_id),
)
rows = self.db.sql.fetchall()
return [dict(r) for r in rows]
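# Illustration: getByPrimary('NMAP', 'aa:bb:cc:dd:ee:ff') returns the NMAP
# plugin rows for that MAC as plain dicts (used by get_open_ports above).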
# Get objects by status
def getByStatus(self, status):
self.db.sql.execute("SELECT * FROM Plugins_Objects WHERE Status = ?", (status,))


@@ -1,287 +0,0 @@
import sys
import os
import pytest
from unittest.mock import patch, MagicMock
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from helper import get_setting_value # noqa: E402
from api_server.api_server_start import app # noqa: E402
@pytest.fixture(scope="session")
def api_token():
return get_setting_value("API_TOKEN")
@pytest.fixture
def client():
with app.test_client() as client:
yield client
def auth_headers(token):
return {"Authorization": f"Bearer {token}"}
# --- get_device_info Tests ---
@patch('api_server.tools_routes.get_temp_db_connection')
def test_get_device_info_ip_partial(mock_db_conn, client, api_token):
"""Test get_device_info with partial IP search."""
mock_cursor = MagicMock()
# Mock return of a device with IP ending in .50
mock_cursor.fetchall.return_value = [
{"devName": "Test Device", "devMac": "AA:BB:CC:DD:EE:FF", "devLastIP": "192.168.1.50"}
]
mock_db_conn.return_value.cursor.return_value = mock_cursor
payload = {"query": ".50"}
response = client.post('/api/tools/get_device_info',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
devices = response.get_json()
assert len(devices) == 1
assert devices[0]["devLastIP"] == "192.168.1.50"
# Verify SQL query included 3 params (MAC, Name, IP)
args, _ = mock_cursor.execute.call_args
assert args[0].count("?") == 3
assert len(args[1]) == 3
# --- trigger_scan Tests ---
@patch('subprocess.run')
def test_trigger_scan_nmap_fast(mock_run, client, api_token):
"""Test trigger_scan with nmap_fast."""
mock_run.return_value = MagicMock(stdout="Scan completed", returncode=0)
payload = {"scan_type": "nmap_fast", "target": "192.168.1.1"}
response = client.post('/api/tools/trigger_scan',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
assert "nmap -F 192.168.1.1" in data["command"]
mock_run.assert_called_once()
@patch('subprocess.run')
def test_trigger_scan_invalid_type(mock_run, client, api_token):
"""Test trigger_scan with invalid scan_type."""
payload = {"scan_type": "invalid_type", "target": "192.168.1.1"}
response = client.post('/api/tools/trigger_scan',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 400
mock_run.assert_not_called()
# --- get_open_ports Tests ---
@patch('subprocess.run')
def test_get_open_ports_ip(mock_run, client, api_token):
"""Test get_open_ports with an IP address."""
mock_output = """
Starting Nmap 7.80 ( https://nmap.org ) at 2023-10-27 10:00 UTC
Nmap scan report for 192.168.1.1
Host is up (0.0010s latency).
Not shown: 98 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds
"""
mock_run.return_value = MagicMock(stdout=mock_output, returncode=0)
payload = {"target": "192.168.1.1"}
response = client.post('/api/tools/get_open_ports',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
assert len(data["open_ports"]) == 2
assert data["open_ports"][0]["port"] == 22
assert data["open_ports"][1]["service"] == "http"
@patch('api_server.tools_routes.get_temp_db_connection')
@patch('subprocess.run')
def test_get_open_ports_mac_resolve(mock_run, mock_db_conn, client, api_token):
"""Test get_open_ports with a MAC address that resolves to an IP."""
# Mock DB to resolve MAC to IP
mock_cursor = MagicMock()
mock_cursor.fetchone.return_value = {"devLastIP": "192.168.1.50"}
mock_db_conn.return_value.cursor.return_value = mock_cursor
# Mock Nmap output
mock_run.return_value = MagicMock(stdout="80/tcp open http", returncode=0)
payload = {"target": "AA:BB:CC:DD:EE:FF"}
response = client.post('/api/tools/get_open_ports',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["target"] == "192.168.1.50" # Should be resolved IP
mock_run.assert_called_once()
args, _ = mock_run.call_args
assert "192.168.1.50" in args[0]
# --- get_network_topology Tests ---
@patch('api_server.tools_routes.get_temp_db_connection')
def test_get_network_topology(mock_db_conn, client, api_token):
"""Test get_network_topology."""
mock_cursor = MagicMock()
mock_cursor.fetchall.return_value = [
{"devName": "Router", "devMac": "AA:AA:AA:AA:AA:AA", "devParentMAC": None, "devParentPort": None, "devVendor": "VendorA"},
{"devName": "Device1", "devMac": "BB:BB:BB:BB:BB:BB", "devParentMAC": "AA:AA:AA:AA:AA:AA", "devParentPort": "eth1", "devVendor": "VendorB"}
]
mock_db_conn.return_value.cursor.return_value = mock_cursor
response = client.get('/api/tools/get_network_topology',
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert len(data["nodes"]) == 2
assert len(data["links"]) == 1
assert data["links"][0]["source"] == "AA:AA:AA:AA:AA:AA"
assert data["links"][0]["target"] == "BB:BB:BB:BB:BB:BB"
# --- get_recent_alerts Tests ---
@patch('api_server.tools_routes.get_temp_db_connection')
def test_get_recent_alerts(mock_db_conn, client, api_token):
"""Test get_recent_alerts."""
mock_cursor = MagicMock()
mock_cursor.fetchall.return_value = [
{"eve_DateTime": "2023-10-27 10:00:00", "eve_EventType": "New Device", "eve_MAC": "CC:CC:CC:CC:CC:CC", "eve_IP": "192.168.1.100", "devName": "Unknown"}
]
mock_db_conn.return_value.cursor.return_value = mock_cursor
payload = {"hours": 24}
response = client.post('/api/tools/get_recent_alerts',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert len(data) == 1
assert data[0]["eve_EventType"] == "New Device"
# --- set_device_alias Tests ---
@patch('api_server.tools_routes.get_temp_db_connection')
def test_set_device_alias(mock_db_conn, client, api_token):
"""Test set_device_alias."""
mock_cursor = MagicMock()
mock_cursor.rowcount = 1 # Simulate successful update
mock_db_conn.return_value.cursor.return_value = mock_cursor
payload = {"mac": "AA:BB:CC:DD:EE:FF", "alias": "New Name"}
response = client.post('/api/tools/set_device_alias',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
@patch('api_server.tools_routes.get_temp_db_connection')
def test_set_device_alias_not_found(mock_db_conn, client, api_token):
"""Test set_device_alias when device is not found."""
mock_cursor = MagicMock()
mock_cursor.rowcount = 0 # Simulate no rows updated
mock_db_conn.return_value.cursor.return_value = mock_cursor
payload = {"mac": "AA:BB:CC:DD:EE:FF", "alias": "New Name"}
response = client.post('/api/tools/set_device_alias',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 404
# --- wol_wake_device Tests ---
@patch('subprocess.run')
def test_wol_wake_device(mock_subprocess, client, api_token):
"""Test wol_wake_device."""
mock_subprocess.return_value.stdout = "Sending magic packet to 255.255.255.255:9 with AA:BB:CC:DD:EE:FF"
mock_subprocess.return_value.returncode = 0
payload = {"mac": "AA:BB:CC:DD:EE:FF"}
response = client.post('/api/tools/wol_wake_device',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
mock_subprocess.assert_called_with(["wakeonlan", "AA:BB:CC:DD:EE:FF"], capture_output=True, text=True, check=True, timeout=10)
@patch('api_server.tools_routes.get_temp_db_connection')
@patch('subprocess.run')
def test_wol_wake_device_by_ip(mock_subprocess, mock_db_conn, client, api_token):
"""Test wol_wake_device with IP address."""
# Mock DB for IP resolution
mock_cursor = MagicMock()
mock_cursor.fetchone.return_value = {"devMac": "AA:BB:CC:DD:EE:FF"}
mock_db_conn.return_value.cursor.return_value = mock_cursor
# Mock subprocess
mock_subprocess.return_value.stdout = "Sending magic packet to 255.255.255.255:9 with AA:BB:CC:DD:EE:FF"
mock_subprocess.return_value.returncode = 0
payload = {"ip": "192.168.1.50"}
response = client.post('/api/tools/wol_wake_device',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
assert "AA:BB:CC:DD:EE:FF" in data["message"]
# Verify DB lookup
mock_cursor.execute.assert_called_with("SELECT devMac FROM Devices WHERE devLastIP = ?", ("192.168.1.50",))
# Verify subprocess call
mock_subprocess.assert_called_with(["wakeonlan", "AA:BB:CC:DD:EE:FF"], capture_output=True, text=True, check=True, timeout=10)
def test_wol_wake_device_invalid_mac(client, api_token):
"""Test wol_wake_device with invalid MAC."""
payload = {"mac": "invalid-mac"}
response = client.post('/api/tools/wol_wake_device',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 400
# --- openapi_spec Tests ---
def test_openapi_spec(client):
"""Test openapi_spec endpoint contains new paths."""
response = client.get('/api/tools/openapi.json')
assert response.status_code == 200
spec = response.get_json()
# Check for new endpoints
assert "/trigger_scan" in spec["paths"]
assert "/get_open_ports" in spec["paths"]
assert "/get_network_topology" in spec["paths"]
assert "/get_recent_alerts" in spec["paths"]
assert "/set_device_alias" in spec["paths"]
assert "/wol_wake_device" in spec["paths"]


@@ -1,79 +0,0 @@
import sys
import os
import pytest
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from api_server.api_server_start import app # noqa: E402 [flake8 lint suppression]
@pytest.fixture(scope="session")
def api_token():
return get_setting_value("API_TOKEN")
@pytest.fixture
def client():
with app.test_client() as client:
yield client
def auth_headers(token):
return {"Authorization": f"Bearer {token}"}
def test_openapi_spec(client):
"""Test OpenAPI spec endpoint."""
response = client.get('/api/tools/openapi.json')
assert response.status_code == 200
spec = response.get_json()
assert "openapi" in spec
assert "info" in spec
assert "paths" in spec
assert "/list_devices" in spec["paths"]
assert "/get_device_info" in spec["paths"]
def test_list_devices(client, api_token):
"""Test list_devices endpoint."""
response = client.post('/api/tools/list_devices', headers=auth_headers(api_token))
assert response.status_code == 200
devices = response.get_json()
assert isinstance(devices, list)
# If there are devices, check structure
if devices:
device = devices[0]
assert "devName" in device
assert "devMac" in device
def test_get_device_info(client, api_token):
"""Test get_device_info endpoint."""
# Test with a query that might not exist
payload = {"query": "nonexistent_device"}
response = client.post('/api/tools/get_device_info',
json=payload,
headers=auth_headers(api_token))
# Should return 404 if no match, or 200 with results
assert response.status_code in [200, 404]
if response.status_code == 200:
devices = response.get_json()
assert isinstance(devices, list)
elif response.status_code == 404:
# Expected for no matches
pass
def test_list_devices_unauthorized(client):
"""Test list_devices without authorization."""
response = client.post('/api/tools/list_devices')
assert response.status_code == 401
def test_get_device_info_unauthorized(client):
"""Test get_device_info without authorization."""
payload = {"query": "test"}
response = client.post('/api/tools/get_device_info', json=payload)
assert response.status_code == 401