Compare commits

..

10 Commits

Author SHA1 Message Date
Jokob @NetAlertX
5d1c63375b MCP refactor
Signed-off-by: GitHub <noreply@github.com>
2025-12-07 08:37:55 +00:00
Jokob @NetAlertX
8c982cd476 MCP refactor
Signed-off-by: GitHub <noreply@github.com>
2025-12-07 08:20:51 +00:00
Jokob @NetAlertX
36e5751221 Merge branch 'main' into fix-pr-1309 2025-12-01 09:34:59 +00:00
Jokob @NetAlertX
dfd836527e api endpoints updates 2025-12-01 08:52:50 +00:00
Jokob @NetAlertX
8d5a663817 DevInstance and PluginObjectInstance expansion 2025-12-01 08:27:14 +00:00
Adam Outler
e64c490c8a Help ARM runners on github with rust and cargo required by pip 2025-11-30 01:04:12 +00:00
Adam Outler
531b66effe Coderabit changes 2025-11-29 02:44:55 +00:00
Adam Outler
5e4ad10fe0 Tidy up 2025-11-28 21:13:20 +00:00
Adam Outler
541b932b6d Add MCP to existing OpenAPI 2025-11-28 14:12:06 -05:00
Adam Outler
2bf3ff9f00 Add MCP server 2025-11-28 17:03:18 +00:00
30 changed files with 1413 additions and 312 deletions

View File

@@ -25,7 +25,7 @@
// even within this container and connect to them as needed.
// "--network=host",
],
"mounts": [
"mounts": [
"source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind" //used for testing various conditions in docker
],
// ATTENTION: If running with --network=host, COMMENT `forwardPorts` OR ELSE THERE WILL BE NO WEBUI!
@@ -88,7 +88,7 @@
}
},
"terminal.integrated.defaultProfile.linux": "zsh",
// Python testing configuration
"python.testing.pytestEnabled": true,
"python.testing.unittestEnabled": false,

View File

@@ -39,6 +39,7 @@ Backend loop phases (see `server/__main__.py` and `server/plugin.py`): `once`, `
## API/Endpoints quick map
- Flask app: `server/api_server/api_server_start.py` exposes routes like `/device/<mac>`, `/devices`, `/devices/export/{csv,json}`, `/devices/import`, `/devices/totals`, `/devices/by-status`, plus `nettools`, `events`, `sessions`, `dbquery`, `metrics`, `sync`.
- Authorization: all routes expect header `Authorization: Bearer <API_TOKEN>` via `get_setting_value('API_TOKEN')`.
- All responses need to return `"success": <True|False>`, and if `False`, an `"error"` message must be returned, e.g. `{"success": False, "error": f"No stored open ports for Device"}` (see the sketch below).
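A minimal sketch of this request/response contract, assuming the Flask API listens on the configured GraphQL port (20212 by default) and `<API_TOKEN>` is a placeholder for the configured token:
```python
# Sketch only: exercise the Bearer-token auth and the success/error contract.
# BASE_URL and <API_TOKEN> are placeholders; resolve them from your settings.
import requests

BASE_URL = "http://localhost:20212"                 # get_setting_value('GRAPHQL_PORT')
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}   # get_setting_value('API_TOKEN')

resp = requests.get(f"{BASE_URL}/devices/totals", headers=HEADERS, timeout=10)
payload = resp.json()
if payload.get("success") is False:
    print("API error:", payload.get("error"))
else:
    print(payload)
```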
## Conventions & helpers to reuse
- Settings: add/modify via `ccd()` in `server/initialise.py` or per-plugin manifest. Never hardcode ports or secrets; use `get_setting_value()`.
@@ -85,7 +86,7 @@ Backend loop phases (see `server/__main__.py` and `server/plugin.py`): `once`, `
- Above all, use the simplest possible code that meets the need so it can be easily audited and maintained.
- Always leave logging enabled. If there is a possibility it will be difficult to debug with current logging, add more logging.
- Always run the testFailure tool before executing any tests to gather current failure information and avoid redundant runs.
- Always prioritize using the appropriate tools in the environment first. As an example if a test is failing use `testFailure` then `runTests`. Never `runTests` first.
- Always prioritize using the appropriate tools in the environment first. As an example if a test is failing use `testFailure` then `runTests`. Never `runTests` first.
- Docker tests take an extremely long time to run. Avoid changes to docker or tests until you've examined the existing testFailures and runTests results.
- Environment tools are designed specifically for your use in this project and running them in this order will give you the best results.
- Environment tools are designed specifically for your use in this project and running them in this order will give you the best results.

View File

@@ -26,7 +26,7 @@ ENV PATH="/opt/venv/bin:$PATH"
# Install build dependencies
COPY requirements.txt /tmp/requirements.txt
RUN apk add --no-cache bash shadow python3 python3-dev gcc musl-dev libffi-dev openssl-dev git \
RUN apk add --no-cache bash shadow python3 python3-dev gcc musl-dev libffi-dev openssl-dev git rust cargo \
&& python -m venv /opt/venv
# Create virtual environment owned by root, but readable by everyone else. This makes it easy to copy

View File

@@ -112,11 +112,3 @@ Slowness can be caused by:
> See [Performance Tips](./PERFORMANCE.md) for detailed optimization steps.
#### IP flipping
With `ARPSCAN` scans, some devices might flip IP addresses after each scan, triggering false notifications. This happens because some devices respond to broadcast calls, so different IPs are logged after each scan.
See how to prevent IP flipping in the [ARPSCAN plugin guide](/front/plugins/arp_scan/README.md).
Alternatively, adjust your [notification settings](./NOTIFICATIONS.md) to prevent false positives by filtering out events or devices.

View File

@@ -61,38 +61,20 @@ See alternative [docker-compose examples](https://github.com/jokob-sk/NetAlertX/
| Required | Path | Description |
| :------------- | :------------- | :-------------|
| ✅ | `:/data` | Folder which needs to contain a `/db` and `/config` sub-folders. |
| ✅ | `/etc/localtime:/etc/localtime:ro` | Ensuring the timezone is the same as on the server. |
| ✅ | `:/data` | Folder which will contain the `/db/app.db`, `/config/app.conf` & `/config/devices.csv` ([read about devices.csv](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEVICES_BULK_EDITING.md)) files |
| ✅ | `/etc/localtime:/etc/localtime:ro` | Ensuring the timezone is the same as on the server. |
| | `:/tmp/log` | Logs folder useful for debugging if you have issues setting up the container |
| | `:/tmp/api` | The [API endpoint](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md) containing static (but regularly updated) json and other files. Path configurable via `NETALERTX_API` environment variable. |
| | `:/app/front/plugins/<plugin>/ignore_plugin` | Map a file `ignore_plugin` to ignore a plugin. Plugins can be soft-disabled via settings. More in the [Plugin docs](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md). |
| | `:/etc/resolv.conf` | Use a custom `resolv.conf` file for [better name resolution](https://github.com/jokob-sk/NetAlertX/blob/main/docs/REVERSE_DNS.md). |
### Folder structure
Use separate `db` and `config` directories; do not nest them:
```
data
├── config
└── db
```
### Permissions
If you are facing permission issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the `/local_data_dir/db` and `/local_data_dir/config` folders (replace `local_data_dir` with the location of your `/db` and `/config` folders).
```bash
sudo chown -R 20211:20211 /local_data_dir
sudo chmod -R a+rwx /local_data_dir
```
> Use separate `db` and `config` directories, do not nest them.
### Initial setup
- If unavailable, the app generates default `app.conf` and `app.db` files on the first run.
- The preferred way is to manage the configuration via the Settings section of the UI; if the UI is inaccessible, you can modify [app.conf](https://github.com/jokob-sk/NetAlertX/tree/main/back) in the `/data/config/` folder directly.
#### Setting up scanners
You have to specify which network(s) should be scanned. This is done by entering subnets that are accessible from the host. If you use the default `ARPSCAN` plugin, you have to specify at least one valid subnet and interface in the `SCAN_SUBNETS` setting. See the documentation on [How to set up multiple SUBNETS, VLANs and what are limitations](https://github.com/jokob-sk/NetAlertX/blob/main/docs/SUBNETS.md) for troubleshooting and more advanced scenarios.
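For illustration only (the subnet and interface below are placeholders; see the linked SUBNETS guide for the authoritative syntax), a single-subnet `SCAN_SUBNETS` entry in `app.conf` might look like this:
```
SCAN_SUBNETS=['192.168.1.0/24 --interface=eth0']
```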

View File

@@ -278,9 +278,8 @@ Run the container with the `--user "0"` parameter. Please note, some systems wil
```sh
docker run -it --rm --name netalertx --user "0" \
-v /local_data_dir/config:/app/config \
-v /local_data_dir/db:/app/db \
-v /local_data_dir:/data \
-v /local_data_dir/config:/data/config \
-v /local_data_dir/db:/data/db \
--tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
ghcr.io/jokob-sk/netalertx:latest
```

View File

@@ -1,29 +1,8 @@
# Integration with PiHole
NetAlertX comes with 3 plugins suitable for integrating with your existing PiHole instance. The first plugin uses the v6 API, the second uses a direct SQLite DB connection, and the third leverages the `DHCP.leases` file generated by PiHole. You can combine multiple approaches and also supplement scans with other [plugins](/docs/PLUGINS.md).
NetAlertX comes with 2 plugins suitable for integrating with your existing PiHole instance. One plugin is using a direct SQLite DB connection, the other leverages the DHCP.leases file generated by PiHole. You can combine both approaches and also supplement it with other [plugins](/docs/PLUGINS.md).
## Approach 1: `PIHOLEAPI` Plugin - Import devices directly from PiHole v6 API
![PIHOLEAPI sample settings](./img/PIHOLE_GUIDE/PIHOLEAPI_settings.png)
To use this approach make sure the Web UI password in **Pi-hole** is set.
| Setting | Description | Recommended value |
| :------------- | :------------- | :-------------|
| `PIHOLEAPI_URL` | Your Pi-hole base URL including port. | `http://192.168.1.82:9880/` |
| `PIHOLEAPI_RUN_SCHD` | If you run multiple device scanner plugins, align the schedules of all plugins to the same value. | `*/5 * * * *` |
| `PIHOLEAPI_PASSWORD` | The Web UI admin password, stored base64-encoded (en-/decoding is handled by the app). | `passw0rd` |
| `PIHOLEAPI_SSL_VERIFY` | Whether to verify HTTPS certificates. Disable only for self-signed certificates. | `False` |
| `PIHOLEAPI_API_MAXCLIENTS` | Maximum number of devices to request from Pi-hole. Defaults are usually fine. | `500` |
| `PIHOLEAPI_FAKE_MAC` | Generate FAKE MAC from IP. | `False` |
Check the [PiHole API plugin readme](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/pihole_api_scan/) for details and troubleshooting.
### docker-compose changes
No changes needed
## Approach 2: `DHCPLSS` Plugin - Import devices from the PiHole DHCP leases file
## Approach 1: `DHCPLSS` Plugin - Import devices from the PiHole DHCP leases file
![DHCPLSS sample settings](./img/PIHOLE_GUIDE/DHCPLSS_pihole_settings.png)
@@ -44,12 +23,12 @@ Check the [DHCPLSS plugin readme](https://github.com/jokob-sk/NetAlertX/tree/mai
| `:/etc/pihole/dhcp.leases` | PiHole's `dhcp.leases` file. Required if you want to use the PiHole `dhcp.leases` file. This has to be matched with a corresponding `DHCPLSS_paths_to_check` setting entry (the path in the container must contain `pihole`) |
## Approach 3: `PIHOLE` Plugin - Import devices directly from the PiHole database
## Approach 2: `PIHOLE` Plugin - Import devices directly from the PiHole database
![DHCPLSS sample settings](./img/PIHOLE_GUIDE/PIHOLE_settings.png)
| Setting | Description | Recommended value |
| :------------- | :------------- | :-------------|
| :------------- | :------------- | :-------------|
| `PIHOLE_RUN` | When the plugin should run. | `schedule` |
| `PIHOLE_RUN_SCHD` | If you run multiple device scanner plugins, align the schedules of all plugins to the same value. | `*/5 * * * *` |
| `PIHOLE_DB_PATH` | You need to map the value in this setting in the `docker-compose.yml` file. | `/etc/pihole/pihole-FTL.db` |

Binary file not shown.


View File

@@ -12,7 +12,6 @@ from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
@@ -98,9 +97,7 @@ def main():
{"devMac": "00:11:22:33:44:57", "devLastIP": "192.168.1.82"},
]
else:
db = DB()
db.open()
device_handler = DeviceInstance(db)
device_handler = DeviceInstance()
devices = (
device_handler.getAll()
if get_setting_value("REFRESH_FQDN")

View File

@@ -11,7 +11,6 @@ from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
@@ -38,15 +37,11 @@ def main():
timeout = get_setting_value('DIGSCAN_RUN_TIMEOUT')
# Create a database connection
db = DB() # instance of class DB
db.open()
# Initialize the Plugin obj output file
plugin_objects = Plugin_Objects(RESULT_FILE)
# Create a DeviceInstance instance
device_handler = DeviceInstance(db)
device_handler = DeviceInstance()
# Retrieve devices
if get_setting_value("REFRESH_FQDN"):

View File

@@ -15,7 +15,6 @@ from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
@@ -41,15 +40,11 @@ def main():
args = get_setting_value('ICMP_ARGS')
in_regex = get_setting_value('ICMP_IN_REGEX')
# Create a database connection
db = DB() # instance of class DB
db.open()
# Initialize the Plugin obj output file
plugin_objects = Plugin_Objects(RESULT_FILE)
# Create a DeviceInstance instance
device_handler = DeviceInstance(db)
device_handler = DeviceInstance()
# Retrieve devices
all_devices = device_handler.getAll()

View File

@@ -12,7 +12,6 @@ from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
@@ -40,15 +39,11 @@ def main():
# timeout = get_setting_value('NBLOOKUP_RUN_TIMEOUT')
timeout = 20
# Create a database connection
db = DB() # instance of class DB
db.open()
# Initialize the Plugin obj output file
plugin_objects = Plugin_Objects(RESULT_FILE)
# Create a DeviceInstance instance
device_handler = DeviceInstance(db)
device_handler = DeviceInstance()
# Retrieve devices
if get_setting_value("REFRESH_FQDN"):

View File

@@ -15,7 +15,6 @@ from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
from pytz import timezone # noqa: E402 [flake8 lint suppression]
@@ -39,15 +38,11 @@ def main():
timeout = get_setting_value('NSLOOKUP_RUN_TIMEOUT')
# Create a database connection
db = DB() # instance of class DB
db.open()
# Initialize the Plugin obj output file
plugin_objects = Plugin_Objects(RESULT_FILE)
# Create a DeviceInstance instance
device_handler = DeviceInstance(db)
device_handler = DeviceInstance()
# Retrieve devices
if get_setting_value("REFRESH_FQDN"):

View File

@@ -256,13 +256,11 @@ def main():
start_time = time.time()
mylog("verbose", [f"[{pluginName}] starting execution"])
from database import DB
from models.device_instance import DeviceInstance
db = DB() # instance of class DB
db.open()
# Create a DeviceInstance instance
device_handler = DeviceInstance(db)
device_handler = DeviceInstance()
# Retrieve configuration settings
# these should be self-explanatory
omada_sites = []

View File

@@ -13,7 +13,6 @@ from plugin_helper import Plugin_Objects # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger # noqa: E402 [flake8 lint suppression]
from const import logPath # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
from database import DB # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance # noqa: E402 [flake8 lint suppression]
import conf # noqa: E402 [flake8 lint suppression]
@@ -44,12 +43,8 @@ def main():
mylog('verbose', [f'[{pluginName}] broadcast_ips value {broadcast_ips}'])
# Create a database connection
db = DB() # instance of class DB
db.open()
# Create a DeviceInstance instance
device_handler = DeviceInstance(db)
device_handler = DeviceInstance()
# Retrieve devices
if 'offline' in devices_to_wake:

View File

@@ -1,35 +0,0 @@
#!/bin/sh
# 02-ensure-folders.sh - ensure /config and /db exist under /data
set -eu
YELLOW=$(printf '\033[1;33m')
CYAN=$(printf '\033[1;36m')
RED=$(printf '\033[1;31m')
RESET=$(printf '\033[0m')
DATA_DIR=${NETALERTX_DATA:-/data}
TARGET_CONFIG=${NETALERTX_CONFIG:-${DATA_DIR}/config}
TARGET_DB=${NETALERTX_DB:-${DATA_DIR}/db}
ensure_folder() {
my_path="$1"
if [ ! -d "${my_path}" ]; then
>&2 printf "%s" "${CYAN}"
>&2 echo "Creating missing folder: ${my_path}"
>&2 printf "%s" "${RESET}"
mkdir -p "${my_path}" || {
>&2 printf "%s" "${RED}"
>&2 echo "❌ Failed to create folder: ${my_path}"
>&2 printf "%s" "${RESET}"
exit 1
}
chmod 700 "${my_path}" 2>/dev/null || true
fi
}
# Ensure subfolders exist
ensure_folder "${TARGET_CONFIG}"
ensure_folder "${TARGET_DB}"
exit 0

View File

@@ -1,35 +0,0 @@
#!/bin/sh
# override-config.sh - Handles APP_CONF_OVERRIDE environment variable
OVERRIDE_FILE="${NETALERTX_CONFIG}/app_conf_override.json"
# Ensure config directory exists
mkdir -p "$(dirname "$NETALERTX_CONFIG")" || {
>&2 echo "ERROR: Failed to create config directory $(dirname "$NETALERTX_CONFIG")"
exit 1
}
# Remove old override file if it exists
rm -f "$OVERRIDE_FILE"
# Check if APP_CONF_OVERRIDE is set
if [ -z "$APP_CONF_OVERRIDE" ]; then
>&2 echo "APP_CONF_OVERRIDE is not set. Skipping override config file creation."
else
# Save the APP_CONF_OVERRIDE env variable as a JSON file
echo "$APP_CONF_OVERRIDE" > "$OVERRIDE_FILE" || {
>&2 echo "ERROR: Failed to write override config to $OVERRIDE_FILE"
exit 2
}
RESET=$(printf '\033[0m')
>&2 cat <<EOF
══════════════════════════════════════════════════════════════════════════════
📝 APP_CONF_OVERRIDE detected. Configuration written to $OVERRIDE_FILE.
Make sure the JSON content is correct before starting the application.
══════════════════════════════════════════════════════════════════════════════
EOF
>&2 printf "%s" "${RESET}"
fi

View File

@@ -5,22 +5,22 @@
# Define ports from ENV variables, applying defaults
PORT_APP=${PORT:-20211}
# PORT_GQL=${APP_CONF_OVERRIDE:-${GRAPHQL_PORT:-20212}}
PORT_GQL=${APP_CONF_OVERRIDE:-${GRAPHQL_PORT:-20212}}
# # Check if ports are configured to be the same
# if [ "$PORT_APP" -eq "$PORT_GQL" ]; then
# cat <<EOF
# ══════════════════════════════════════════════════════════════════════════════
# ⚠️ Configuration Warning: Both ports are set to ${PORT_APP}.
# Check if ports are configured to be the same
if [ "$PORT_APP" -eq "$PORT_GQL" ]; then
cat <<EOF
══════════════════════════════════════════════════════════════════════════════
⚠️ Configuration Warning: Both ports are set to ${PORT_APP}.
# The Application port (\$PORT) and the GraphQL API port
# (\$APP_CONF_OVERRIDE or \$GRAPHQL_PORT) are configured to use the
# same port. This will cause a conflict.
The Application port (\$PORT) and the GraphQL API port
(\$APP_CONF_OVERRIDE or \$GRAPHQL_PORT) are configured to use the
same port. This will cause a conflict.
# https://github.com/jokob-sk/NetAlertX/blob/main/docs/docker-troubleshooting/port-conflicts.md
# ══════════════════════════════════════════════════════════════════════════════
# EOF
# fi
https://github.com/jokob-sk/NetAlertX/blob/main/docs/docker-troubleshooting/port-conflicts.md
══════════════════════════════════════════════════════════════════════════════
EOF
fi
# Check for netstat (usually provided by busybox)
if ! command -v netstat >/dev/null 2>&1; then
@@ -53,17 +53,17 @@ if echo "$LISTENING_PORTS" | grep -q ":${PORT_APP}$"; then
EOF
fi
# # Check GraphQL Port
# # We add a check to avoid double-warning if ports are identical AND in use
# if [ "$PORT_APP" -ne "$PORT_GQL" ] && echo "$LISTENING_PORTS" | grep -q ":${PORT_GQL}$"; then
# cat <<EOF
# ══════════════════════════════════════════════════════════════════════════════
# ⚠️ Port Warning: GraphQL API port ${PORT_GQL} is already in use.
# Check GraphQL Port
# We add a check to avoid double-warning if ports are identical AND in use
if [ "$PORT_APP" -ne "$PORT_GQL" ] && echo "$LISTENING_PORTS" | grep -q ":${PORT_GQL}$"; then
cat <<EOF
══════════════════════════════════════════════════════════════════════════════
⚠️ Port Warning: GraphQL API port ${PORT_GQL} is already in use.
# The GraphQL API (defined by \$APP_CONF_OVERRIDE or \$GRAPHQL_PORT)
# may fail to start.
The GraphQL API (defined by \$APP_CONF_OVERRIDE or \$GRAPHQL_PORT)
may fail to start.
# https://github.com/jokob-sk/NetAlertX/blob/main/docs/docker-troubleshooting/port-conflicts.md
# ══════════════════════════════════════════════════════════════════════════════
# EOF
# fi
https://github.com/jokob-sk/NetAlertX/blob/main/docs/docker-troubleshooting/port-conflicts.md
══════════════════════════════════════════════════════════════════════════════
EOF
fi

View File

@@ -30,3 +30,4 @@ urllib3
httplib2
gunicorn
git+https://github.com/foreign-sub/aiofreepybox.git
mcp

View File

@@ -3,11 +3,13 @@ import sys
import os
from flask import Flask, request, jsonify, Response
import requests
from models.device_instance import DeviceInstance # noqa: E402
from flask_cors import CORS
# Register NetAlertX directories
INSTALL_PATH = os.getenv("NETALERTX_APP", "/app")
sys.path.extend([f"{INSTALL_PATH}/server"])
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from logger import mylog # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value # noqa: E402 [flake8 lint suppression]
@@ -63,6 +65,12 @@ from .dbquery_endpoint import read_query, write_query, update_query, delete_quer
from .sync_endpoint import handle_sync_post, handle_sync_get # noqa: E402 [flake8 lint suppression]
from .logs_endpoint import clean_log # noqa: E402 [flake8 lint suppression]
from models.user_events_queue_instance import UserEventsQueueInstance # noqa: E402 [flake8 lint suppression]
from models.event_instance import EventInstance # noqa: E402 [flake8 lint suppression]
# Import tool logic from the MCP/tools module to reuse behavior (no blueprints)
from plugin_helper import is_mac # noqa: E402 [flake8 lint suppression]
# is_mac is provided in mcp_endpoint and used by those handlers
# mcp_endpoint contains helper functions; routes moved into this module to keep a single place for routes
from messaging.in_app import ( # noqa: E402 [flake8 lint suppression]
write_notification,
mark_all_notifications_read,
@@ -71,9 +79,17 @@ from messaging.in_app import ( # noqa: E402 [flake8 lint suppression]
delete_notification,
mark_notification_as_read
)
from .mcp_endpoint import ( # noqa: E402 [flake8 lint suppression]
mcp_sse,
mcp_messages,
openapi_spec
)
# tools and mcp routes have been moved into this module (api_server_start)
# Flask application
app = Flask(__name__)
CORS(
app,
resources={
@@ -88,16 +104,69 @@ CORS(
r"/messaging/*": {"origins": "*"},
r"/events/*": {"origins": "*"},
r"/logs/*": {"origins": "*"},
r"/auth/*": {"origins": "*"}
r"/api/tools/*": {"origins": "*"},
r"/auth/*": {"origins": "*"},
r"/mcp/*": {"origins": "*"}
},
supports_credentials=True,
allow_headers=["Authorization", "Content-Type"],
)
# -------------------------------------------------------------------------------
# MCP bridge variables + helpers (moved from mcp_routes)
# -------------------------------------------------------------------------------
mcp_openapi_spec_cache = None
BACKEND_PORT = get_setting_value("GRAPHQL_PORT")
API_BASE_URL = f"http://localhost:{BACKEND_PORT}"
def get_openapi_spec_local():
global mcp_openapi_spec_cache
if mcp_openapi_spec_cache:
return mcp_openapi_spec_cache
try:
resp = requests.get(f"{API_BASE_URL}/mcp/openapi.json", timeout=10)
resp.raise_for_status()
mcp_openapi_spec_cache = resp.json()
return mcp_openapi_spec_cache
except Exception as e:
mylog('minimal', [f"Error fetching OpenAPI spec: {e}"])
return None
@app.route('/mcp/sse', methods=['GET', 'POST'])
def api_mcp_sse():
if not is_authorized():
return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
return mcp_sse()
@app.route('/api/mcp/messages', methods=['POST'])
def api_mcp_messages():
if not is_authorized():
return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
return mcp_messages()
# -------------------------------------------------------------------
# Custom handler for 404 - Route not found
# -------------------------------------------------------------------
@app.before_request
def log_request_info():
"""Log details of every incoming request."""
# Filter out noisy requests if needed, but user asked for drastic logging
mylog("verbose", [f"[HTTP] {request.method} {request.path} from {request.remote_addr}"])
# Filter sensitive headers before logging
safe_headers = {k: v for k, v in request.headers if k.lower() not in ('authorization', 'cookie', 'x-api-key')}
mylog("debug", [f"[HTTP] Headers: {safe_headers}"])
if request.method == "POST":
# Be careful with large bodies, but log first 1000 chars
data = request.get_data(as_text=True)
mylog("debug", [f"[HTTP] Body length: {len(data)} chars"])
@app.errorhandler(404)
def not_found(error):
response = {
@@ -146,11 +215,12 @@ def graphql_endpoint():
return jsonify(response)
# Tools endpoints are registered via `mcp_endpoint.tools_bp` blueprint.
# --------------------------
# Settings Endpoints
# --------------------------
@app.route("/settings/<setKey>", methods=["GET"])
def api_get_setting(setKey):
if not is_authorized():
@@ -162,8 +232,7 @@ def api_get_setting(setKey):
# --------------------------
# Device Endpoints
# --------------------------
@app.route('/mcp/sse/device/<mac>', methods=['GET', 'POST'])
@app.route("/device/<mac>", methods=["GET"])
def api_get_device(mac):
if not is_authorized():
@@ -229,11 +298,45 @@ def api_update_device_column(mac):
return update_device_column(mac, column_name, column_value)
@app.route('/mcp/sse/device/<mac>/set-alias', methods=['POST'])
@app.route('/device/<mac>/set-alias', methods=['POST'])
def api_device_set_alias(mac):
"""Set the device alias - convenience wrapper around update_device_column."""
if not is_authorized():
return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
data = request.get_json() or {}
alias = data.get('alias')
if not alias:
return jsonify({"success": False, "message": "ERROR: Missing parameters", "error": "alias is required"}), 400
return update_device_column(mac, 'devName', alias)
@app.route('/mcp/sse/device/open_ports', methods=['POST'])
@app.route('/device/open_ports', methods=['POST'])
def api_device_open_ports():
"""Get stored NMAP open ports for a target IP or MAC."""
if not is_authorized():
return jsonify({"success": False, "error": "Unauthorized"}), 401
data = request.get_json(silent=True) or {}
target = data.get('target')
if not target:
return jsonify({"success": False, "error": "Target (IP or MAC) is required"}), 400
device_handler = DeviceInstance()
# Use DeviceInstance method to get stored open ports
open_ports = device_handler.getOpenPorts(target)
if not open_ports:
return jsonify({"success": False, "error": f"No stored open ports for {target}. Run a scan with `/nettools/trigger-scan`"}), 404
return jsonify({"success": True, "target": target, "open_ports": open_ports})
# --------------------------
# Devices Collections
# --------------------------
@app.route("/devices", methods=["GET"])
def api_get_devices():
if not is_authorized():
@@ -289,6 +392,7 @@ def api_devices_totals():
return devices_totals()
@app.route('/mcp/sse/devices/by-status', methods=['GET', 'POST'])
@app.route("/devices/by-status", methods=["GET"])
def api_devices_by_status():
if not is_authorized():
@@ -299,15 +403,88 @@ def api_devices_by_status():
return devices_by_status(status)
@app.route('/mcp/sse/devices/search', methods=['POST'])
@app.route('/devices/search', methods=['POST'])
def api_devices_search():
"""Device search: accepts 'query' in JSON and maps to device info/search."""
if not is_authorized():
return jsonify({"error": "Unauthorized"}), 401
data = request.get_json(silent=True) or {}
query = data.get('query')
if not query:
return jsonify({"error": "Missing 'query' parameter"}), 400
if is_mac(query):
device_data = get_device_data(query)
if device_data.status_code == 200:
return jsonify({"success": True, "devices": [device_data.get_json()]})
else:
return jsonify({"success": False, "error": "Device not found"}), 404
# Create fresh DB instance for this thread
device_handler = DeviceInstance()
matches = device_handler.search(query)
if not matches:
return jsonify({"success": False, "error": "No devices found"}), 404
return jsonify({"success": True, "devices": matches})
@app.route('/mcp/sse/devices/latest', methods=['GET'])
@app.route('/devices/latest', methods=['GET'])
def api_devices_latest():
"""Get latest device (most recent) - maps to DeviceInstance.getLatest()."""
if not is_authorized():
return jsonify({"error": "Unauthorized"}), 401
device_handler = DeviceInstance()
latest = device_handler.getLatest()
if not latest:
return jsonify({"message": "No devices found"}), 404
return jsonify([latest])
@app.route('/mcp/sse/devices/network/topology', methods=['GET'])
@app.route('/devices/network/topology', methods=['GET'])
def api_devices_network_topology():
"""Network topology mapping."""
if not is_authorized():
return jsonify({"error": "Unauthorized"}), 401
device_handler = DeviceInstance()
result = device_handler.getNetworkTopology()
return jsonify(result)
# --------------------------
# Net tools
# --------------------------
@app.route('/mcp/sse/nettools/wakeonlan', methods=['POST'])
@app.route("/nettools/wakeonlan", methods=["POST"])
def api_wakeonlan():
if not is_authorized():
return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
mac = request.json.get("devMac")
data = request.json or {}
mac = data.get("devMac")
ip = data.get("devLastIP") or data.get('ip')
if not mac and ip:
device_handler = DeviceInstance()
dev = device_handler.getByIP(ip)
if not dev or not dev.get('devMac'):
return jsonify({"success": False, "message": "ERROR: Device not found", "error": "MAC not resolved"}), 404
mac = dev.get('devMac')
return wakeonlan(mac)
@@ -368,11 +545,42 @@ def api_internet_info():
return internet_info()
@app.route('/mcp/sse/nettools/trigger-scan', methods=['POST'])
@app.route("/nettools/trigger-scan", methods=["GET"])
def api_trigger_scan():
if not is_authorized():
return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
data = request.get_json(silent=True) or {}
scan_type = data.get('type', 'ARPSCAN')
# Validate scan type
loaded_plugins = get_setting_value('LOADED_PLUGINS')
if scan_type not in loaded_plugins:
return jsonify({"success": False, "error": f"Invalid scan type. Must be one of: {', '.join(loaded_plugins)}"}), 400
queue = UserEventsQueueInstance()
action = f"run|{scan_type}"
queue.add_event(action)
return jsonify({"success": True, "message": f"Scan triggered for type: {scan_type}"}), 200
# --------------------------
# MCP Server
# --------------------------
@app.route('/mcp/sse/openapi.json', methods=['GET'])
def api_openapi_spec():
if not is_authorized():
return jsonify({"success": False, "error": "Unauthorized"}), 401
return openapi_spec()
# --------------------------
# DB query
# --------------------------
@app.route("/dbquery/read", methods=["POST"])
def dbquery_read():
if not is_authorized():
@@ -395,6 +603,7 @@ def dbquery_write():
data = request.get_json() or {}
raw_sql_b64 = data.get("rawSql")
if not raw_sql_b64:
return jsonify({"success": False, "message": "ERROR: Missing parameters", "error": "rawSql is required"}), 400
return write_query(raw_sql_b64)
@@ -460,11 +669,13 @@ def api_delete_online_history():
@app.route("/logs", methods=["DELETE"])
def api_clean_log():
if not is_authorized():
return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
file = request.args.get("file")
if not file:
return jsonify({"success": False, "message": "ERROR: Missing parameters", "error": "Missing 'file' query parameter"}), 400
return clean_log(file)
@@ -499,8 +710,6 @@ def api_add_to_execution_queue():
# --------------------------
# Device Events
# --------------------------
@app.route("/events/create/<mac>", methods=["POST"])
def api_create_event(mac):
if not is_authorized():
@@ -564,6 +773,44 @@ def api_get_events_totals():
return get_events_totals(period)
@app.route('/mcp/sse/events/recent', methods=['GET', 'POST'])
@app.route('/events/recent', methods=['GET'])
def api_events_default_24h():
return api_events_recent(24) # Reuse handler
@app.route('/mcp/sse/events/last', methods=['GET', 'POST'])
@app.route('/events/last', methods=['GET'])
def get_last_events():
if not is_authorized():
return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
# Create fresh DB instance for this thread
event_handler = EventInstance()
return event_handler.get_last_n(10)
@app.route('/events/<int:hours>', methods=['GET'])
def api_events_recent(hours):
"""Return events from the last <hours> hours using EventInstance."""
if not is_authorized():
return jsonify({"success": False, "error": "Unauthorized"}), 401
# Validate hours input
if hours <= 0:
return jsonify({"success": False, "error": "Hours must be > 0"}), 400
try:
# Create fresh DB instance for this thread
event_handler = EventInstance()
events = event_handler.get_by_hours(hours)
return jsonify({"success": True, "hours": hours, "count": len(events), "events": events}), 200
except Exception as ex:
return jsonify({"success": False, "error": str(ex)}), 500
# --------------------------
# Sessions
# --------------------------
@@ -793,3 +1040,9 @@ def start_server(graphql_port, app_state):
# Update the state to indicate the server has started
app_state = updateState("Process: Idle", None, None, None, 1)
if __name__ == "__main__":
# This block is for running the server directly for testing purposes
# In production, start_server is called from api.py
pass

View File

@@ -228,7 +228,8 @@ def devices_totals():
def devices_by_status(status=None):
"""
Return devices filtered by status.
Return devices filtered by status. Returns all if no status provided.
Possible statuses: my, connected, favorites, new, down, archived
"""
conn = get_temp_db_connection()

View File

@@ -0,0 +1,204 @@
#!/usr/bin/env python
import threading
from flask import Blueprint, request, jsonify, Response, stream_with_context
from helper import get_setting_value
from helper import mylog
# from .events_endpoint import get_events # will import locally where needed
import requests
import json
import uuid
import queue
# Blueprints
mcp_bp = Blueprint('mcp', __name__)
tools_bp = Blueprint('tools', __name__)
mcp_sessions = {}
mcp_sessions_lock = threading.Lock()
def check_auth():
token = request.headers.get("Authorization")
expected_token = f"Bearer {get_setting_value('API_TOKEN')}"
return token == expected_token
# --------------------------
# Specs
# --------------------------
def openapi_spec():
# Spec matching actual available routes for MCP tools
mylog("verbose", ["[MCP] OpenAPI spec requested"])
spec = {
"openapi": "3.0.0",
"info": {"title": "NetAlertX Tools", "version": "1.1.0"},
"servers": [{"url": "/"}],
"paths": {
"/devices/by-status": {"post": {"operationId": "list_devices"}},
"/device/{mac}": {"post": {"operationId": "get_device_info"}},
"/devices/search": {"post": {"operationId": "search_devices"}},
"/devices/latest": {"get": {"operationId": "get_latest_device"}},
"/nettools/trigger-scan": {"post": {"operationId": "trigger_scan"}},
"/device/open_ports": {"post": {"operationId": "get_open_ports"}},
"/devices/network/topology": {"get": {"operationId": "get_network_topology"}},
"/events/recent": {"get": {"operationId": "get_recent_alerts"}, "post": {"operationId": "get_recent_alerts"}},
"/events/last": {"get": {"operationId": "get_last_events"}, "post": {"operationId": "get_last_events"}},
"/device/{mac}/set-alias": {"post": {"operationId": "set_device_alias"}},
"/nettools/wakeonlan": {"post": {"operationId": "wol_wake_device"}}
}
}
return jsonify(spec)
# --------------------------
# MCP SSE/JSON-RPC Endpoint
# --------------------------
# Sessions for SSE
_openapi_spec_cache = None
API_BASE_URL = f"http://localhost:{get_setting_value('GRAPHQL_PORT')}"
def get_openapi_spec():
global _openapi_spec_cache
if _openapi_spec_cache:
return _openapi_spec_cache
try:
r = requests.get(f"{API_BASE_URL}/mcp/openapi.json", timeout=10)
r.raise_for_status()
_openapi_spec_cache = r.json()
return _openapi_spec_cache
except Exception:
return None
def map_openapi_to_mcp_tools(spec):
tools = []
if not spec or 'paths' not in spec:
return tools
for path, methods in spec['paths'].items():
for method, details in methods.items():
if 'operationId' in details:
tool = {'name': details['operationId'], 'description': details.get('description', ''), 'inputSchema': {'type': 'object', 'properties': {}, 'required': []}}
if 'requestBody' in details:
content = details['requestBody'].get('content', {})
if 'application/json' in content:
schema = content['application/json'].get('schema', {})
tool['inputSchema'] = schema.copy()
if 'parameters' in details:
for param in details['parameters']:
if param.get('in') == 'query':
tool['inputSchema']['properties'][param['name']] = {'type': param.get('schema', {}).get('type', 'string'), 'description': param.get('description', '')}
if param.get('required'):
tool['inputSchema']['required'].append(param['name'])
tools.append(tool)
return tools
def process_mcp_request(data):
method = data.get('method')
msg_id = data.get('id')
if method == 'initialize':
return {'jsonrpc': '2.0', 'id': msg_id, 'result': {'protocolVersion': '2024-11-05', 'capabilities': {'tools': {}}, 'serverInfo': {'name': 'NetAlertX', 'version': '1.0.0'}}}
if method == 'notifications/initialized':
return None
if method == 'tools/list':
spec = get_openapi_spec()
tools = map_openapi_to_mcp_tools(spec)
return {'jsonrpc': '2.0', 'id': msg_id, 'result': {'tools': tools}}
if method == 'tools/call':
params = data.get('params', {})
tool_name = params.get('name')
tool_args = params.get('arguments', {})
spec = get_openapi_spec()
target_path = None
target_method = None
if spec and 'paths' in spec:
for path, methods in spec['paths'].items():
for m, details in methods.items():
if details.get('operationId') == tool_name:
target_path = path
target_method = m.upper()
break
if target_path:
break
if not target_path:
return {'jsonrpc': '2.0', 'id': msg_id, 'error': {'code': -32601, 'message': f"Tool {tool_name} not found"}}
try:
headers = {'Content-Type': 'application/json'}
if 'Authorization' in request.headers:
headers['Authorization'] = request.headers['Authorization']
url = f"{API_BASE_URL}{target_path}"
if target_method == 'POST':
api_res = requests.post(url, json=tool_args, headers=headers, timeout=30)
else:
api_res = requests.get(url, params=tool_args, headers=headers, timeout=30)
content = []
try:
json_content = api_res.json()
content.append({'type': 'text', 'text': json.dumps(json_content, indent=2)})
except Exception:
content.append({'type': 'text', 'text': api_res.text})
is_error = api_res.status_code >= 400
return {'jsonrpc': '2.0', 'id': msg_id, 'result': {'content': content, 'isError': is_error}}
except Exception as e:
return {'jsonrpc': '2.0', 'id': msg_id, 'result': {'content': [{'type': 'text', 'text': f"Error calling tool: {str(e)}"}], 'isError': True}}
if method == 'ping':
return {'jsonrpc': '2.0', 'id': msg_id, 'result': {}}
if msg_id:
return {'jsonrpc': '2.0', 'id': msg_id, 'error': {'code': -32601, 'message': 'Method not found'}}
def mcp_messages():
session_id = request.args.get('session_id')
if not session_id:
return jsonify({"error": "Missing session_id"}), 400
with mcp_sessions_lock:
if session_id not in mcp_sessions:
return jsonify({"error": "Session not found"}), 404
q = mcp_sessions[session_id]
data = request.json
if not data:
return jsonify({"error": "Invalid JSON"}), 400
response = process_mcp_request(data)
if response:
q.put(response)
return jsonify({"status": "accepted"}), 202
def mcp_sse():
if request.method == 'POST':
try:
data = request.get_json(silent=True)
if data and 'method' in data and 'jsonrpc' in data:
response = process_mcp_request(data)
if response:
return jsonify(response)
else:
return '', 202
except Exception as e:
mylog("none", f'SSE POST processing error: {e}')
return jsonify({'status': 'ok', 'message': 'MCP SSE endpoint active'}), 200
session_id = uuid.uuid4().hex
q = queue.Queue()
with mcp_sessions_lock:
mcp_sessions[session_id] = q
def stream():
yield f"event: endpoint\ndata: /mcp/messages?session_id={session_id}\n\n"
try:
while True:
try:
message = q.get(timeout=20)
yield f"event: message\ndata: {json.dumps(message)}\n\n"
except queue.Empty:
yield ": keep-alive\n\n"
except GeneratorExit:
with mcp_sessions_lock:
if session_id in mcp_sessions:
del mcp_sessions[session_id]
return Response(stream_with_context(stream()), mimetype='text/event-stream')
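A minimal client-side sketch of the JSON-RPC exchange these routes support, assuming the direct-POST path (handled synchronously by `mcp_sse()` above) and placeholder host, port, and token:
```python
# Sketch only: list the MCP tools exposed by the bridge by POSTing a JSON-RPC
# request straight to /mcp/sse. Host, port and <API_TOKEN> are placeholders.
import requests

BASE_URL = "http://localhost:20212"
HEADERS = {"Authorization": "Bearer <API_TOKEN>", "Content-Type": "application/json"}

rpc = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
resp = requests.post(f"{BASE_URL}/mcp/sse", json=rpc, headers=HEADERS, timeout=10)
# The bridge maps the cached OpenAPI spec to MCP tool descriptors.
print(resp.json().get("result", {}).get("tools", []))
```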

View File

@@ -0,0 +1,304 @@
"""MCP bridge routes exposing NetAlertX tool endpoints via JSON-RPC."""
import json
import uuid
import queue
import requests
import threading
import logging
from flask import Blueprint, request, Response, stream_with_context, jsonify
from helper import get_setting_value
mcp_bp = Blueprint('mcp', __name__)
# Store active sessions: session_id -> Queue
sessions = {}
sessions_lock = threading.Lock()
# Cache for OpenAPI spec to avoid fetching on every request
openapi_spec_cache = None
BACKEND_PORT = get_setting_value("GRAPHQL_PORT")
API_BASE_URL = f"http://localhost:{BACKEND_PORT}/api/tools"
def get_openapi_spec():
"""Fetch and cache the tools OpenAPI specification from the local API server."""
global openapi_spec_cache
if openapi_spec_cache:
return openapi_spec_cache
try:
# Fetch from local server
# We use localhost because this code runs on the server
response = requests.get(f"{API_BASE_URL}/openapi.json", timeout=10)
response.raise_for_status()
openapi_spec_cache = response.json()
return openapi_spec_cache
except Exception as e:
print(f"Error fetching OpenAPI spec: {e}")
return None
def map_openapi_to_mcp_tools(spec):
"""Convert OpenAPI paths into MCP tool descriptors."""
tools = []
if not spec or "paths" not in spec:
return tools
for path, methods in spec["paths"].items():
for method, details in methods.items():
if "operationId" in details:
tool = {
"name": details["operationId"],
"description": details.get("description", details.get("summary", "")),
"inputSchema": {
"type": "object",
"properties": {},
"required": []
}
}
# Extract parameters from requestBody if present
if "requestBody" in details:
content = details["requestBody"].get("content", {})
if "application/json" in content:
schema = content["application/json"].get("schema", {})
tool["inputSchema"] = schema.copy()
if "properties" not in tool["inputSchema"]:
tool["inputSchema"]["properties"] = {}
if "required" not in tool["inputSchema"]:
tool["inputSchema"]["required"] = []
# Extract parameters from 'parameters' list (query/path params) - simplistic support
if "parameters" in details:
for param in details["parameters"]:
if param.get("in") == "query":
tool["inputSchema"]["properties"][param["name"]] = {
"type": param.get("schema", {}).get("type", "string"),
"description": param.get("description", "")
}
if param.get("required"):
if "required" not in tool["inputSchema"]:
tool["inputSchema"]["required"] = []
tool["inputSchema"]["required"].append(param["name"])
tools.append(tool)
return tools
def process_mcp_request(data):
"""Handle incoming MCP JSON-RPC requests and route them to tools."""
method = data.get("method")
msg_id = data.get("id")
response = None
if method == "initialize":
response = {
"jsonrpc": "2.0",
"id": msg_id,
"result": {
"protocolVersion": "2024-11-05",
"capabilities": {
"tools": {}
},
"serverInfo": {
"name": "NetAlertX",
"version": "1.0.0"
}
}
}
elif method == "notifications/initialized":
# No response needed for notification
pass
elif method == "tools/list":
spec = get_openapi_spec()
tools = map_openapi_to_mcp_tools(spec)
response = {
"jsonrpc": "2.0",
"id": msg_id,
"result": {
"tools": tools
}
}
elif method == "tools/call":
params = data.get("params", {})
tool_name = params.get("name")
tool_args = params.get("arguments", {})
# Find the endpoint for this tool
spec = get_openapi_spec()
target_path = None
target_method = None
if spec and "paths" in spec:
for path, methods in spec["paths"].items():
for m, details in methods.items():
if details.get("operationId") == tool_name:
target_path = path
target_method = m.upper()
break
if target_path:
break
if target_path:
try:
# Make the request to the local API
# We forward the Authorization header from the incoming request if present
headers = {
"Content-Type": "application/json"
}
if "Authorization" in request.headers:
headers["Authorization"] = request.headers["Authorization"]
url = f"{API_BASE_URL}{target_path}"
if target_method == "POST":
api_res = requests.post(url, json=tool_args, headers=headers, timeout=30)
elif target_method == "GET":
api_res = requests.get(url, params=tool_args, headers=headers, timeout=30)
else:
api_res = None
if api_res:
content = []
try:
json_content = api_res.json()
content.append({
"type": "text",
"text": json.dumps(json_content, indent=2)
})
except (ValueError, json.JSONDecodeError):
content.append({
"type": "text",
"text": api_res.text
})
is_error = api_res.status_code >= 400
response = {
"jsonrpc": "2.0",
"id": msg_id,
"result": {
"content": content,
"isError": is_error
}
}
else:
response = {
"jsonrpc": "2.0",
"id": msg_id,
"error": {"code": -32601, "message": f"Method {target_method} not supported"}
}
except Exception as e:
response = {
"jsonrpc": "2.0",
"id": msg_id,
"result": {
"content": [{"type": "text", "text": f"Error calling tool: {str(e)}"}],
"isError": True
}
}
else:
response = {
"jsonrpc": "2.0",
"id": msg_id,
"error": {"code": -32601, "message": f"Tool {tool_name} not found"}
}
elif method == "ping":
response = {
"jsonrpc": "2.0",
"id": msg_id,
"result": {}
}
else:
# Unknown method
if msg_id: # Only respond if it's a request (has id)
response = {
"jsonrpc": "2.0",
"id": msg_id,
"error": {"code": -32601, "message": "Method not found"}
}
return response
@mcp_bp.route('/sse', methods=['GET', 'POST'])
def handle_sse():
"""Expose an SSE endpoint that streams MCP responses to connected clients."""
if request.method == 'POST':
# Handle verification or keep-alive pings
try:
data = request.get_json(silent=True)
if data and "method" in data and "jsonrpc" in data:
response = process_mcp_request(data)
if response:
return jsonify(response)
else:
# Notification or no response needed
return "", 202
except Exception as e:
# Log but don't fail - malformed requests shouldn't crash the endpoint
logging.getLogger(__name__).debug(f"SSE POST processing error: {e}")
return jsonify({"status": "ok", "message": "MCP SSE endpoint active"}), 200
session_id = uuid.uuid4().hex
q = queue.Queue()
with sessions_lock:
sessions[session_id] = q
def stream():
"""Yield SSE messages for queued MCP responses until the client disconnects."""
# Send the endpoint event
# The client should POST to /api/mcp/messages?session_id=<session_id>
yield f"event: endpoint\ndata: /api/mcp/messages?session_id={session_id}\n\n"
try:
while True:
try:
# Wait for messages
message = q.get(timeout=20) # Keep-alive timeout
yield f"event: message\ndata: {json.dumps(message)}\n\n"
except queue.Empty:
# Send keep-alive comment
yield ": keep-alive\n\n"
except GeneratorExit:
with sessions_lock:
if session_id in sessions:
del sessions[session_id]
return Response(stream_with_context(stream()), mimetype='text/event-stream')
@mcp_bp.route('/messages', methods=['POST'])
def handle_messages():
"""Receive MCP JSON-RPC messages and enqueue responses for an SSE session."""
session_id = request.args.get('session_id')
if not session_id:
return jsonify({"error": "Missing session_id"}), 400
with sessions_lock:
if session_id not in sessions:
return jsonify({"error": "Session not found"}), 404
q = sessions[session_id]
data = request.json
if not data:
return jsonify({"error": "Invalid JSON"}), 400
response = process_mcp_request(data)
if response:
q.put(response)
return jsonify({"status": "accepted"}), 202

View File

@@ -1,83 +1,134 @@
from front.plugins.plugin_helper import is_mac
from logger import mylog
from models.plugin_object_instance import PluginObjectInstance
from database import get_temp_db_connection
# -------------------------------------------------------------------------------
# Device object handling (WIP)
# -------------------------------------------------------------------------------
class DeviceInstance:
def __init__(self, db):
self.db = db
# Get all
# --- helpers --------------------------------------------------------------
def _fetchall(self, query, params=()):
conn = get_temp_db_connection()
rows = conn.execute(query, params).fetchall()
conn.close()
return [dict(r) for r in rows]
def _fetchone(self, query, params=()):
conn = get_temp_db_connection()
row = conn.execute(query, params).fetchone()
conn.close()
return dict(row) if row else None
def _execute(self, query, params=()):
conn = get_temp_db_connection()
cur = conn.cursor()
cur.execute(query, params)
conn.commit()
conn.close()
# --- public API -----------------------------------------------------------
def getAll(self):
self.db.sql.execute("""
SELECT * FROM Devices
""")
return self.db.sql.fetchall()
return self._fetchall("SELECT * FROM Devices")
# Get all with unknown names
def getUnknown(self):
self.db.sql.execute("""
SELECT * FROM Devices WHERE devName in ("(unknown)", "(name not found)", "" )
return self._fetchall("""
SELECT * FROM Devices
WHERE devName IN ("(unknown)", "(name not found)", "")
""")
return self.db.sql.fetchall()
# Get specific column value based on devMac
def getValueWithMac(self, column_name, devMac):
query = f"SELECT {column_name} FROM Devices WHERE devMac = ?"
self.db.sql.execute(query, (devMac,))
result = self.db.sql.fetchone()
return result[column_name] if result else None
row = self._fetchone(f"""
SELECT {column_name} FROM Devices WHERE devMac = ?
""", (devMac,))
return row.get(column_name) if row else None
# Get all down
def getDown(self):
self.db.sql.execute("""
SELECT * FROM Devices WHERE devAlertDown = 1 and devPresentLastScan = 0
return self._fetchall("""
SELECT * FROM Devices
WHERE devAlertDown = 1 AND devPresentLastScan = 0
""")
return self.db.sql.fetchall()
# Get all down
def getOffline(self):
self.db.sql.execute("""
SELECT * FROM Devices WHERE devPresentLastScan = 0
return self._fetchall("""
SELECT * FROM Devices
WHERE devPresentLastScan = 0
""")
return self.db.sql.fetchall()
# Get a device by devGUID
def getByGUID(self, devGUID):
self.db.sql.execute("SELECT * FROM Devices WHERE devGUID = ?", (devGUID,))
result = self.db.sql.fetchone()
return dict(result) if result else None
return self._fetchone("""
SELECT * FROM Devices WHERE devGUID = ?
""", (devGUID,))
# Check if a device exists by devGUID
def exists(self, devGUID):
self.db.sql.execute(
"SELECT COUNT(*) AS count FROM Devices WHERE devGUID = ?", (devGUID,)
)
result = self.db.sql.fetchone()
return result["count"] > 0
row = self._fetchone("""
SELECT COUNT(*) as count FROM Devices WHERE devGUID = ?
""", (devGUID,))
return row['count'] > 0 if row else False
def getByIP(self, ip):
return self._fetchone("""
SELECT * FROM Devices WHERE devLastIP = ?
""", (ip,))
def search(self, query):
like = f"%{query}%"
return self._fetchall("""
SELECT * FROM Devices
WHERE devMac LIKE ? OR devName LIKE ? OR devLastIP LIKE ?
""", (like, like, like))
def getLatest(self):
return self._fetchone("""
SELECT * FROM Devices
ORDER BY devFirstConnection DESC LIMIT 1
""")
def getNetworkTopology(self):
rows = self._fetchall("""
SELECT devName, devMac, devParentMAC, devParentPort, devVendor FROM Devices
""")
nodes = [{"id": r["devMac"], "name": r["devName"], "vendor": r["devVendor"]} for r in rows]
links = [{"source": r["devParentMAC"], "target": r["devMac"], "port": r["devParentPort"]}
for r in rows if r["devParentMAC"]]
return {"nodes": nodes, "links": links}
# Update a specific field for a device
def updateField(self, devGUID, field, value):
if not self.exists(devGUID):
m = f"[Device] In 'updateField': GUID {devGUID} not found."
mylog("none", m)
raise ValueError(m)
msg = f"[Device] updateField: GUID {devGUID} not found"
mylog("none", msg)
raise ValueError(msg)
self._execute(f"UPDATE Devices SET {field}=? WHERE devGUID=?", (value, devGUID))
self.db.sql.execute(
f"""
UPDATE Devices SET {field} = ? WHERE devGUID = ?
""",
(value, devGUID),
)
self.db.commitDB()
# Delete a device by devGUID
def delete(self, devGUID):
if not self.exists(devGUID):
m = f"[Device] In 'delete': GUID {devGUID} not found."
mylog("none", m)
raise ValueError(m)
msg = f"[Device] delete: GUID {devGUID} not found"
mylog("none", msg)
raise ValueError(msg)
self._execute("DELETE FROM Devices WHERE devGUID=?", (devGUID,))
self.db.sql.execute("DELETE FROM Devices WHERE devGUID = ?", (devGUID,))
self.db.commitDB()
def resolvePrimaryID(self, target):
if is_mac(target):
return target.lower()
dev = self.getByIP(target)
return dev['devMac'].lower() if dev else None
def getOpenPorts(self, target):
primary = self.resolvePrimaryID(target)
if not primary:
return []
objs = PluginObjectInstance().getByField(
plugPrefix='NMAP',
matchedColumn='Object_PrimaryID',
matchedKey=primary,
returnFields=['Object_SecondaryID', 'Watched_Value2']
)
ports = []
for o in objs:
port = int(o.get('Object_SecondaryID') or 0)
ports.append({"port": port, "service": o.get('Watched_Value2', '')})
return ports
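As used by the refactored plugins above, a minimal usage sketch of the connection-less `DeviceInstance` (the target IP is a placeholder and the NetAlertX `server` modules are assumed to be on `sys.path`):
```python
# Sketch only: query devices via the thread-safe DeviceInstance helpers.
from models.device_instance import DeviceInstance

device_handler = DeviceInstance()                     # no DB handle needed any more
offline = device_handler.getOffline()                 # devices absent in the last scan
ports = device_handler.getOpenPorts("192.168.1.82")   # placeholder IP; resolved to a MAC internally
print(len(offline), ports)
```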

View File

@@ -0,0 +1,107 @@
from datetime import datetime, timedelta
from logger import mylog
from database import get_temp_db_connection
# -------------------------------------------------------------------------------
# Event handling (Matches table: Events)
# -------------------------------------------------------------------------------
class EventInstance:
def _conn(self):
"""Always return a new DB connection (thread-safe)."""
return get_temp_db_connection()
def _rows_to_list(self, rows):
return [dict(r) for r in rows]
# Get all events
def get_all(self):
conn = self._conn()
rows = conn.execute(
"SELECT * FROM Events ORDER BY eve_DateTime DESC"
).fetchall()
conn.close()
return self._rows_to_list(rows)
# --- Get last n events ---
def get_last_n(self, n=10):
conn = self._conn()
rows = conn.execute("""
SELECT * FROM Events
ORDER BY eve_DateTime DESC
LIMIT ?
""", (n,)).fetchall()
conn.close()
return self._rows_to_list(rows)
# --- Specific helper for last 10 ---
def get_last(self):
return self.get_last_n(10)
# Get events in the last 24h
def get_recent(self):
since = datetime.now() - timedelta(hours=24)
conn = self._conn()
rows = conn.execute("""
SELECT * FROM Events
WHERE eve_DateTime >= ?
ORDER BY eve_DateTime DESC
""", (since,)).fetchall()
conn.close()
return self._rows_to_list(rows)
# Get events from last N hours
def get_by_hours(self, hours: int):
if hours <= 0:
mylog("warn", f"[Events] get_by_hours({hours}) -> invalid value")
return []
since = datetime.now() - timedelta(hours=hours)
conn = self._conn()
rows = conn.execute("""
SELECT * FROM Events
WHERE eve_DateTime >= ?
ORDER BY eve_DateTime DESC
""", (since,)).fetchall()
conn.close()
return self._rows_to_list(rows)
# Get events in a date range
def get_by_range(self, start: datetime, end: datetime):
if end < start:
mylog("error", f"[Events] get_by_range invalid: {start} > {end}")
raise ValueError("Start must not be after end")
conn = self._conn()
rows = conn.execute("""
SELECT * FROM Events
WHERE eve_DateTime BETWEEN ? AND ?
ORDER BY eve_DateTime DESC
""", (start, end)).fetchall()
conn.close()
return self._rows_to_list(rows)
# Insert new event
def add(self, mac, ip, eventType, info="", pendingAlert=True, pairRow=None):
conn = self._conn()
conn.execute("""
INSERT INTO Events (
eve_MAC, eve_IP, eve_DateTime,
eve_EventType, eve_AdditionalInfo,
eve_PendingAlertEmail, eve_PairEventRowid
) VALUES (?,?,?,?,?,?,?)
""", (mac, ip, datetime.now(), eventType, info,
1 if pendingAlert else 0, pairRow))
conn.commit()
conn.close()
# Delete old events
def delete_older_than(self, days: int):
cutoff = datetime.now() - timedelta(days=days)
conn = self._conn()
result = conn.execute("DELETE FROM Events WHERE eve_DateTime < ?", (cutoff,))
conn.commit()
deleted_count = result.rowcount
conn.close()
return deleted_count
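A short usage sketch for EventInstance — the MAC, IP, and retention values below are illustrative, not taken from the diff:
# Illustrative only — every call opens and closes its own connection via get_temp_db_connection().
events = EventInstance()
events.add("aa:bb:cc:dd:ee:ff", "192.168.1.10", "Connected", info="seen by ARPSCAN")
recent = events.get_by_hours(6)           # dicts for the last 6 hours, newest first
removed = events.delete_older_than(90)    # number of rows pruned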

View File

@@ -1,70 +1,91 @@
from logger import mylog
from database import get_temp_db_connection
# -------------------------------------------------------------------------------
# Plugin object handling (THREAD-SAFE REWRITE)
# -------------------------------------------------------------------------------
class PluginObjectInstance:
# -------------- Internal DB helper wrappers --------------------------------
def _fetchall(self, query, params=()):
conn = get_temp_db_connection()
rows = conn.execute(query, params).fetchall()
conn.close()
return [dict(r) for r in rows]
def _fetchone(self, query, params=()):
conn = get_temp_db_connection()
row = conn.execute(query, params).fetchone()
conn.close()
return dict(row) if row else None
def _execute(self, query, params=()):
conn = get_temp_db_connection()
conn.execute(query, params)
conn.commit()
conn.close()
# ---------------------------------------------------------------------------
# Public API — identical behaviour, now thread-safe + self-contained
# ---------------------------------------------------------------------------
# Get all plugin objects
def getAll(self):
return self._fetchall("SELECT * FROM Plugins_Objects")
# Get plugin object by ObjectGUID
def getByGUID(self, ObjectGUID):
return self._fetchone(
"SELECT * FROM Plugins_Objects WHERE ObjectGUID = ?", (ObjectGUID,)
)
# Check if a plugin object exists by ObjectGUID
def exists(self, ObjectGUID):
row = self._fetchone("""
SELECT COUNT(*) AS count FROM Plugins_Objects WHERE ObjectGUID = ?
""", (ObjectGUID,))
return row["count"] > 0 if row else False
# Get objects by plugin name
def getByPlugin(self, plugin):
self.db.sql.execute("SELECT * FROM Plugins_Objects WHERE Plugin = ?", (plugin,))
return self.db.sql.fetchall()
return self._fetchall(
"SELECT * FROM Plugins_Objects WHERE Plugin = ?", (plugin,)
)
def getByField(self, plugPrefix, matchedColumn, matchedKey, returnFields=None):
rows = self._fetchall(
f"SELECT * FROM Plugins_Objects WHERE Plugin = ? AND {matchedColumn} = ?",
(plugPrefix, matchedKey.lower())
)
if not returnFields:
return rows
return [{f: row.get(f) for f in returnFields} for row in rows]
def getByPrimary(self, plugin, primary_id):
return self._fetchall("""
SELECT * FROM Plugins_Objects
WHERE Plugin = ? AND Object_PrimaryID = ?
""", (plugin, primary_id))
# Get objects by status
def getByStatus(self, status):
self.db.sql.execute("SELECT * FROM Plugins_Objects WHERE Status = ?", (status,))
return self.db.sql.fetchall()
return self._fetchall("""
SELECT * FROM Plugins_Objects WHERE Status = ?
""", (status,))
# Update a specific field for a plugin object
def updateField(self, ObjectGUID, field, value):
if not self.exists(ObjectGUID):
m = f"[PluginObject] In 'updateField': GUID {ObjectGUID} not found."
mylog("none", m)
raise ValueError(m)
msg = f"[PluginObject] updateField: GUID {ObjectGUID} not found."
mylog("none", msg)
raise ValueError(msg)
self.db.sql.execute(
f"""
UPDATE Plugins_Objects SET {field} = ? WHERE ObjectGUID = ?
""",
(value, ObjectGUID),
self._execute(
f"UPDATE Plugins_Objects SET {field}=? WHERE ObjectGUID=?",
(value, ObjectGUID)
)
self.db.commitDB()
# Delete a plugin object by ObjectGUID
def delete(self, ObjectGUID):
if not self.exists(ObjectGUID):
m = f"[PluginObject] In 'delete': GUID {ObjectGUID} not found."
mylog("none", m)
raise ValueError(m)
msg = f"[PluginObject] delete: GUID {ObjectGUID} not found."
mylog("none", msg)
raise ValueError(msg)
self.db.sql.execute(
"DELETE FROM Plugins_Objects WHERE ObjectGUID = ?", (ObjectGUID,)
)
self.db.commitDB()
self._execute("DELETE FROM Plugins_Objects WHERE ObjectGUID=?", (ObjectGUID,))

View File

@@ -650,7 +650,7 @@ def update_devices_names(pm):
sql = pm.db.sql
resolver = NameResolver(pm.db)
device_handler = DeviceInstance()
nameNotFound = "(name not found)"

View File

@@ -42,13 +42,13 @@ class UpdateFieldAction(Action):
# currently unused
if isinstance(obj, dict) and "ObjectGUID" in obj:
mylog("debug", f"[WF] Updating Object '{obj}' ")
plugin_instance = PluginObjectInstance()
plugin_instance.updateField(obj["ObjectGUID"], self.field, self.value)
processed = True
elif isinstance(obj, dict) and "devGUID" in obj:
mylog("debug", f"[WF] Updating Device '{obj}' ")
device_instance = DeviceInstance()
device_instance.updateField(obj["devGUID"], self.field, self.value)
processed = True
@@ -79,13 +79,13 @@ class DeleteObjectAction(Action):
# currently unused
if isinstance(obj, dict) and "ObjectGUID" in obj:
mylog("debug", f"[WF] Updating Object '{obj}' ")
plugin_instance = PluginObjectInstance()
plugin_instance.delete(obj["ObjectGUID"])
processed = True
elif isinstance(obj, dict) and "devGUID" in obj:
mylog("debug", f"[WF] Updating Device '{obj}' ")
device_instance = DeviceInstance()
device_instance.delete(obj["devGUID"])
processed = True

View File

@@ -0,0 +1,306 @@
import sys
import os
import pytest
from unittest.mock import patch, MagicMock
from datetime import datetime
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from helper import get_setting_value # noqa: E402
from api_server.api_server_start import app # noqa: E402
@pytest.fixture(scope="session")
def api_token():
return get_setting_value("API_TOKEN")
@pytest.fixture
def client():
with app.test_client() as client:
yield client
def auth_headers(token):
return {"Authorization": f"Bearer {token}"}
# --- Device Search Tests ---
@patch('models.device_instance.get_temp_db_connection')
def test_get_device_info_ip_partial(mock_db_conn, client, api_token):
"""Test device search with partial IP search."""
# Mock database connection - DeviceInstance._fetchall calls conn.execute().fetchall()
mock_conn = MagicMock()
mock_execute_result = MagicMock()
mock_execute_result.fetchall.return_value = [
{"devName": "Test Device", "devMac": "AA:BB:CC:DD:EE:FF", "devLastIP": "192.168.1.50"}
]
mock_conn.execute.return_value = mock_execute_result
mock_db_conn.return_value = mock_conn
payload = {"query": ".50"}
response = client.post('/devices/search',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
assert len(data["devices"]) == 1
assert data["devices"][0]["devLastIP"] == "192.168.1.50"
# --- Trigger Scan Tests ---
@patch('api_server.api_server_start.UserEventsQueueInstance')
def test_trigger_scan_ARPSCAN(mock_queue_class, client, api_token):
"""Test trigger_scan with ARPSCAN type."""
mock_queue = MagicMock()
mock_queue_class.return_value = mock_queue
payload = {"type": "ARPSCAN"}
response = client.post('/mcp/sse/nettools/trigger-scan',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
mock_queue.add_event.assert_called_once()
call_args = mock_queue.add_event.call_args[0]
assert "run|ARPSCAN" in call_args[0]
@patch('api_server.api_server_start.UserEventsQueueInstance')
def test_trigger_scan_invalid_type(mock_queue_class, client, api_token):
"""Test trigger_scan with invalid scan type."""
mock_queue = MagicMock()
mock_queue_class.return_value = mock_queue
payload = {"type": "invalid_type", "target": "192.168.1.0/24"}
response = client.post('/mcp/sse/nettools/trigger-scan',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 400
data = response.get_json()
assert data["success"] is False
# --- get_open_ports Tests ---
@patch('models.plugin_object_instance.get_temp_db_connection')
@patch('models.device_instance.get_temp_db_connection')
def test_get_open_ports_ip(mock_device_db_conn, mock_plugin_db_conn, client, api_token):
"""Test get_open_ports with an IP address."""
# Mock database connections for both device lookup and plugin objects
mock_conn = MagicMock()
mock_execute_result = MagicMock()
# Mock for PluginObjectInstance.getByField (returns port data)
mock_execute_result.fetchall.return_value = [
{"Object_SecondaryID": "22", "Watched_Value2": "ssh"},
{"Object_SecondaryID": "80", "Watched_Value2": "http"}
]
# Mock for DeviceInstance.getByIP (returns device with MAC)
mock_execute_result.fetchone.return_value = {"devMac": "AA:BB:CC:DD:EE:FF"}
mock_conn.execute.return_value = mock_execute_result
mock_plugin_db_conn.return_value = mock_conn
mock_device_db_conn.return_value = mock_conn
payload = {"target": "192.168.1.1"}
response = client.post('/device/open_ports',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
assert len(data["open_ports"]) == 2
assert data["open_ports"][0]["port"] == 22
assert data["open_ports"][1]["service"] == "http"
@patch('models.plugin_object_instance.get_temp_db_connection')
def test_get_open_ports_mac_resolve(mock_plugin_db_conn, client, api_token):
"""Test get_open_ports with a MAC address that resolves to an IP."""
# Mock database connection for MAC-based open ports query
mock_conn = MagicMock()
mock_execute_result = MagicMock()
mock_execute_result.fetchall.return_value = [
{"Object_SecondaryID": "80", "Watched_Value2": "http"}
]
mock_conn.execute.return_value = mock_execute_result
mock_plugin_db_conn.return_value = mock_conn
payload = {"target": "AA:BB:CC:DD:EE:FF"}
response = client.post('/device/open_ports',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
assert "target" in data
assert len(data["open_ports"]) == 1
assert data["open_ports"][0]["port"] == 80
# --- get_network_topology Tests ---
@patch('models.device_instance.get_temp_db_connection')
def test_get_network_topology(mock_db_conn, client, api_token):
"""Test get_network_topology."""
# Mock database connection for topology query
mock_conn = MagicMock()
mock_execute_result = MagicMock()
mock_execute_result.fetchall.return_value = [
{"devName": "Router", "devMac": "AA:AA:AA:AA:AA:AA", "devParentMAC": None, "devParentPort": None, "devVendor": "VendorA"},
{"devName": "Device1", "devMac": "BB:BB:BB:BB:BB:BB", "devParentMAC": "AA:AA:AA:AA:AA:AA", "devParentPort": "eth1", "devVendor": "VendorB"}
]
mock_conn.execute.return_value = mock_execute_result
mock_db_conn.return_value = mock_conn
response = client.get('/devices/network/topology',
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert len(data["nodes"]) == 2
assert len(data["links"]) == 1
assert data["links"][0]["source"] == "AA:AA:AA:AA:AA:AA"
assert data["links"][0]["target"] == "BB:BB:BB:BB:BB:BB"
# --- get_recent_alerts Tests ---
@patch('models.event_instance.get_temp_db_connection')
def test_get_recent_alerts(mock_db_conn, client, api_token):
"""Test get_recent_alerts."""
# Mock database connection for events query
mock_conn = MagicMock()
mock_execute_result = MagicMock()
now = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
mock_execute_result.fetchall.return_value = [
{"eve_DateTime": now, "eve_EventType": "New Device", "eve_MAC": "AA:BB:CC:DD:EE:FF"}
]
mock_conn.execute.return_value = mock_execute_result
mock_db_conn.return_value = mock_conn
response = client.get('/events/recent',
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
assert data["hours"] == 24
# --- Device Alias Tests ---
@patch('api_server.api_server_start.update_device_column')
def test_set_device_alias(mock_update_col, client, api_token):
"""Test set_device_alias."""
mock_update_col.return_value = {"success": True, "message": "Device alias updated"}
payload = {"alias": "New Device Name"}
response = client.post('/device/AA:BB:CC:DD:EE:FF/set-alias',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
mock_update_col.assert_called_once_with("AA:BB:CC:DD:EE:FF", "devName", "New Device Name")
@patch('api_server.api_server_start.update_device_column')
def test_set_device_alias_not_found(mock_update_col, client, api_token):
"""Test set_device_alias when device is not found."""
mock_update_col.return_value = {"success": False, "error": "Device not found"}
payload = {"alias": "New Device Name"}
response = client.post('/device/FF:FF:FF:FF:FF:FF/set-alias',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is False
assert "Device not found" in data["error"]
# --- Wake-on-LAN Tests ---
@patch('api_server.api_server_start.wakeonlan')
def test_wol_wake_device(mock_wakeonlan, client, api_token):
"""Test wol_wake_device."""
mock_wakeonlan.return_value = {"success": True, "message": "WOL packet sent to AA:BB:CC:DD:EE:FF"}
payload = {"devMac": "AA:BB:CC:DD:EE:FF"}
response = client.post('/nettools/wakeonlan',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert data["success"] is True
assert "AA:BB:CC:DD:EE:FF" in data["message"]
def test_wol_wake_device_invalid_mac(client, api_token):
"""Test wol_wake_device with invalid MAC."""
payload = {"devMac": "invalid-mac"}
response = client.post('/nettools/wakeonlan',
json=payload,
headers=auth_headers(api_token))
assert response.status_code == 400
data = response.get_json()
assert data["success"] is False
# --- Latest Device Tests ---
@patch('models.device_instance.get_temp_db_connection')
def test_get_latest_device(mock_db_conn, client, api_token):
"""Test get_latest_device endpoint."""
# Mock database connection for latest device query
mock_conn = MagicMock()
mock_execute_result = MagicMock()
mock_execute_result.fetchone.return_value = {
"devName": "Latest Device",
"devMac": "AA:BB:CC:DD:EE:FF",
"devLastIP": "192.168.1.100",
"devFirstConnection": "2025-12-07 10:30:00"
}
mock_conn.execute.return_value = mock_execute_result
mock_db_conn.return_value = mock_conn
response = client.get('/devices/latest',
headers=auth_headers(api_token))
assert response.status_code == 200
data = response.get_json()
assert len(data) == 1
assert data[0]["devName"] == "Latest Device"
assert data[0]["devMac"] == "AA:BB:CC:DD:EE:FF"
# --- OpenAPI Spec Tests ---
def test_openapi_spec(client, api_token):
"""Test openapi_spec endpoint contains MCP tool paths."""
response = client.get('/mcp/sse/openapi.json', headers=auth_headers(api_token))
assert response.status_code == 200
spec = response.get_json()
# Check for MCP tool endpoints in the spec with correct paths
assert "/nettools/trigger-scan" in spec["paths"]
assert "/device/open_ports" in spec["paths"]
assert "/devices/network/topology" in spec["paths"]
assert "/events/recent" in spec["paths"]
assert "/device/{mac}/set-alias" in spec["paths"]
assert "/nettools/wakeonlan" in spec["paths"]