Mirror of https://github.com/jokob-sk/NetAlertX.git (synced 2025-12-07 09:36:05 -08:00)

Compare commits: c8f3a84b92 ... fix-pr-130 (14 commits)
Commits:

- 1dee812ce6
- 5c44fd8fea
- bd691f01b1
- 624fd87ee7
- 5d1c63375b
- 8c982cd476
- 36e5751221
- dfd836527e
- 8d5a663817
- e64c490c8a
- 531b66effe
- 5e4ad10fe0
- 541b932b6d
- 2bf3ff9f00
.github/copilot-instructions.md (vendored, 3 changed lines)
@@ -39,10 +39,11 @@ Backend loop phases (see `server/__main__.py` and `server/plugin.py`): `once`, `

## API/Endpoints quick map

- Flask app: `server/api_server/api_server_start.py` exposes routes like `/device/<mac>`, `/devices`, `/devices/export/{csv,json}`, `/devices/import`, `/devices/totals`, `/devices/by-status`, plus `nettools`, `events`, `sessions`, `dbquery`, `metrics`, `sync`.
- Authorization: all routes expect the header `Authorization: Bearer <API_TOKEN>` via `get_setting_value('API_TOKEN')`.
- All responses must return `"success": <True|False>`, and if `False` an "error" message must be returned, e.g. `{"success": False, "error": f"No stored open ports for Device"}`.

## Conventions & helpers to reuse

- Settings: add/modify via `ccd()` in `server/initialise.py` or per‑plugin manifest. Never hardcode ports or secrets; use `get_setting_value()`.
- Logging: use `logger.mylog(level, [message])`; levels: none/minimal/verbose/debug/trace.
- Logging: use `mylog(level, [message])`; levels: none/minimal/verbose/debug/trace. `none` is used for the most important messages that should always appear, such as exceptions.
- Time/MAC/strings: `helper.py` (`timeNowDB`, `normalize_mac`, sanitizers). Validate MACs before DB writes.
- DB helpers: prefer `server/db/db_helper.py` functions (e.g., `get_table_json`, device condition helpers) over raw SQL in new paths.
Dockerfile (20 changed lines)
@@ -26,13 +26,23 @@ ENV PATH="/opt/venv/bin:$PATH"

# Install build dependencies
COPY requirements.txt /tmp/requirements.txt
RUN apk add --no-cache bash shadow python3 python3-dev gcc musl-dev libffi-dev openssl-dev git \
RUN apk add --no-cache \
    bash \
    shadow \
    python3 \
    python3-dev \
    gcc \
    musl-dev \
    libffi-dev \
    openssl-dev \
    git \
    rust \
    cargo \
    && python -m venv /opt/venv

# Create virtual environment owned by root, but readable by everyone else. This makes it easy to copy
# into the hardened stage without worrying about permissions and keeps the image size small. Keeping the commands
# together makes for a slightly smaller image size.
RUN pip install --no-cache-dir -r /tmp/requirements.txt && \
# Upgrade pip/wheel/setuptools and install Python packages
RUN python -m pip install --upgrade pip setuptools wheel && \
    pip install --no-cache-dir -r /tmp/requirements.txt && \
    chmod -R u-rwx,g-rwx /opt

# second stage is the main runtime stage with just the minimum required to run the application
docs/API.md (22 changed lines)
@@ -36,9 +36,15 @@ Authorization: Bearer <API_TOKEN>

If the token is missing or invalid, the server will return:

```json
{ "error": "Forbidden" }
{
  "success": false,
  "message": "ERROR: Not authorized",
  "error": "Forbidden"
}
```

HTTP Status: **403 Forbidden**

---
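For reference, a minimal authorized request could look like the sketch below (server address, port, token, and the `/devices` endpoint are placeholders picked for illustration):

```sh
curl "http://<server_ip>:<GRAPHQL_PORT>/devices" \
  -H "Authorization: Bearer <API_TOKEN>"
```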
## Base URL

@@ -54,6 +60,8 @@ http://<server>:<GRAPHQL_PORT>/

> [!TIP]
> When retrieving devices or settings, try the GraphQL API endpoint first as it is read-optimized.

### Standard REST Endpoints

* [Device API Endpoints](API_DEVICE.md) – Manage individual devices
* [Devices Collection](API_DEVICES.md) – Bulk operations on multiple devices
* [Events](API_EVENTS.md) – Device event logging and management

@@ -69,6 +77,18 @@ http://<server>:<GRAPHQL_PORT>/

* [Logs](API_LOGS.md) – Purging of logs and adding to the event execution queue for user-triggered events
* [DB query](API_DBQUERY.md) (⚠ Internal) - Low-level database access; use other endpoints if possible

### MCP Server Bridge

NetAlertX includes an **MCP (Model Context Protocol) Server Bridge** that provides AI assistants access to NetAlertX functionality through standardized tools. MCP endpoints are available at `/mcp/sse/*` paths and mirror the functionality of standard REST endpoints:

* `/mcp/sse` - Server-Sent Events endpoint for MCP client connections
* `/mcp/sse/openapi.json` - OpenAPI specification for available MCP tools
* `/mcp/sse/device/*`, `/mcp/sse/devices/*`, `/mcp/sse/nettools/*`, `/mcp/sse/events/*` - MCP-enabled versions of REST endpoints

MCP endpoints require the same Bearer token authentication as REST endpoints.

**📖 See [MCP Server Bridge API](API_MCP.md) for complete documentation, tool specifications, and integration examples.**

See [Testing](API_TESTS.md) for example requests and usage.

---
docs/API_DBQUERY.md

@@ -16,10 +16,14 @@ All `/dbquery/*` endpoints require an API token in the HTTP headers:

Authorization: Bearer <API_TOKEN>
```

If the token is missing or invalid:
If the token is missing or invalid (HTTP 403):

```json
{ "error": "Forbidden" }
{
  "success": false,
  "message": "ERROR: Not authorized",
  "error": "Forbidden"
}
```

---
docs/API_DEVICE.md

@@ -41,6 +41,8 @@ Manage a **single device** by its MAC address. Operations include retrieval, upd

* Device not found → HTTP 404
* Unauthorized → HTTP 403

**MCP Integration**: Available as `get_device_info` and `set_device_alias` tools. See [MCP Server Bridge API](API_MCP.md).
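As an illustration, the alias tool maps onto the `/device/<mac>/set-alias` route added in `api_server_start.py`; the MAC and alias below are made-up values:

```sh
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF/set-alias" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  --data '{"alias": "Living Room TV"}'
```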
---

## 2. Update Device Fields
docs/API_DEVICES.md

@@ -207,6 +207,93 @@ The Devices Collection API provides operations to **retrieve, manage, import/exp

---

### 9. Search Devices

* **POST** `/devices/search`
  Search for devices by MAC, name, or IP address.

**Request Body** (JSON):

```json
{
  "query": ".50"
}
```

**Response**:

```json
{
  "success": true,
  "devices": [
    {
      "devName": "Test Device",
      "devMac": "AA:BB:CC:DD:EE:FF",
      "devLastIP": "192.168.1.50"
    }
  ]
}
```

---

### 10. Get Latest Device

* **GET** `/devices/latest`
  Get the most recently connected device.

**Response**:

```json
[
  {
    "devName": "Latest Device",
    "devMac": "AA:BB:CC:DD:EE:FF",
    "devLastIP": "192.168.1.100",
    "devFirstConnection": "2025-12-07 10:30:00"
  }
]
```

---

### 11. Get Network Topology

* **GET** `/devices/network/topology`
  Get network topology showing device relationships.

**Response**:

```json
{
  "nodes": [
    {
      "id": "AA:AA:AA:AA:AA:AA",
      "name": "Router",
      "vendor": "VendorA"
    }
  ],
  "links": [
    {
      "source": "AA:AA:AA:AA:AA:AA",
      "target": "BB:BB:BB:BB:BB:BB",
      "port": "eth1"
    }
  ]
}
```

---

## MCP Tools

These endpoints are also available as **MCP Tools** for AI assistant integration:

- `list_devices`, `search_devices`, `get_latest_device`, `get_network_topology`, `set_device_alias`

📖 See [MCP Server Bridge API](API_MCP.md) for AI integration details.

---

## Example `curl` Requests

**Get All Devices**:

@@ -247,3 +334,26 @@ curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/devices/by-status?status=online"
  -H "Authorization: Bearer <API_TOKEN>"
```

**Search Devices**:

```sh
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/devices/search" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  --data '{"query": "192.168.1"}'
```

**Get Latest Device**:

```sh
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/devices/latest" \
  -H "Authorization: Bearer <API_TOKEN>"
```

**Get Network Topology**:

```sh
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/devices/network/topology" \
  -H "Authorization: Bearer <API_TOKEN>"
```
docs/API_EVENTS.md

@@ -88,7 +88,56 @@ The Events API provides access to **device event logs**, allowing creation, retr

---

### 4. Event Totals Over a Period
### 4. Get Recent Events

* **GET** `/events/recent` → Get events from the last 24 hours
* **GET** `/events/<hours>` → Get events from the last N hours

**Response** (JSON):

```json
{
  "success": true,
  "hours": 24,
  "count": 5,
  "events": [
    {
      "eve_DateTime": "2025-12-07 12:00:00",
      "eve_EventType": "New Device",
      "eve_MAC": "AA:BB:CC:DD:EE:FF",
      "eve_IP": "192.168.1.100",
      "eve_AdditionalInfo": "Device detected"
    }
  ]
}
```
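For illustration, these endpoints can be exercised with `curl` (host, port, and token are placeholders):

```sh
# events from the last 12 hours
curl "http://<server_ip>:<GRAPHQL_PORT>/events/12" \
  -H "Authorization: Bearer <API_TOKEN>"
```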
---

### 5. Get Latest Events

* **GET** `/events/last`
  Get the 10 most recent events.

**Response** (JSON):

```json
{
  "success": true,
  "count": 10,
  "events": [
    {
      "eve_DateTime": "2025-12-07 12:00:00",
      "eve_EventType": "Device Down",
      "eve_MAC": "AA:BB:CC:DD:EE:FF"
    }
  ]
}
```

---

### 6. Event Totals Over a Period

* **GET** `/sessions/totals?period=<period>`
  Return event and session totals over a given period.

@@ -116,12 +165,25 @@ The Events API provides access to **device event logs**, allowing creation, retr

---

## MCP Tools

Event endpoints are available as **MCP Tools** for AI assistant integration:

- `get_recent_alerts`, `get_last_events`

📖 See [MCP Server Bridge API](API_MCP.md) for AI integration details.

---

## Notes

* All endpoints require **authorization** (Bearer token). Unauthorized requests return:
* All endpoints require **authorization** (Bearer token). Unauthorized requests return HTTP 403:

```json
{ "error": "Forbidden" }
{
  "success": false,
  "message": "ERROR: Not authorized",
  "error": "Forbidden"
}
```

* Events are stored in the **Events table** with the following fields:
docs/API_MCP.md (new file, 326 lines)
@@ -0,0 +1,326 @@

# MCP Server Bridge API

The **MCP (Model Context Protocol) Server Bridge** provides AI assistants with standardized access to NetAlertX functionality through tools and server-sent events. This enables AI systems to interact with your network monitoring data in real-time.

---

## Overview

The MCP Server Bridge exposes NetAlertX functionality as **MCP Tools** that AI assistants can call to:

- Search and retrieve device information
- Trigger network scans
- Get network topology and events
- Wake devices via Wake-on-LAN
- Access open port information
- Set device aliases

All MCP endpoints mirror the functionality of standard REST endpoints but are optimized for AI assistant integration.

---

## Authentication

MCP endpoints use the same **Bearer token authentication** as REST endpoints:

```http
Authorization: Bearer <API_TOKEN>
```

Unauthorized requests return HTTP 403:

```json
{
  "success": false,
  "message": "ERROR: Not authorized",
  "error": "Forbidden"
}
```

---

## MCP Connection Endpoint

### Server-Sent Events (SSE)

* **GET/POST** `/mcp/sse`

Main MCP connection endpoint for AI clients. Establishes a persistent connection using Server-Sent Events for real-time communication between AI assistants and NetAlertX.

**Connection Example**:

```javascript
// Note: the browser-native EventSource API does not accept custom headers;
// this example assumes an implementation that does, such as the Node.js
// `eventsource` package.
const eventSource = new EventSource('/mcp/sse', {
  headers: {
    'Authorization': 'Bearer <API_TOKEN>'
  }
});

eventSource.onmessage = function(event) {
  const response = JSON.parse(event.data);
  console.log('MCP Response:', response);
};
```
---

## OpenAPI Specification

### Get MCP Tools Specification

* **GET** `/mcp/sse/openapi.json`

Returns the OpenAPI specification for all available MCP tools, describing the parameters and schemas for each tool.

**Response**:

```json
{
  "openapi": "3.0.0",
  "info": {
    "title": "NetAlertX Tools",
    "version": "1.1.0"
  },
  "servers": [{"url": "/"}],
  "paths": {
    "/devices/by-status": {
      "post": {"operationId": "list_devices"}
    },
    "/device/{mac}": {
      "post": {"operationId": "get_device_info"}
    },
    "/devices/search": {
      "post": {"operationId": "search_devices"}
    }
  }
}
```
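A quick way to inspect the spec is a plain `curl` call (host, port, and token are placeholders):

```sh
curl "http://<server_ip>:<GRAPHQL_PORT>/mcp/sse/openapi.json" \
  -H "Authorization: Bearer <API_TOKEN>"
```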
---

## Available MCP Tools

### Device Management Tools

| Tool | Endpoint | Description |
|------|----------|-------------|
| `list_devices` | `/mcp/sse/devices/by-status` | List devices by online status |
| `get_device_info` | `/mcp/sse/device/<mac>` | Get detailed device information |
| `search_devices` | `/mcp/sse/devices/search` | Search devices by MAC, name, or IP |
| `get_latest_device` | `/mcp/sse/devices/latest` | Get most recently connected device |
| `set_device_alias` | `/mcp/sse/device/<mac>/set-alias` | Set device friendly name |

### Network Tools

| Tool | Endpoint | Description |
|------|----------|-------------|
| `trigger_scan` | `/mcp/sse/nettools/trigger-scan` | Trigger network discovery scan |
| `get_open_ports` | `/mcp/sse/device/open_ports` | Get stored NMAP open ports for device |
| `wol_wake_device` | `/mcp/sse/nettools/wakeonlan` | Wake device using Wake-on-LAN |
| `get_network_topology` | `/mcp/sse/devices/network/topology` | Get network topology map |

### Event & Monitoring Tools

| Tool | Endpoint | Description |
|------|----------|-------------|
| `get_recent_alerts` | `/mcp/sse/events/recent` | Get events from last 24 hours |
| `get_last_events` | `/mcp/sse/events/last` | Get 10 most recent events |

---
## Tool Usage Examples

### Search Devices Tool

**Tool Call**:

```json
{
  "jsonrpc": "2.0",
  "id": "1",
  "method": "tools/call",
  "params": {
    "name": "search_devices",
    "arguments": {
      "query": "192.168.1"
    }
  }
}
```

**Response**:

```json
{
  "jsonrpc": "2.0",
  "id": "1",
  "result": {
    "content": [
      {
        "type": "text",
        "text": "{\n \"success\": true,\n \"devices\": [\n {\n \"devName\": \"Router\",\n \"devMac\": \"AA:BB:CC:DD:EE:FF\",\n \"devLastIP\": \"192.168.1.1\"\n }\n ]\n}"
      }
    ],
    "isError": false
  }
}
```

### Trigger Network Scan Tool

**Tool Call**:

```json
{
  "jsonrpc": "2.0",
  "id": "2",
  "method": "tools/call",
  "params": {
    "name": "trigger_scan",
    "arguments": {
      "type": "ARPSCAN"
    }
  }
}
```

**Response**:

```json
{
  "jsonrpc": "2.0",
  "id": "2",
  "result": {
    "content": [
      {
        "type": "text",
        "text": "{\n \"success\": true,\n \"message\": \"Scan triggered for type: ARPSCAN\"\n}"
      }
    ],
    "isError": false
  }
}
```
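Equivalently, the underlying route can be hit directly over REST; a sketch with placeholder host, port, and token:

```sh
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/mcp/sse/nettools/trigger-scan" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  --data '{"type": "ARPSCAN"}'
```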
### Wake-on-LAN Tool

**Tool Call**:

```json
{
  "jsonrpc": "2.0",
  "id": "3",
  "method": "tools/call",
  "params": {
    "name": "wol_wake_device",
    "arguments": {
      "devMac": "AA:BB:CC:DD:EE:FF"
    }
  }
}
```
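The corresponding REST route accepts the same `devMac` parameter; a sketch (placeholders as above):

```sh
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/nettools/wakeonlan" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  --data '{"devMac": "AA:BB:CC:DD:EE:FF"}'
```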
---

## Integration with AI Assistants

### Claude Desktop Integration

Add to your Claude Desktop `mcp.json` configuration:

```json
{
  "mcp": {
    "servers": {
      "netalertx": {
        "command": "node",
        "args": ["/path/to/mcp-client.js"],
        "env": {
          "NETALERTX_URL": "http://your-server:<GRAPHQL_PORT>",
          "NETALERTX_TOKEN": "your-api-token"
        }
      }
    }
  }
}
```

### Generic MCP Client

```python
import asyncio
import json
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Connect to NetAlertX MCP server
    server_params = StdioServerParameters(
        command="curl",
        args=[
            "-N", "-H", "Authorization: Bearer <API_TOKEN>",
            "http://your-server:<GRAPHQL_PORT>/mcp/sse"
        ]
    )

    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize connection
            await session.initialize()

            # List available tools
            tools = await session.list_tools()
            print(f"Available tools: {[t.name for t in tools.tools]}")

            # Call a tool
            result = await session.call_tool("search_devices", {"query": "router"})
            print(f"Search result: {result}")

if __name__ == "__main__":
    asyncio.run(main())
```
---

## Error Handling

MCP tool calls return structured error information:

**Error Response**:

```json
{
  "jsonrpc": "2.0",
  "id": "1",
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Error calling tool: Device not found"
      }
    ],
    "isError": true
  }
}
```

**Common Error Types**:

- `401/403` - Authentication failure
- `400` - Invalid parameters or missing required fields
- `404` - Resource not found (device, scan results, etc.)
- `500` - Internal server error

---

## Notes

* MCP endpoints require the same API token authentication as REST endpoints
* All MCP tools return JSON responses wrapped in the MCP protocol format
* Server-Sent Events maintain persistent connections for real-time updates
* Tool parameters match their REST endpoint equivalents
* Error responses include both HTTP status codes and descriptive messages
* The MCP bridge automatically handles request/response serialization

---

## Related Documentation

* [Main API Overview](API.md) - Core REST API documentation
* [Device API](API_DEVICE.md) - Individual device management
* [Devices Collection API](API_DEVICES.md) - Bulk device operations
* [Network Tools API](API_NETTOOLS.md) - Wake-on-LAN, scans, network utilities
* [Events API](API_EVENTS.md) - Event logging and monitoring
docs/API_NETTOOLS.md

@@ -241,3 +241,12 @@ curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/nettools/nmap" \

curl "http://<server_ip>:<GRAPHQL_PORT>/nettools/internetinfo" \
  -H "Authorization: Bearer <API_TOKEN>"
```

---

## MCP Tools

Network tools are available as **MCP Tools** for AI assistant integration:

- `wol_wake_device`, `trigger_scan`, `get_open_ports`

📖 See [MCP Server Bridge API](API_MCP.md) for AI integration details.
@@ -112,11 +112,3 @@ Slowness can be caused by:

> See [Performance Tips](./PERFORMANCE.md) for detailed optimization steps.

#### IP flipping

With `ARPSCAN` scans, some devices might flip IP addresses after each scan, triggering false notifications. This happens because some devices respond to broadcast calls, so a different IP can be logged after each scan.

See how to prevent IP flipping in the [ARPSCAN plugin guide](/front/plugins/arp_scan/README.md).

Alternatively, adjust your [notification settings](./NOTIFICATIONS.md) to prevent false positives by filtering out events or devices.
@@ -61,38 +61,20 @@ See alternative [docker-compose examples](https://github.com/jokob-sk/NetAlertX/

| Required | Path | Description |
| :------------- | :------------- | :-------------|
| ✅ | `:/data` | Folder which needs to contain `/db` and `/config` sub-folders. |
| ✅ | `/etc/localtime:/etc/localtime:ro` | Ensures the timezone is the same as on the server. |
| ✅ | `:/data` | Folder which will contain the `/db/app.db`, `/config/app.conf` & `/config/devices.csv` ([read about devices.csv](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEVICES_BULK_EDITING.md)) files |
| ✅ | `/etc/localtime:/etc/localtime:ro` | Ensures the timezone is the same as on the server. |
| | `:/tmp/log` | Logs folder, useful for debugging if you have issues setting up the container |
| | `:/tmp/api` | The [API endpoint](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md) containing static (but regularly updated) json and other files. Path configurable via the `NETALERTX_API` environment variable. |
| | `:/app/front/plugins/<plugin>/ignore_plugin` | Map a file `ignore_plugin` to ignore a plugin. Plugins can be soft-disabled via settings. More in the [Plugin docs](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md). |
| | `:/etc/resolv.conf` | Use a custom `resolv.conf` file for [better name resolution](https://github.com/jokob-sk/NetAlertX/blob/main/docs/REVERSE_DNS.md). |

### Folder structure

Use separate `db` and `config` directories, do not nest them:

```
data
├── config
└── db
```

### Permissions

If you are facing permissions issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the `/local_data_dir/db` and `/local_data_dir/config` folders (replace `local_data_dir` with the location of your `/db` and `/config` folders).

```bash
sudo chown -R 20211:20211 /local_data_dir
sudo chmod -R a+rwx /local_data_dir
```

> Use separate `db` and `config` directories, do not nest them.

### Initial setup

- If unavailable, the app generates a default `app.conf` and `app.db` file on the first run.
- The preferred way is to manage the configuration via the Settings section of the UI; if the UI is inaccessible, you can modify [app.conf](https://github.com/jokob-sk/NetAlertX/tree/main/back) in the `/data/config/` folder directly.

#### Setting up scanners

You have to specify which network(s) should be scanned. This is done by entering subnets that are accessible from the host. If you use the default `ARPSCAN` plugin, you have to specify at least one valid subnet and interface in the `SCAN_SUBNETS` setting. See the documentation on [How to set up multiple SUBNETS, VLANs and what are limitations](https://github.com/jokob-sk/NetAlertX/blob/main/docs/SUBNETS.md) for troubleshooting and more advanced scenarios.
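As an illustration only (the exact value depends on your network; the subnet and interface below are invented), a `SCAN_SUBNETS` entry in `app.conf` typically names a CIDR range plus the interface to scan:

```
SCAN_SUBNETS=['192.168.1.0/24 --interface=eth0']
```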
@@ -278,9 +278,8 @@ Run the container with the `--user "0"` parameter. Please note, some systems wil

```sh
docker run -it --rm --name netalertx --user "0" \
  -v /local_data_dir/config:/app/config \
  -v /local_data_dir/db:/app/db \
  -v /local_data_dir:/data \
  -v /local_data_dir/config:/data/config \
  -v /local_data_dir/db:/data/db \
  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
  ghcr.io/jokob-sk/netalertx:latest
```
@@ -1,29 +1,8 @@

# Integration with PiHole

NetAlertX comes with 3 plugins suitable for integrating with your existing PiHole instance. The first plugin uses the v6 API, the second plugin uses a direct SQLite DB connection, and the other leverages the `DHCP.leases` file generated by PiHole. You can combine multiple approaches and also supplement scans with other [plugins](/docs/PLUGINS.md).
NetAlertX comes with 2 plugins suitable for integrating with your existing PiHole instance. One plugin uses a direct SQLite DB connection, the other leverages the DHCP.leases file generated by PiHole. You can combine both approaches and also supplement them with other [plugins](/docs/PLUGINS.md).

## Approach 1: `PIHOLEAPI` Plugin - Import devices directly from PiHole v6 API



To use this approach make sure the Web UI password in **Pi-hole** is set.

| Setting | Description | Recommended value |
| :------------- | :------------- | :-------------|
| `PIHOLEAPI_URL` | Your Pi-hole base URL including port. | `http://192.168.1.82:9880/` |
| `PIHOLEAPI_RUN_SCHD` | If you run multiple device scanner plugins, align the schedules of all plugins to the same value. | `*/5 * * * *` |
| `PIHOLEAPI_PASSWORD` | The Web UI base64-encoded (en-/decoding handled by the app) admin password. | `passw0rd` |
| `PIHOLEAPI_SSL_VERIFY` | Whether to verify HTTPS certificates. Disable only for self-signed certificates. | `False` |
| `PIHOLEAPI_API_MAXCLIENTS` | Maximum number of devices to request from Pi-hole. Defaults are usually fine. | `500` |
| `PIHOLEAPI_FAKE_MAC` | Generate a FAKE MAC from the IP. | `False` |

Check the [PiHole API plugin readme](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/pihole_api_scan/) for details and troubleshooting.

### docker-compose changes

No changes needed

## Approach 2: `DHCPLSS` Plugin - Import devices from the PiHole DHCP leases file
## Approach 1: `DHCPLSS` Plugin - Import devices from the PiHole DHCP leases file



@@ -44,7 +23,7 @@ Check the [DHCPLSS plugin readme](https://github.com/jokob-sk/NetAlertX/tree/mai

| `:/etc/pihole/dhcp.leases` | PiHole's `dhcp.leases` file. Required if you want to use PiHole's `dhcp.leases` file. This has to be matched with a corresponding `DHCPLSS_paths_to_check` setting entry (the path in the container must contain `pihole`) |

## Approach 3: `PIHOLE` Plugin - Import devices directly from the PiHole database
## Approach 2: `PIHOLE` Plugin - Import devices directly from the PiHole database


Binary file not shown (removed image, 117 KiB).
@@ -12,7 +12,6 @@ from plugin_helper import Plugin_Objects  # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger  # noqa: E402 [flake8 lint suppression]
from const import logPath  # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value  # noqa: E402 [flake8 lint suppression]
from database import DB  # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance  # noqa: E402 [flake8 lint suppression]
import conf  # noqa: E402 [flake8 lint suppression]
from pytz import timezone  # noqa: E402 [flake8 lint suppression]

@@ -98,9 +97,7 @@ def main():
            {"devMac": "00:11:22:33:44:57", "devLastIP": "192.168.1.82"},
        ]
    else:
        db = DB()
        db.open()
        device_handler = DeviceInstance(db)
        device_handler = DeviceInstance()
        devices = (
            device_handler.getAll()
            if get_setting_value("REFRESH_FQDN")
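Across this and the following plugin diffs the refactor is the same: `DeviceInstance` now manages its own database connection, so callers no longer construct and open a `DB` instance themselves. A minimal sketch of the new calling convention (method names taken from these diffs):

```python
from models.device_instance import DeviceInstance

# DeviceInstance() opens its own DB connection internally;
# no db = DB(); db.open() boilerplate is needed anymore
device_handler = DeviceInstance()
devices = device_handler.getAll()  # retrieve all known devices
```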
@@ -11,7 +11,6 @@ from plugin_helper import Plugin_Objects  # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger  # noqa: E402 [flake8 lint suppression]
from const import logPath  # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value  # noqa: E402 [flake8 lint suppression]
from database import DB  # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance  # noqa: E402 [flake8 lint suppression]
import conf  # noqa: E402 [flake8 lint suppression]
from pytz import timezone  # noqa: E402 [flake8 lint suppression]

@@ -38,15 +37,11 @@ def main():

    timeout = get_setting_value('DIGSCAN_RUN_TIMEOUT')

    # Create a database connection
    db = DB()  # instance of class DB
    db.open()

    # Initialize the Plugin obj output file
    plugin_objects = Plugin_Objects(RESULT_FILE)

    # Create a DeviceInstance instance
    device_handler = DeviceInstance(db)
    device_handler = DeviceInstance()

    # Retrieve devices
    if get_setting_value("REFRESH_FQDN"):
@@ -15,7 +15,6 @@ from plugin_helper import Plugin_Objects  # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger  # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value  # noqa: E402 [flake8 lint suppression]
from const import logPath  # noqa: E402 [flake8 lint suppression]
from database import DB  # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance  # noqa: E402 [flake8 lint suppression]
import conf  # noqa: E402 [flake8 lint suppression]
from pytz import timezone  # noqa: E402 [flake8 lint suppression]

@@ -41,15 +40,11 @@ def main():
    args = get_setting_value('ICMP_ARGS')
    in_regex = get_setting_value('ICMP_IN_REGEX')

    # Create a database connection
    db = DB()  # instance of class DB
    db.open()

    # Initialize the Plugin obj output file
    plugin_objects = Plugin_Objects(RESULT_FILE)

    # Create a DeviceInstance instance
    device_handler = DeviceInstance(db)
    device_handler = DeviceInstance()

    # Retrieve devices
    all_devices = device_handler.getAll()
@@ -12,7 +12,6 @@ from plugin_helper import Plugin_Objects  # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger  # noqa: E402 [flake8 lint suppression]
from const import logPath  # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value  # noqa: E402 [flake8 lint suppression]
from database import DB  # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance  # noqa: E402 [flake8 lint suppression]
import conf  # noqa: E402 [flake8 lint suppression]
from pytz import timezone  # noqa: E402 [flake8 lint suppression]

@@ -40,15 +39,11 @@ def main():
    # timeout = get_setting_value('NBLOOKUP_RUN_TIMEOUT')
    timeout = 20

    # Create a database connection
    db = DB()  # instance of class DB
    db.open()

    # Initialize the Plugin obj output file
    plugin_objects = Plugin_Objects(RESULT_FILE)

    # Create a DeviceInstance instance
    device_handler = DeviceInstance(db)
    device_handler = DeviceInstance()

    # Retrieve devices
    if get_setting_value("REFRESH_FQDN"):
@@ -15,7 +15,6 @@ from plugin_helper import Plugin_Objects  # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger  # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value  # noqa: E402 [flake8 lint suppression]
from const import logPath  # noqa: E402 [flake8 lint suppression]
from database import DB  # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance  # noqa: E402 [flake8 lint suppression]
import conf  # noqa: E402 [flake8 lint suppression]
from pytz import timezone  # noqa: E402 [flake8 lint suppression]

@@ -39,15 +38,11 @@ def main():

    timeout = get_setting_value('NSLOOKUP_RUN_TIMEOUT')

    # Create a database connection
    db = DB()  # instance of class DB
    db.open()

    # Initialize the Plugin obj output file
    plugin_objects = Plugin_Objects(RESULT_FILE)

    # Create a DeviceInstance instance
    device_handler = DeviceInstance(db)
    device_handler = DeviceInstance()

    # Retrieve devices
    if get_setting_value("REFRESH_FQDN"):
@@ -256,13 +256,11 @@ def main():
    start_time = time.time()

    mylog("verbose", [f"[{pluginName}] starting execution"])
    from database import DB

    from models.device_instance import DeviceInstance

    db = DB()  # instance of class DB
    db.open()
    # Create a DeviceInstance instance
    device_handler = DeviceInstance(db)
    device_handler = DeviceInstance()
    # Retrieve configuration settings
    # these should be self-explanatory
    omada_sites = []
@@ -13,7 +13,6 @@ from plugin_helper import Plugin_Objects  # noqa: E402 [flake8 lint suppression]
from logger import mylog, Logger  # noqa: E402 [flake8 lint suppression]
from const import logPath  # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value  # noqa: E402 [flake8 lint suppression]
from database import DB  # noqa: E402 [flake8 lint suppression]
from models.device_instance import DeviceInstance  # noqa: E402 [flake8 lint suppression]
import conf  # noqa: E402 [flake8 lint suppression]

@@ -44,12 +43,8 @@ def main():

    mylog('verbose', [f'[{pluginName}] broadcast_ips value {broadcast_ips}'])

    # Create a database connection
    db = DB()  # instance of class DB
    db.open()

    # Create a DeviceInstance instance
    device_handler = DeviceInstance(db)
    device_handler = DeviceInstance()

    # Retrieve devices
    if 'offline' in devices_to_wake:
@@ -1,35 +0,0 @@
#!/bin/sh
# 02-ensure-folders.sh - ensure /config and /db exist under /data

set -eu

YELLOW=$(printf '\033[1;33m')
CYAN=$(printf '\033[1;36m')
RED=$(printf '\033[1;31m')
RESET=$(printf '\033[0m')

DATA_DIR=${NETALERTX_DATA:-/data}
TARGET_CONFIG=${NETALERTX_CONFIG:-${DATA_DIR}/config}
TARGET_DB=${NETALERTX_DB:-${DATA_DIR}/db}

ensure_folder() {
    my_path="$1"
    if [ ! -d "${my_path}" ]; then
        >&2 printf "%s" "${CYAN}"
        >&2 echo "Creating missing folder: ${my_path}"
        >&2 printf "%s" "${RESET}"
        mkdir -p "${my_path}" || {
            >&2 printf "%s" "${RED}"
            >&2 echo "❌ Failed to create folder: ${my_path}"
            >&2 printf "%s" "${RESET}"
            exit 1
        }
        chmod 700 "${my_path}" 2>/dev/null || true
    fi
}

# Ensure subfolders exist
ensure_folder "${TARGET_CONFIG}"
ensure_folder "${TARGET_DB}"

exit 0
@@ -1,35 +0,0 @@
#!/bin/sh
# override-config.sh - Handles APP_CONF_OVERRIDE environment variable

OVERRIDE_FILE="${NETALERTX_CONFIG}/app_conf_override.json"

# Ensure config directory exists
mkdir -p "$(dirname "$NETALERTX_CONFIG")" || {
    >&2 echo "ERROR: Failed to create config directory $(dirname "$NETALERTX_CONFIG")"
    exit 1
}

# Remove old override file if it exists
rm -f "$OVERRIDE_FILE"

# Check if APP_CONF_OVERRIDE is set
if [ -z "$APP_CONF_OVERRIDE" ]; then
    >&2 echo "APP_CONF_OVERRIDE is not set. Skipping override config file creation."
else
    # Save the APP_CONF_OVERRIDE env variable as a JSON file
    echo "$APP_CONF_OVERRIDE" > "$OVERRIDE_FILE" || {
        >&2 echo "ERROR: Failed to write override config to $OVERRIDE_FILE"
        exit 2
    }

    RESET=$(printf '\033[0m')
    >&2 cat <<EOF
══════════════════════════════════════════════════════════════════════════════
📝 APP_CONF_OVERRIDE detected. Configuration written to $OVERRIDE_FILE.

Make sure the JSON content is correct before starting the application.
══════════════════════════════════════════════════════════════════════════════
EOF

    >&2 printf "%s" "${RESET}"
fi
@@ -5,22 +5,22 @@

# Define ports from ENV variables, applying defaults
PORT_APP=${PORT:-20211}
# PORT_GQL=${APP_CONF_OVERRIDE:-${GRAPHQL_PORT:-20212}}
PORT_GQL=${APP_CONF_OVERRIDE:-${GRAPHQL_PORT:-20212}}

# # Check if ports are configured to be the same
# if [ "$PORT_APP" -eq "$PORT_GQL" ]; then
#     cat <<EOF
# ══════════════════════════════════════════════════════════════════════════════
# ⚠️ Configuration Warning: Both ports are set to ${PORT_APP}.
# Check if ports are configured to be the same
if [ "$PORT_APP" -eq "$PORT_GQL" ]; then
    cat <<EOF
══════════════════════════════════════════════════════════════════════════════
⚠️ Configuration Warning: Both ports are set to ${PORT_APP}.

# The Application port (\$PORT) and the GraphQL API port
# (\$APP_CONF_OVERRIDE or \$GRAPHQL_PORT) are configured to use the
# same port. This will cause a conflict.
The Application port (\$PORT) and the GraphQL API port
(\$APP_CONF_OVERRIDE or \$GRAPHQL_PORT) are configured to use the
same port. This will cause a conflict.

# https://github.com/jokob-sk/NetAlertX/blob/main/docs/docker-troubleshooting/port-conflicts.md
# ══════════════════════════════════════════════════════════════════════════════
# EOF
# fi
https://github.com/jokob-sk/NetAlertX/blob/main/docs/docker-troubleshooting/port-conflicts.md
══════════════════════════════════════════════════════════════════════════════
EOF
fi

# Check for netstat (usually provided by busybox)
if ! command -v netstat >/dev/null 2>&1; then

@@ -53,17 +53,17 @@ if echo "$LISTENING_PORTS" | grep -q ":${PORT_APP}$"; then
EOF
fi

# # Check GraphQL Port
# # We add a check to avoid double-warning if ports are identical AND in use
# if [ "$PORT_APP" -ne "$PORT_GQL" ] && echo "$LISTENING_PORTS" | grep -q ":${PORT_GQL}$"; then
#     cat <<EOF
# ══════════════════════════════════════════════════════════════════════════════
# ⚠️ Port Warning: GraphQL API port ${PORT_GQL} is already in use.
# Check GraphQL Port
# We add a check to avoid double-warning if ports are identical AND in use
if [ "$PORT_APP" -ne "$PORT_GQL" ] && echo "$LISTENING_PORTS" | grep -q ":${PORT_GQL}$"; then
    cat <<EOF
══════════════════════════════════════════════════════════════════════════════
⚠️ Port Warning: GraphQL API port ${PORT_GQL} is already in use.

# The GraphQL API (defined by \$APP_CONF_OVERRIDE or \$GRAPHQL_PORT)
# may fail to start.
The GraphQL API (defined by \$APP_CONF_OVERRIDE or \$GRAPHQL_PORT)
may fail to start.

# https://github.com/jokob-sk/NetAlertX/blob/main/docs/docker-troubleshooting/port-conflicts.md
# ══════════════════════════════════════════════════════════════════════════════
# EOF
# fi
https://github.com/jokob-sk/NetAlertX/blob/main/docs/docker-troubleshooting/port-conflicts.md
══════════════════════════════════════════════════════════════════════════════
EOF
fi
mkdocs.yml

@@ -98,6 +98,7 @@ nav:
    - Sync: API_SYNC.md
    - GraphQL: API_GRAPHQL.md
    - DB query: API_DBQUERY.md
    - MCP: API_MCP.md
    - Tests: API_TESTS.md
    - SUPERSEDED OLD API Overview: API_OLD.md
  - Integrations:
requirements.txt

@@ -1,3 +1,4 @@
cryptography<40
openwrt-luci-rpc
asusrouter
aiohttp

@@ -30,3 +31,4 @@ urllib3
httplib2
gunicorn
git+https://github.com/foreign-sub/aiofreepybox.git
mcp
server/api_server/api_server_start.py

@@ -3,11 +3,12 @@ import sys
import os

from flask import Flask, request, jsonify, Response
from models.device_instance import DeviceInstance  # noqa: E402
from flask_cors import CORS

# Register NetAlertX directories
INSTALL_PATH = os.getenv("NETALERTX_APP", "/app")
sys.path.extend([f"{INSTALL_PATH}/server"])
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])

from logger import mylog  # noqa: E402 [flake8 lint suppression]
from helper import get_setting_value  # noqa: E402 [flake8 lint suppression]

@@ -63,6 +64,12 @@ from .dbquery_endpoint import read_query, write_query, update_query, delete_quer
from .sync_endpoint import handle_sync_post, handle_sync_get  # noqa: E402 [flake8 lint suppression]
from .logs_endpoint import clean_log  # noqa: E402 [flake8 lint suppression]
from models.user_events_queue_instance import UserEventsQueueInstance  # noqa: E402 [flake8 lint suppression]

from models.event_instance import EventInstance  # noqa: E402 [flake8 lint suppression]
# Import tool logic from the MCP/tools module to reuse behavior (no blueprints)
from plugin_helper import is_mac  # noqa: E402 [flake8 lint suppression]
# is_mac is provided in mcp_endpoint and used by those handlers
# mcp_endpoint contains helper functions; routes moved into this module to keep a single place for routes
from messaging.in_app import (  # noqa: E402 [flake8 lint suppression]
    write_notification,
    mark_all_notifications_read,

@@ -71,9 +78,17 @@ from messaging.in_app import (  # noqa: E402 [flake8 lint suppression]
    delete_notification,
    mark_notification_as_read
)
from .mcp_endpoint import (  # noqa: E402 [flake8 lint suppression]
    mcp_sse,
    mcp_messages,
    openapi_spec
)
# tools and mcp routes have been moved into this module (api_server_start)

# Flask application
app = Flask(__name__)
CORS(
    app,
    resources={

@@ -88,22 +103,61 @@ CORS(
        r"/messaging/*": {"origins": "*"},
        r"/events/*": {"origins": "*"},
        r"/logs/*": {"origins": "*"},
        r"/auth/*": {"origins": "*"}
        r"/api/tools/*": {"origins": "*"},
        r"/auth/*": {"origins": "*"},
        r"/mcp/*": {"origins": "*"}
    },
    supports_credentials=True,
    allow_headers=["Authorization", "Content-Type"],
)

# -------------------------------------------------------------------------------
# MCP bridge variables + helpers (moved from mcp_routes)
# -------------------------------------------------------------------------------

BACKEND_PORT = get_setting_value("GRAPHQL_PORT")
API_BASE_URL = f"http://localhost:{BACKEND_PORT}"


@app.route('/mcp/sse', methods=['GET', 'POST'])
def api_mcp_sse():
    if not is_authorized():
        return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
    return mcp_sse()


@app.route('/api/mcp/messages', methods=['POST'])
def api_mcp_messages():
    if not is_authorized():
        return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
    return mcp_messages()


# -------------------------------------------------------------------
# Custom handler for 404 - Route not found
# -------------------------------------------------------------------
@app.before_request
def log_request_info():
    """Log details of every incoming request."""
    # Filter out noisy requests if needed, but user asked for drastic logging
    mylog("verbose", [f"[HTTP] {request.method} {request.path} from {request.remote_addr}"])
    # Filter sensitive headers before logging
    safe_headers = {k: v for k, v in request.headers if k.lower() not in ('authorization', 'cookie', 'x-api-key')}
    mylog("debug", [f"[HTTP] Headers: {safe_headers}"])
    if request.method == "POST":
        # Be careful with large bodies, but log first 1000 chars
        data = request.get_data(as_text=True)
        mylog("debug", [f"[HTTP] Body length: {len(data)} chars"])


@app.errorhandler(404)
def not_found(error):
    # Get the requested path from the request object instead of error.description
    requested_url = request.path if request else "unknown"
    response = {
        "success": False,
        "error": "API route not found",
        "message": f"The requested URL {error.description if hasattr(error, 'description') else ''} was not found on the server.",
        "message": f"The requested URL {requested_url} was not found on the server.",
    }
    return jsonify(response), 404
@@ -126,7 +180,7 @@ def graphql_endpoint():
    if not is_authorized():
        msg = '[graphql_server] Unauthorized access attempt - make sure your GRAPHQL_PORT and API_TOKEN settings are correct.'
        mylog('verbose', [msg])
        return jsonify({"success": False, "message": msg, "error": "Forbidden"}), 401
        return jsonify({"success": False, "message": msg, "error": "Forbidden"}), 403

    # Retrieve and log request data
    data = request.get_json()

@@ -146,11 +200,12 @@ def graphql_endpoint():
    return jsonify(response)


# Tools endpoints are registered via `mcp_endpoint.tools_bp` blueprint.


# --------------------------
# Settings Endpoints
# --------------------------


@app.route("/settings/<setKey>", methods=["GET"])
def api_get_setting(setKey):
    if not is_authorized():

@@ -162,8 +217,7 @@ def api_get_setting(setKey):
# --------------------------
# Device Endpoints
# --------------------------

@app.route('/mcp/sse/device/<mac>', methods=['GET', 'POST'])
@app.route("/device/<mac>", methods=["GET"])
def api_get_device(mac):
    if not is_authorized():

@@ -229,11 +283,45 @@ def api_update_device_column(mac):
    return update_device_column(mac, column_name, column_value)


@app.route('/mcp/sse/device/<mac>/set-alias', methods=['POST'])
@app.route('/device/<mac>/set-alias', methods=['POST'])
def api_device_set_alias(mac):
    """Set the device alias - convenience wrapper around update_device_column."""
    if not is_authorized():
        return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
    data = request.get_json() or {}
    alias = data.get('alias')
    if not alias:
        return jsonify({"success": False, "message": "ERROR: Missing parameters", "error": "alias is required"}), 400
    return update_device_column(mac, 'devName', alias)


@app.route('/mcp/sse/device/open_ports', methods=['POST'])
@app.route('/device/open_ports', methods=['POST'])
def api_device_open_ports():
    """Get stored NMAP open ports for a target IP or MAC."""
    if not is_authorized():
        return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403

    data = request.get_json(silent=True) or {}
    target = data.get('target')
    if not target:
        return jsonify({"success": False, "error": "Target (IP or MAC) is required"}), 400

    device_handler = DeviceInstance()

    # Use DeviceInstance method to get stored open ports
    open_ports = device_handler.getOpenPorts(target)

    if not open_ports:
        return jsonify({"success": False, "error": f"No stored open ports for {target}. Run a scan with `/nettools/trigger-scan`"}), 404

    return jsonify({"success": True, "target": target, "open_ports": open_ports})
# --------------------------
# Devices Collections
# --------------------------


@app.route("/devices", methods=["GET"])
def api_get_devices():
    if not is_authorized():

@@ -289,6 +377,7 @@ def api_devices_totals():
    return devices_totals()


@app.route('/mcp/sse/devices/by-status', methods=['GET', 'POST'])
@app.route("/devices/by-status", methods=["GET"])
def api_devices_by_status():
    if not is_authorized():

@@ -299,15 +388,93 @@ def api_devices_by_status():
    return devices_by_status(status)


@app.route('/mcp/sse/devices/search', methods=['POST'])
@app.route('/devices/search', methods=['POST'])
def api_devices_search():
    """Device search: accepts 'query' in JSON and maps to device info/search."""
    if not is_authorized():
        return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403

    data = request.get_json(silent=True) or {}
    query = data.get('query')

    if not query:
        return jsonify({"error": "Missing 'query' parameter"}), 400

    if is_mac(query):
        device_data = get_device_data(query)
        if device_data.status_code == 200:
            return jsonify({"success": True, "devices": [device_data.get_json()]})
        else:
            return jsonify({"success": False, "error": "Device not found"}), 404

    # Create fresh DB instance for this thread
    device_handler = DeviceInstance()

    matches = device_handler.search(query)

    if not matches:
        return jsonify({"success": False, "error": "No devices found"}), 404

    return jsonify({"success": True, "devices": matches})


@app.route('/mcp/sse/devices/latest', methods=['GET'])
@app.route('/devices/latest', methods=['GET'])
def api_devices_latest():
    """Get latest device (most recent) - maps to DeviceInstance.getLatest()."""
    if not is_authorized():
        return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403

    device_handler = DeviceInstance()

    latest = device_handler.getLatest()

    if not latest:
        return jsonify({"message": "No devices found"}), 404
    return jsonify([latest])


@app.route('/mcp/sse/devices/network/topology', methods=['GET'])
@app.route('/devices/network/topology', methods=['GET'])
def api_devices_network_topology():
    """Network topology mapping."""
    if not is_authorized():
        return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403

    device_handler = DeviceInstance()

    result = device_handler.getNetworkTopology()

    return jsonify(result)
# --------------------------
# Net tools
# --------------------------
@app.route('/mcp/sse/nettools/wakeonlan', methods=['POST'])
@app.route("/nettools/wakeonlan", methods=["POST"])
def api_wakeonlan():
    if not is_authorized():
        return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403

    mac = request.json.get("devMac")
    data = request.json or {}
    mac = data.get("devMac")
    ip = data.get("devLastIP") or data.get('ip')
    if not mac and ip:

        device_handler = DeviceInstance()

        dev = device_handler.getByIP(ip)

        if not dev or not dev.get('devMac'):
            return jsonify({"success": False, "message": "ERROR: Device not found", "error": "MAC not resolved"}), 404
        mac = dev.get('devMac')

    # Validate that we have a valid MAC address
    if not mac:
        return jsonify({"success": False, "message": "ERROR: Missing device MAC or IP", "error": "Bad Request"}), 400

    return wakeonlan(mac)


@@ -368,11 +535,42 @@ def api_internet_info():
    return internet_info()


@app.route('/mcp/sse/nettools/trigger-scan', methods=['POST'])
@app.route("/nettools/trigger-scan", methods=["GET"])
def api_trigger_scan():
    if not is_authorized():
        return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403

    data = request.get_json(silent=True) or {}
    scan_type = data.get('type', 'ARPSCAN')

    # Validate scan type
    loaded_plugins = get_setting_value('LOADED_PLUGINS')
    if scan_type not in loaded_plugins:
        return jsonify({"success": False, "error": f"Invalid scan type. Must be one of: {', '.join(loaded_plugins)}"}), 400

    queue = UserEventsQueueInstance()

    action = f"run|{scan_type}"

    queue.add_event(action)

    return jsonify({"success": True, "message": f"Scan triggered for type: {scan_type}"}), 200
# --------------------------
|
||||
# MCP Server
|
||||
# --------------------------
|
||||
@app.route('/mcp/sse/openapi.json', methods=['GET'])
|
||||
def api_openapi_spec():
|
||||
if not is_authorized():
|
||||
return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
|
||||
return openapi_spec()
|
||||
|
||||
|
||||
# --------------------------
|
||||
# DB query
|
||||
# --------------------------
|
||||
|
||||
|
||||
@app.route("/dbquery/read", methods=["POST"])
|
||||
def dbquery_read():
|
||||
if not is_authorized():
|
||||
@@ -395,6 +593,7 @@ def dbquery_write():
|
||||
data = request.get_json() or {}
|
||||
raw_sql_b64 = data.get("rawSql")
|
||||
if not raw_sql_b64:
|
||||
|
||||
return jsonify({"success": False, "message": "ERROR: Missing parameters", "error": "rawSql is required"}), 400
|
||||
|
||||
return write_query(raw_sql_b64)
|
||||
@@ -460,11 +659,13 @@ def api_delete_online_history():
|
||||
|
||||
@app.route("/logs", methods=["DELETE"])
|
||||
def api_clean_log():
|
||||
|
||||
if not is_authorized():
|
||||
return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
|
||||
|
||||
file = request.args.get("file")
|
||||
if not file:
|
||||
|
||||
return jsonify({"success": False, "message": "ERROR: Missing parameters", "error": "Missing 'file' query parameter"}), 400
|
||||
|
||||
return clean_log(file)
|
||||
@@ -499,8 +700,6 @@ def api_add_to_execution_queue():
|
||||
# --------------------------
|
||||
# Device Events
|
||||
# --------------------------
|
||||
|
||||
|
||||
@app.route("/events/create/<mac>", methods=["POST"])
|
||||
def api_create_event(mac):
|
||||
if not is_authorized():
|
||||
@@ -564,6 +763,45 @@ def api_get_events_totals():
|
||||
return get_events_totals(period)
|
||||
|
||||
|
||||
@app.route('/mcp/sse/events/recent', methods=['GET', 'POST'])
@app.route('/events/recent', methods=['GET'])
def api_events_default_24h():
    return api_events_recent(24)  # Reuse handler


@app.route('/mcp/sse/events/last', methods=['GET', 'POST'])
@app.route('/events/last', methods=['GET'])
def get_last_events():
    if not is_authorized():
        return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403
    # Create fresh DB instance for this thread
    event_handler = EventInstance()

    events = event_handler.get_last_n(10)
    return jsonify({"success": True, "count": len(events), "events": events}), 200


@app.route('/events/<int:hours>', methods=['GET'])
def api_events_recent(hours):
    """Return events from the last <hours> hours using EventInstance."""

    if not is_authorized():
        return jsonify({"success": False, "message": "ERROR: Not authorized", "error": "Forbidden"}), 403

    # Validate hours input
    if hours <= 0:
        return jsonify({"success": False, "error": "Hours must be > 0"}), 400
    try:
        # Create fresh DB instance for this thread
        event_handler = EventInstance()

        events = event_handler.get_by_hours(hours)

        return jsonify({"success": True, "hours": hours, "count": len(events), "events": events}), 200

    except Exception as ex:
        return jsonify({"success": False, "error": str(ex)}), 500

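A quick sketch of querying the new event routes; `/events/recent` is just shorthand for `/events/24`, and the host, port, and token are placeholders:

```python
import requests

BASE_URL = "http://localhost:20212"   # placeholder host/port
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}  # placeholder token

# Events from the last 12 hours
resp = requests.get(f"{BASE_URL}/events/12", headers=HEADERS, timeout=10)
body = resp.json()
if body["success"]:
    print(f"{body['count']} events in the last {body['hours']}h")
```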
# --------------------------
# Sessions
# --------------------------
@@ -793,3 +1031,9 @@ def start_server(graphql_port, app_state):

    # Update the state to indicate the server has started
    app_state = updateState("Process: Idle", None, None, None, 1)


if __name__ == "__main__":
    # This block is for running the server directly for testing purposes
    # In production, start_server is called from api.py
    pass

@@ -228,7 +228,8 @@ def devices_totals():

 def devices_by_status(status=None):
     """
-    Return devices filtered by status.
+    Return devices filtered by status. Returns all if no status provided.
+    Possible statuses: my, connected, favorites, new, down, archived
     """

     conn = get_temp_db_connection()
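For reference, a hedged sketch of exercising `devices_by_status` over HTTP. How the route binds `status` (query parameter vs. path segment) is an assumption here; only the route name and the statuses listed in the docstring are grounded in the code:

```python
import requests

BASE_URL = "http://localhost:20212"   # placeholder host/port
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}  # placeholder token

# Assumed call shape: status passed as a query parameter; omit it to get all devices
for status in ("connected", "new", "down"):
    r = requests.get(f"{BASE_URL}/devices/by-status",
                     params={"status": status}, headers=HEADERS, timeout=10)
    print(status, r.status_code)
```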
207  server/api_server/mcp_endpoint.py  Normal file
@@ -0,0 +1,207 @@
#!/usr/bin/env python

import threading
from flask import Blueprint, request, jsonify, Response, stream_with_context
from helper import get_setting_value
from helper import mylog
# from .events_endpoint import get_events # will import locally where needed
import requests
import json
import uuid
import queue

# Blueprints
mcp_bp = Blueprint('mcp', __name__)
tools_bp = Blueprint('tools', __name__)

mcp_sessions = {}
mcp_sessions_lock = threading.Lock()


def check_auth():
    token = request.headers.get("Authorization")
    expected_token = f"Bearer {get_setting_value('API_TOKEN')}"
    return token == expected_token


# --------------------------
# Specs
# --------------------------
def openapi_spec():
    # Spec matching actual available routes for MCP tools
    mylog("verbose", ["[MCP] OpenAPI spec requested"])
    spec = {
        "openapi": "3.0.0",
        "info": {"title": "NetAlertX Tools", "version": "1.1.0"},
        "servers": [{"url": "/"}],
        "paths": {
            "/devices/by-status": {"post": {"operationId": "list_devices"}},
            "/device/{mac}": {"post": {"operationId": "get_device_info"}},
            "/devices/search": {"post": {"operationId": "search_devices"}},
            "/devices/latest": {"get": {"operationId": "get_latest_device"}},
            "/nettools/trigger-scan": {"post": {"operationId": "trigger_scan"}},
            "/device/open_ports": {"post": {"operationId": "get_open_ports"}},
            "/devices/network/topology": {"get": {"operationId": "get_network_topology"}},
            "/events/recent": {"get": {"operationId": "get_recent_alerts"}, "post": {"operationId": "get_recent_alerts"}},
            "/events/last": {"get": {"operationId": "get_last_events"}, "post": {"operationId": "get_last_events"}},
            "/device/{mac}/set-alias": {"post": {"operationId": "set_device_alias"}},
            "/nettools/wakeonlan": {"post": {"operationId": "wol_wake_device"}}
        }
    }
    return jsonify(spec)


# --------------------------
# MCP SSE/JSON-RPC Endpoint
# --------------------------


# Sessions for SSE
_openapi_spec_cache = None
API_BASE_URL = f"http://localhost:{get_setting_value('GRAPHQL_PORT')}"


def get_openapi_spec():
    global _openapi_spec_cache

    if _openapi_spec_cache:
        return _openapi_spec_cache
    try:
        r = requests.get(f"{API_BASE_URL}/mcp/openapi.json", timeout=10)
        r.raise_for_status()
        _openapi_spec_cache = r.json()
        return _openapi_spec_cache
    except Exception as e:
        mylog("none", [f"[MCP] Failed to fetch OpenAPI spec: {e}"])
        return None


def map_openapi_to_mcp_tools(spec):
    tools = []
    if not spec or 'paths' not in spec:
        return tools
    for path, methods in spec['paths'].items():
        for method, details in methods.items():
            if 'operationId' in details:
                tool = {'name': details['operationId'], 'description': details.get('description', ''), 'inputSchema': {'type': 'object', 'properties': {}, 'required': []}}
                if 'requestBody' in details:
                    content = details['requestBody'].get('content', {})
                    if 'application/json' in content:
                        schema = content['application/json'].get('schema', {})
                        tool['inputSchema'] = schema.copy()
                if 'parameters' in details:
                    for param in details['parameters']:
                        if param.get('in') == 'query':
                            tool['inputSchema']['properties'][param['name']] = {'type': param.get('schema', {}).get('type', 'string'), 'description': param.get('description', '')}
                            if param.get('required'):
                                tool['inputSchema']['required'].append(param['name'])
                tools.append(tool)
    return tools


def process_mcp_request(data):
    method = data.get('method')
    msg_id = data.get('id')
    if method == 'initialize':
        return {'jsonrpc': '2.0', 'id': msg_id, 'result': {'protocolVersion': '2024-11-05', 'capabilities': {'tools': {}}, 'serverInfo': {'name': 'NetAlertX', 'version': '1.0.0'}}}
    if method == 'notifications/initialized':
        return None
    if method == 'tools/list':
        spec = get_openapi_spec()
        tools = map_openapi_to_mcp_tools(spec)
        return {'jsonrpc': '2.0', 'id': msg_id, 'result': {'tools': tools}}
    if method == 'tools/call':
        params = data.get('params', {})
        tool_name = params.get('name')
        tool_args = params.get('arguments', {})
        spec = get_openapi_spec()
        target_path = None
        target_method = None
        if spec and 'paths' in spec:
            for path, methods in spec['paths'].items():
                for m, details in methods.items():
                    if details.get('operationId') == tool_name:
                        target_path = path
                        target_method = m.upper()
                        break
                if target_path:
                    break
        if not target_path:
            return {'jsonrpc': '2.0', 'id': msg_id, 'error': {'code': -32601, 'message': f"Tool {tool_name} not found"}}
        try:
            headers = {'Content-Type': 'application/json'}
            if 'Authorization' in request.headers:
                headers['Authorization'] = request.headers['Authorization']
            url = f"{API_BASE_URL}{target_path}"
            if target_method == 'POST':
                api_res = requests.post(url, json=tool_args, headers=headers, timeout=30)
            else:
                api_res = requests.get(url, params=tool_args, headers=headers, timeout=30)
            content = []
            try:
                json_content = api_res.json()
                content.append({'type': 'text', 'text': json.dumps(json_content, indent=2)})
            except Exception as e:
                mylog("none", [f"[MCP] Failed to parse API response as JSON: {e}"])
                content.append({'type': 'text', 'text': api_res.text})
            is_error = api_res.status_code >= 400
            return {'jsonrpc': '2.0', 'id': msg_id, 'result': {'content': content, 'isError': is_error}}
        except Exception as e:
            mylog("none", [f"[MCP] Error calling tool {tool_name}: {e}"])
            return {'jsonrpc': '2.0', 'id': msg_id, 'result': {'content': [{'type': 'text', 'text': f"Error calling tool: {str(e)}"}], 'isError': True}}
    if method == 'ping':
        return {'jsonrpc': '2.0', 'id': msg_id, 'result': {}}
    if msg_id:
        return {'jsonrpc': '2.0', 'id': msg_id, 'error': {'code': -32601, 'message': 'Method not found'}}


def mcp_messages():
    session_id = request.args.get('session_id')
    if not session_id:
        return jsonify({"error": "Missing session_id"}), 400
    with mcp_sessions_lock:
        if session_id not in mcp_sessions:
            return jsonify({"error": "Session not found"}), 404
        q = mcp_sessions[session_id]
    data = request.json
    if not data:
        return jsonify({"error": "Invalid JSON"}), 400
    response = process_mcp_request(data)
    if response:
        q.put(response)
    return jsonify({"status": "accepted"}), 202


def mcp_sse():
    if request.method == 'POST':
        try:
            data = request.get_json(silent=True)
            if data and 'method' in data and 'jsonrpc' in data:
                response = process_mcp_request(data)
                if response:
                    return jsonify(response)
                else:
                    return '', 202
        except Exception as e:
            mylog("none", [f"[MCP] SSE POST processing error: {e}"])
        return jsonify({'status': 'ok', 'message': 'MCP SSE endpoint active'}), 200

    session_id = uuid.uuid4().hex
    q = queue.Queue()
    with mcp_sessions_lock:
        mcp_sessions[session_id] = q

    def stream():
        yield f"event: endpoint\ndata: /mcp/messages?session_id={session_id}\n\n"
        try:
            while True:
                try:
                    message = q.get(timeout=20)
                    yield f"event: message\ndata: {json.dumps(message)}\n\n"
                except queue.Empty:
                    yield ": keep-alive\n\n"
        except GeneratorExit:
            with mcp_sessions_lock:
                if session_id in mcp_sessions:
                    del mcp_sessions[session_id]
    return Response(stream_with_context(stream()), mimetype='text/event-stream')
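To illustrate the JSON-RPC flow this module implements, a minimal client sketch using plain POSTs to the SSE route, assuming `mcp_sse()` is mounted at `/mcp/sse` (the mount point is not visible in this diff); host, port, and token are placeholders:

```python
import requests

BASE_URL = "http://localhost:20212"   # placeholder host/port
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}  # placeholder token

# tools/list: the server maps its OpenAPI operationIds to MCP tool descriptors
r = requests.post(f"{BASE_URL}/mcp/sse",
                  json={"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
                  headers=HEADERS, timeout=10)
print([t["name"] for t in r.json()["result"]["tools"]])

# tools/call: proxied to the matching REST route, here the trigger_scan tool
r = requests.post(f"{BASE_URL}/mcp/sse",
                  json={"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                        "params": {"name": "trigger_scan", "arguments": {"type": "ARPSCAN"}}},
                  headers=HEADERS, timeout=30)
print(r.json()["result"]["isError"])
```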
304  server/api_server/mcp_routes.py  Normal file
@@ -0,0 +1,304 @@
"""MCP bridge routes exposing NetAlertX tool endpoints via JSON-RPC."""
|
||||
|
||||
import json
|
||||
import uuid
|
||||
import queue
|
||||
import requests
|
||||
import threading
|
||||
import logging
|
||||
from flask import Blueprint, request, Response, stream_with_context, jsonify
|
||||
from helper import get_setting_value
|
||||
|
||||
mcp_bp = Blueprint('mcp', __name__)
|
||||
|
||||
# Store active sessions: session_id -> Queue
|
||||
sessions = {}
|
||||
sessions_lock = threading.Lock()
|
||||
|
||||
# Cache for OpenAPI spec to avoid fetching on every request
|
||||
openapi_spec_cache = None
|
||||
|
||||
BACKEND_PORT = get_setting_value("GRAPHQL_PORT")
|
||||
|
||||
API_BASE_URL = f"http://localhost:{BACKEND_PORT}/api/tools"
|
||||
|
||||
|
||||
def get_openapi_spec():
|
||||
"""Fetch and cache the tools OpenAPI specification from the local API server."""
|
||||
global openapi_spec_cache
|
||||
if openapi_spec_cache:
|
||||
return openapi_spec_cache
|
||||
|
||||
try:
|
||||
# Fetch from local server
|
||||
# We use localhost because this code runs on the server
|
||||
response = requests.get(f"{API_BASE_URL}/openapi.json", timeout=10)
|
||||
response.raise_for_status()
|
||||
openapi_spec_cache = response.json()
|
||||
return openapi_spec_cache
|
||||
except Exception as e:
|
||||
print(f"Error fetching OpenAPI spec: {e}")
|
||||
return None
|
||||
|
||||
|
||||
def map_openapi_to_mcp_tools(spec):
|
||||
"""Convert OpenAPI paths into MCP tool descriptors."""
|
||||
tools = []
|
||||
if not spec or "paths" not in spec:
|
||||
return tools
|
||||
|
||||
for path, methods in spec["paths"].items():
|
||||
for method, details in methods.items():
|
||||
if "operationId" in details:
|
||||
tool = {
|
||||
"name": details["operationId"],
|
||||
"description": details.get("description", details.get("summary", "")),
|
||||
"inputSchema": {
|
||||
"type": "object",
|
||||
"properties": {},
|
||||
"required": []
|
||||
}
|
||||
}
|
||||
|
||||
# Extract parameters from requestBody if present
|
||||
if "requestBody" in details:
|
||||
content = details["requestBody"].get("content", {})
|
||||
if "application/json" in content:
|
||||
schema = content["application/json"].get("schema", {})
|
||||
tool["inputSchema"] = schema.copy()
|
||||
if "properties" not in tool["inputSchema"]:
|
||||
tool["inputSchema"]["properties"] = {}
|
||||
if "required" not in tool["inputSchema"]:
|
||||
tool["inputSchema"]["required"] = []
|
||||
|
||||
# Extract parameters from 'parameters' list (query/path params) - simplistic support
|
||||
if "parameters" in details:
|
||||
for param in details["parameters"]:
|
||||
if param.get("in") == "query":
|
||||
tool["inputSchema"]["properties"][param["name"]] = {
|
||||
"type": param.get("schema", {}).get("type", "string"),
|
||||
"description": param.get("description", "")
|
||||
}
|
||||
if param.get("required"):
|
||||
if "required" not in tool["inputSchema"]:
|
||||
tool["inputSchema"]["required"] = []
|
||||
tool["inputSchema"]["required"].append(param["name"])
|
||||
|
||||
tools.append(tool)
|
||||
return tools
|
||||
|
||||
|
||||
def process_mcp_request(data):
|
||||
"""Handle incoming MCP JSON-RPC requests and route them to tools."""
|
||||
method = data.get("method")
|
||||
msg_id = data.get("id")
|
||||
|
||||
response = None
|
||||
|
||||
if method == "initialize":
|
||||
response = {
|
||||
"jsonrpc": "2.0",
|
||||
"id": msg_id,
|
||||
"result": {
|
||||
"protocolVersion": "2024-11-05",
|
||||
"capabilities": {
|
||||
"tools": {}
|
||||
},
|
||||
"serverInfo": {
|
||||
"name": "NetAlertX",
|
||||
"version": "1.0.0"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
elif method == "notifications/initialized":
|
||||
# No response needed for notification
|
||||
pass
|
||||
|
||||
elif method == "tools/list":
|
||||
spec = get_openapi_spec()
|
||||
tools = map_openapi_to_mcp_tools(spec)
|
||||
response = {
|
||||
"jsonrpc": "2.0",
|
||||
"id": msg_id,
|
||||
"result": {
|
||||
"tools": tools
|
||||
}
|
||||
}
|
||||
|
||||
elif method == "tools/call":
|
||||
params = data.get("params", {})
|
||||
tool_name = params.get("name")
|
||||
tool_args = params.get("arguments", {})
|
||||
|
||||
# Find the endpoint for this tool
|
||||
spec = get_openapi_spec()
|
||||
target_path = None
|
||||
target_method = None
|
||||
|
||||
if spec and "paths" in spec:
|
||||
for path, methods in spec["paths"].items():
|
||||
for m, details in methods.items():
|
||||
if details.get("operationId") == tool_name:
|
||||
target_path = path
|
||||
target_method = m.upper()
|
||||
break
|
||||
if target_path:
|
||||
break
|
||||
|
||||
if target_path:
|
||||
try:
|
||||
# Make the request to the local API
|
||||
# We forward the Authorization header from the incoming request if present
|
||||
headers = {
|
||||
"Content-Type": "application/json"
|
||||
}
|
||||
|
||||
if "Authorization" in request.headers:
|
||||
headers["Authorization"] = request.headers["Authorization"]
|
||||
|
||||
url = f"{API_BASE_URL}{target_path}"
|
||||
|
||||
if target_method == "POST":
|
||||
api_res = requests.post(url, json=tool_args, headers=headers, timeout=30)
|
||||
elif target_method == "GET":
|
||||
api_res = requests.get(url, params=tool_args, headers=headers, timeout=30)
|
||||
else:
|
||||
api_res = None
|
||||
|
||||
if api_res:
|
||||
content = []
|
||||
try:
|
||||
json_content = api_res.json()
|
||||
content.append({
|
||||
"type": "text",
|
||||
"text": json.dumps(json_content, indent=2)
|
||||
})
|
||||
except (ValueError, json.JSONDecodeError):
|
||||
content.append({
|
||||
"type": "text",
|
||||
"text": api_res.text
|
||||
})
|
||||
|
||||
is_error = api_res.status_code >= 400
|
||||
response = {
|
||||
"jsonrpc": "2.0",
|
||||
"id": msg_id,
|
||||
"result": {
|
||||
"content": content,
|
||||
"isError": is_error
|
||||
}
|
||||
}
|
||||
else:
|
||||
response = {
|
||||
"jsonrpc": "2.0",
|
||||
"id": msg_id,
|
||||
"error": {"code": -32601, "message": f"Method {target_method} not supported"}
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
response = {
|
||||
"jsonrpc": "2.0",
|
||||
"id": msg_id,
|
||||
"result": {
|
||||
"content": [{"type": "text", "text": f"Error calling tool: {str(e)}"}],
|
||||
"isError": True
|
||||
}
|
||||
}
|
||||
else:
|
||||
response = {
|
||||
"jsonrpc": "2.0",
|
||||
"id": msg_id,
|
||||
"error": {"code": -32601, "message": f"Tool {tool_name} not found"}
|
||||
}
|
||||
|
||||
elif method == "ping":
|
||||
response = {
|
||||
"jsonrpc": "2.0",
|
||||
"id": msg_id,
|
||||
"result": {}
|
||||
}
|
||||
|
||||
else:
|
||||
# Unknown method
|
||||
if msg_id: # Only respond if it's a request (has id)
|
||||
response = {
|
||||
"jsonrpc": "2.0",
|
||||
"id": msg_id,
|
||||
"error": {"code": -32601, "message": "Method not found"}
|
||||
}
|
||||
|
||||
return response
|
||||
|
||||
|
||||
@mcp_bp.route('/sse', methods=['GET', 'POST'])
|
||||
def handle_sse():
|
||||
"""Expose an SSE endpoint that streams MCP responses to connected clients."""
|
||||
if request.method == 'POST':
|
||||
# Handle verification or keep-alive pings
|
||||
try:
|
||||
data = request.get_json(silent=True)
|
||||
if data and "method" in data and "jsonrpc" in data:
|
||||
response = process_mcp_request(data)
|
||||
if response:
|
||||
return jsonify(response)
|
||||
else:
|
||||
# Notification or no response needed
|
||||
return "", 202
|
||||
except Exception as e:
|
||||
# Log but don't fail - malformed requests shouldn't crash the endpoint
|
||||
logging.getLogger(__name__).debug(f"SSE POST processing error: {e}")
|
||||
|
||||
return jsonify({"status": "ok", "message": "MCP SSE endpoint active"}), 200
|
||||
|
||||
session_id = uuid.uuid4().hex
|
||||
q = queue.Queue()
|
||||
|
||||
with sessions_lock:
|
||||
sessions[session_id] = q
|
||||
|
||||
def stream():
|
||||
"""Yield SSE messages for queued MCP responses until the client disconnects."""
|
||||
# Send the endpoint event
|
||||
# The client should POST to /api/mcp/messages?session_id=<session_id>
|
||||
yield f"event: endpoint\ndata: /api/mcp/messages?session_id={session_id}\n\n"
|
||||
|
||||
try:
|
||||
while True:
|
||||
try:
|
||||
# Wait for messages
|
||||
message = q.get(timeout=20) # Keep-alive timeout
|
||||
yield f"event: message\ndata: {json.dumps(message)}\n\n"
|
||||
except queue.Empty:
|
||||
# Send keep-alive comment
|
||||
yield ": keep-alive\n\n"
|
||||
except GeneratorExit:
|
||||
with sessions_lock:
|
||||
if session_id in sessions:
|
||||
del sessions[session_id]
|
||||
|
||||
return Response(stream_with_context(stream()), mimetype='text/event-stream')
|
||||
|
||||
|
||||
@mcp_bp.route('/messages', methods=['POST'])
|
||||
def handle_messages():
|
||||
"""Receive MCP JSON-RPC messages and enqueue responses for an SSE session."""
|
||||
session_id = request.args.get('session_id')
|
||||
if not session_id:
|
||||
return jsonify({"error": "Missing session_id"}), 400
|
||||
|
||||
with sessions_lock:
|
||||
if session_id not in sessions:
|
||||
return jsonify({"error": "Session not found"}), 404
|
||||
q = sessions[session_id]
|
||||
|
||||
data = request.json
|
||||
if not data:
|
||||
return jsonify({"error": "Invalid JSON"}), 400
|
||||
|
||||
response = process_mcp_request(data)
|
||||
|
||||
if response:
|
||||
q.put(response)
|
||||
|
||||
return jsonify({"status": "accepted"}), 202
|
||||
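A rough sketch of the SSE handshake this blueprint implements, assuming it is registered under `/api/mcp` as the endpoint event suggests; the client code below is illustrative, not part of the project:

```python
import json
import requests

BASE_URL = "http://localhost:20212"   # placeholder host/port

# Open the SSE stream; the first "endpoint" event carries the messages URL
with requests.get(f"{BASE_URL}/api/mcp/sse", stream=True, timeout=60) as stream:
    lines = stream.iter_lines(decode_unicode=True)
    post_path = None
    for line in lines:
        if line.startswith("data: "):
            post_path = line[len("data: "):]   # /api/mcp/messages?session_id=...
            break

    # POST a JSON-RPC request; the response is pushed back on the SSE stream
    requests.post(f"{BASE_URL}{post_path}",
                  json={"jsonrpc": "2.0", "id": 1, "method": "ping"},
                  timeout=10)
    for line in lines:
        if line.startswith("data: "):
            print(json.loads(line[len("data: "):]))   # {"jsonrpc": "2.0", "id": 1, "result": {}}
            break
```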
@@ -1,83 +1,134 @@
+from front.plugins.plugin_helper import is_mac
 from logger import mylog
+from models.plugin_object_instance import PluginObjectInstance
+from database import get_temp_db_connection


 # -------------------------------------------------------------------------------
 # Device object handling (WIP)
 # -------------------------------------------------------------------------------
 class DeviceInstance:
-    def __init__(self, db):
-        self.db = db

-    # Get all
+    # --- helpers --------------------------------------------------------------
+    def _fetchall(self, query, params=()):
+        conn = get_temp_db_connection()
+        rows = conn.execute(query, params).fetchall()
+        conn.close()
+        return [dict(r) for r in rows]

+    def _fetchone(self, query, params=()):
+        conn = get_temp_db_connection()
+        row = conn.execute(query, params).fetchone()
+        conn.close()
+        return dict(row) if row else None

+    def _execute(self, query, params=()):
+        conn = get_temp_db_connection()
+        cur = conn.cursor()
+        cur.execute(query, params)
+        conn.commit()
+        conn.close()

+    # --- public API -----------------------------------------------------------
     def getAll(self):
-        self.db.sql.execute("""
-            SELECT * FROM Devices
-        """)
-        return self.db.sql.fetchall()
+        return self._fetchall("SELECT * FROM Devices")

     # Get all with unknown names
     def getUnknown(self):
-        self.db.sql.execute("""
-            SELECT * FROM Devices WHERE devName in ("(unknown)", "(name not found)", "" )
+        return self._fetchall("""
+            SELECT * FROM Devices
+            WHERE devName IN ("(unknown)", "(name not found)", "")
         """)
-        return self.db.sql.fetchall()

     # Get specific column value based on devMac
     def getValueWithMac(self, column_name, devMac):
-        query = f"SELECT {column_name} FROM Devices WHERE devMac = ?"
-        self.db.sql.execute(query, (devMac,))
-        result = self.db.sql.fetchone()
-        return result[column_name] if result else None
+        row = self._fetchone(f"""
+            SELECT {column_name} FROM Devices WHERE devMac = ?
+        """, (devMac,))
+        return row.get(column_name) if row else None

     # Get all down
     def getDown(self):
-        self.db.sql.execute("""
-            SELECT * FROM Devices WHERE devAlertDown = 1 and devPresentLastScan = 0
+        return self._fetchall("""
+            SELECT * FROM Devices
+            WHERE devAlertDown = 1 AND devPresentLastScan = 0
         """)
-        return self.db.sql.fetchall()

     # Get all down
     def getOffline(self):
-        self.db.sql.execute("""
-            SELECT * FROM Devices WHERE devPresentLastScan = 0
+        return self._fetchall("""
+            SELECT * FROM Devices
+            WHERE devPresentLastScan = 0
        """)
-        return self.db.sql.fetchall()

     # Get a device by devGUID
     def getByGUID(self, devGUID):
-        self.db.sql.execute("SELECT * FROM Devices WHERE devGUID = ?", (devGUID,))
-        result = self.db.sql.fetchone()
-        return dict(result) if result else None
+        return self._fetchone("""
+            SELECT * FROM Devices WHERE devGUID = ?
+        """, (devGUID,))

     # Check if a device exists by devGUID
     def exists(self, devGUID):
-        self.db.sql.execute(
-            "SELECT COUNT(*) AS count FROM Devices WHERE devGUID = ?", (devGUID,)
-        )
-        result = self.db.sql.fetchone()
-        return result["count"] > 0
+        row = self._fetchone("""
+            SELECT COUNT(*) as count FROM Devices WHERE devGUID = ?
+        """, (devGUID,))
+        return row['count'] > 0 if row else False

+    def getByIP(self, ip):
+        return self._fetchone("""
+            SELECT * FROM Devices WHERE devLastIP = ?
+        """, (ip,))

+    def search(self, query):
+        like = f"%{query}%"
+        return self._fetchall("""
+            SELECT * FROM Devices
+            WHERE devMac LIKE ? OR devName LIKE ? OR devLastIP LIKE ?
+        """, (like, like, like))

+    def getLatest(self):
+        return self._fetchone("""
+            SELECT * FROM Devices
+            ORDER BY devFirstConnection DESC LIMIT 1
+        """)

+    def getNetworkTopology(self):
+        rows = self._fetchall("""
+            SELECT devName, devMac, devParentMAC, devParentPort, devVendor FROM Devices
+        """)
+        nodes = [{"id": r["devMac"], "name": r["devName"], "vendor": r["devVendor"]} for r in rows]
+        links = [{"source": r["devParentMAC"], "target": r["devMac"], "port": r["devParentPort"]}
+                 for r in rows if r["devParentMAC"]]
+        return {"nodes": nodes, "links": links}

     # Update a specific field for a device
     def updateField(self, devGUID, field, value):
         if not self.exists(devGUID):
-            m = f"[Device] In 'updateField': GUID {devGUID} not found."
-            mylog("none", m)
-            raise ValueError(m)
+            msg = f"[Device] updateField: GUID {devGUID} not found"
+            mylog("none", msg)
+            raise ValueError(msg)
+        self._execute(f"UPDATE Devices SET {field}=? WHERE devGUID=?", (value, devGUID))

-        self.db.sql.execute(
-            f"""
-            UPDATE Devices SET {field} = ? WHERE devGUID = ?
-            """,
-            (value, devGUID),
-        )
-        self.db.commitDB()

     # Delete a device by devGUID
     def delete(self, devGUID):
         if not self.exists(devGUID):
-            m = f"[Device] In 'delete': GUID {devGUID} not found."
-            mylog("none", m)
-            raise ValueError(m)
+            msg = f"[Device] delete: GUID {devGUID} not found"
+            mylog("none", msg)
+            raise ValueError(msg)
+        self._execute("DELETE FROM Devices WHERE devGUID=?", (devGUID,))

-        self.db.sql.execute("DELETE FROM Devices WHERE devGUID = ?", (devGUID,))
-        self.db.commitDB()
+    def resolvePrimaryID(self, target):
+        if is_mac(target):
+            return target.lower()
+        dev = self.getByIP(target)
+        return dev['devMac'].lower() if dev else None

+    def getOpenPorts(self, target):
+        primary = self.resolvePrimaryID(target)
+        if not primary:
+            return []

+        objs = PluginObjectInstance().getByField(
+            plugPrefix='NMAP',
+            matchedColumn='Object_PrimaryID',
+            matchedKey=primary,
+            returnFields=['Object_SecondaryID', 'Watched_Value2']
+        )

+        ports = []
+        for o in objs:

+            port = int(o.get('Object_SecondaryID') or 0)

+            ports.append({"port": port, "service": o.get('Watched_Value2', '')})

+        return ports
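As a usage sketch, the rewritten class can now be constructed per request without a shared DB handle; the values below are illustrative:

```python
# Illustrative usage; MAC/IP values are made up
from models.device_instance import DeviceInstance

handler = DeviceInstance()                        # no db argument needed anymore
device = handler.getByIP("192.168.1.50")          # each helper opens its own connection
if device:
    ports = handler.getOpenPorts(device["devMac"])
    print(ports)                                  # e.g. [{"port": 22, "service": "ssh"}]
```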
107  server/models/event_instance.py  Normal file
@@ -0,0 +1,107 @@
from datetime import datetime, timedelta
from logger import mylog
from database import get_temp_db_connection


# -------------------------------------------------------------------------------
# Event handling (Matches table: Events)
# -------------------------------------------------------------------------------
class EventInstance:

    def _conn(self):
        """Always return a new DB connection (thread-safe)."""
        return get_temp_db_connection()

    def _rows_to_list(self, rows):
        return [dict(r) for r in rows]

    # Get all events
    def get_all(self):
        conn = self._conn()
        rows = conn.execute(
            "SELECT * FROM Events ORDER BY eve_DateTime DESC"
        ).fetchall()
        conn.close()
        return self._rows_to_list(rows)

    # --- Get last n events ---
    def get_last_n(self, n=10):
        conn = self._conn()
        rows = conn.execute("""
            SELECT * FROM Events
            ORDER BY eve_DateTime DESC
            LIMIT ?
        """, (n,)).fetchall()
        conn.close()
        return self._rows_to_list(rows)

    # --- Specific helper for last 10 ---
    def get_last(self):
        return self.get_last_n(10)

    # Get events in the last 24h
    def get_recent(self):
        since = datetime.now() - timedelta(hours=24)
        conn = self._conn()
        rows = conn.execute("""
            SELECT * FROM Events
            WHERE eve_DateTime >= ?
            ORDER BY eve_DateTime DESC
        """, (since,)).fetchall()
        conn.close()
        return self._rows_to_list(rows)

    # Get events from last N hours
    def get_by_hours(self, hours: int):
        if hours <= 0:
            mylog("minimal", f"[Events] get_by_hours({hours}) -> invalid value")
            return []

        since = datetime.now() - timedelta(hours=hours)
        conn = self._conn()
        rows = conn.execute("""
            SELECT * FROM Events
            WHERE eve_DateTime >= ?
            ORDER BY eve_DateTime DESC
        """, (since,)).fetchall()
        conn.close()
        return self._rows_to_list(rows)

    # Get events in a date range
    def get_by_range(self, start: datetime, end: datetime):
        if end < start:
            mylog("none", f"[Events] get_by_range invalid: {start} > {end}")
            raise ValueError("Start must not be after end")

        conn = self._conn()
        rows = conn.execute("""
            SELECT * FROM Events
            WHERE eve_DateTime BETWEEN ? AND ?
            ORDER BY eve_DateTime DESC
        """, (start, end)).fetchall()
        conn.close()
        return self._rows_to_list(rows)

    # Insert new event
    def add(self, mac, ip, eventType, info="", pendingAlert=True, pairRow=None):
        conn = self._conn()
        conn.execute("""
            INSERT INTO Events (
                eve_MAC, eve_IP, eve_DateTime,
                eve_EventType, eve_AdditionalInfo,
                eve_PendingAlertEmail, eve_PairEventRowid
            ) VALUES (?,?,?,?,?,?,?)
        """, (mac, ip, datetime.now(), eventType, info,
              1 if pendingAlert else 0, pairRow))
        conn.commit()
        conn.close()

    # Delete old events
    def delete_older_than(self, days: int):
        cutoff = datetime.now() - timedelta(days=days)
        conn = self._conn()
        result = conn.execute("DELETE FROM Events WHERE eve_DateTime < ?", (cutoff,))
        conn.commit()
        deleted_count = result.rowcount
        conn.close()
        return deleted_count
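A short illustrative sketch of `EventInstance` in a handler context; the values are made up:

```python
# Illustrative only; values are made up
from models.event_instance import EventInstance

events = EventInstance()                       # thread-safe: every call opens a fresh connection
events.add("AA:BB:CC:DD:EE:FF", "192.168.1.50", "Connected", info="seen by ARPSCAN")
recent = events.get_by_hours(6)                # list of dicts, newest first
pruned = events.delete_older_than(90)          # number of rows removed
```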
@@ -1,70 +1,91 @@
 from logger import mylog
+from database import get_temp_db_connection


 # -------------------------------------------------------------------------------
-# Plugin object handling (WIP)
+# Plugin object handling (THREAD-SAFE REWRITE)
 # -------------------------------------------------------------------------------
 class PluginObjectInstance:
-    def __init__(self, db):
-        self.db = db

-    # Get all plugin objects
+    # -------------- Internal DB helper wrappers --------------------------------
+    def _fetchall(self, query, params=()):
+        conn = get_temp_db_connection()
+        rows = conn.execute(query, params).fetchall()
+        conn.close()
+        return [dict(r) for r in rows]

+    def _fetchone(self, query, params=()):
+        conn = get_temp_db_connection()
+        row = conn.execute(query, params).fetchone()
+        conn.close()
+        return dict(row) if row else None

+    def _execute(self, query, params=()):
+        conn = get_temp_db_connection()
+        conn.execute(query, params)
+        conn.commit()
+        conn.close()

+    # ---------------------------------------------------------------------------
+    # Public API — identical behaviour, now thread-safe + self-contained
+    # ---------------------------------------------------------------------------

     def getAll(self):
-        self.db.sql.execute("""
-            SELECT * FROM Plugins_Objects
-        """)
-        return self.db.sql.fetchall()
+        return self._fetchall("SELECT * FROM Plugins_Objects")

     # Get plugin object by ObjectGUID
     def getByGUID(self, ObjectGUID):
-        self.db.sql.execute(
+        return self._fetchone(
             "SELECT * FROM Plugins_Objects WHERE ObjectGUID = ?", (ObjectGUID,)
         )
-        result = self.db.sql.fetchone()
-        return dict(result) if result else None

     # Check if a plugin object exists by ObjectGUID
     def exists(self, ObjectGUID):
-        self.db.sql.execute(
-            "SELECT COUNT(*) AS count FROM Plugins_Objects WHERE ObjectGUID = ?",
-            (ObjectGUID,),
-        )
-        result = self.db.sql.fetchone()
-        return result["count"] > 0
+        row = self._fetchone("""
+            SELECT COUNT(*) AS count FROM Plugins_Objects WHERE ObjectGUID = ?
+        """, (ObjectGUID,))
+        return row["count"] > 0 if row else False

     # Get objects by plugin name
     def getByPlugin(self, plugin):
-        self.db.sql.execute("SELECT * FROM Plugins_Objects WHERE Plugin = ?", (plugin,))
-        return self.db.sql.fetchall()
+        return self._fetchall(
+            "SELECT * FROM Plugins_Objects WHERE Plugin = ?", (plugin,)
+        )

+    def getByField(self, plugPrefix, matchedColumn, matchedKey, returnFields=None):
+        rows = self._fetchall(
+            f"SELECT * FROM Plugins_Objects WHERE Plugin = ? AND {matchedColumn} = ?",
+            (plugPrefix, matchedKey.lower())
+        )

+        if not returnFields:
+            return rows

+        return [{f: row.get(f) for f in returnFields} for row in rows]

+    def getByPrimary(self, plugin, primary_id):
+        return self._fetchall("""
+            SELECT * FROM Plugins_Objects
+            WHERE Plugin = ? AND Object_PrimaryID = ?
+        """, (plugin, primary_id))

     # Get objects by status
     def getByStatus(self, status):
-        self.db.sql.execute("SELECT * FROM Plugins_Objects WHERE Status = ?", (status,))
-        return self.db.sql.fetchall()
+        return self._fetchall("""
+            SELECT * FROM Plugins_Objects WHERE Status = ?
+        """, (status,))

     # Update a specific field for a plugin object
     def updateField(self, ObjectGUID, field, value):
         if not self.exists(ObjectGUID):
-            m = f"[PluginObject] In 'updateField': GUID {ObjectGUID} not found."
-            mylog("none", m)
-            raise ValueError(m)
+            msg = f"[PluginObject] updateField: GUID {ObjectGUID} not found."
+            mylog("none", msg)
+            raise ValueError(msg)

-        self.db.sql.execute(
-            f"""
-            UPDATE Plugins_Objects SET {field} = ? WHERE ObjectGUID = ?
-            """,
-            (value, ObjectGUID),
+        self._execute(
+            f"UPDATE Plugins_Objects SET {field}=? WHERE ObjectGUID=?",
+            (value, ObjectGUID)
         )
-        self.db.commitDB()

     # Delete a plugin object by ObjectGUID
     def delete(self, ObjectGUID):
         if not self.exists(ObjectGUID):
-            m = f"[PluginObject] In 'delete': GUID {ObjectGUID} not found."
-            mylog("none", m)
-            raise ValueError(m)
+            msg = f"[PluginObject] delete: GUID {ObjectGUID} not found."
+            mylog("none", msg)
+            raise ValueError(msg)

-        self.db.sql.execute(
-            "DELETE FROM Plugins_Objects WHERE ObjectGUID = ?", (ObjectGUID,)
-        )
-        self.db.commitDB()
+        self._execute("DELETE FROM Plugins_Objects WHERE ObjectGUID=?", (ObjectGUID,))
@@ -650,7 +650,7 @@ def update_devices_names(pm):

     sql = pm.db.sql
     resolver = NameResolver(pm.db)
-    device_handler = DeviceInstance(pm.db)
+    device_handler = DeviceInstance()

     nameNotFound = "(name not found)"

@@ -42,13 +42,13 @@ class UpdateFieldAction(Action):
         # currently unused
         if isinstance(obj, dict) and "ObjectGUID" in obj:
             mylog("debug", f"[WF] Updating Object '{obj}' ")
-            plugin_instance = PluginObjectInstance(self.db)
+            plugin_instance = PluginObjectInstance()
             plugin_instance.updateField(obj["ObjectGUID"], self.field, self.value)
             processed = True

         elif isinstance(obj, dict) and "devGUID" in obj:
             mylog("debug", f"[WF] Updating Device '{obj}' ")
-            device_instance = DeviceInstance(self.db)
+            device_instance = DeviceInstance()
             device_instance.updateField(obj["devGUID"], self.field, self.value)
             processed = True

@@ -79,13 +79,13 @@ class DeleteObjectAction(Action):
         # currently unused
         if isinstance(obj, dict) and "ObjectGUID" in obj:
             mylog("debug", f"[WF] Updating Object '{obj}' ")
-            plugin_instance = PluginObjectInstance(self.db)
+            plugin_instance = PluginObjectInstance()
             plugin_instance.delete(obj["ObjectGUID"])
             processed = True

         elif isinstance(obj, dict) and "devGUID" in obj:
             mylog("debug", f"[WF] Updating Device '{obj}' ")
-            device_instance = DeviceInstance(self.db)
+            device_instance = DeviceInstance()
             device_instance.delete(obj["devGUID"])
             processed = True

306  test/api_endpoints/test_mcp_tools_endpoints.py  Normal file
@@ -0,0 +1,306 @@
import sys
import os
import pytest
from unittest.mock import patch, MagicMock
from datetime import datetime

INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])

from helper import get_setting_value  # noqa: E402
from api_server.api_server_start import app  # noqa: E402


@pytest.fixture(scope="session")
def api_token():
    return get_setting_value("API_TOKEN")


@pytest.fixture
def client():
    with app.test_client() as client:
        yield client


def auth_headers(token):
    return {"Authorization": f"Bearer {token}"}


# --- Device Search Tests ---

@patch('models.device_instance.get_temp_db_connection')
def test_get_device_info_ip_partial(mock_db_conn, client, api_token):
    """Test device search with partial IP search."""
    # Mock database connection - DeviceInstance._fetchall calls conn.execute().fetchall()
    mock_conn = MagicMock()
    mock_execute_result = MagicMock()
    mock_execute_result.fetchall.return_value = [
        {"devName": "Test Device", "devMac": "AA:BB:CC:DD:EE:FF", "devLastIP": "192.168.1.50"}
    ]
    mock_conn.execute.return_value = mock_execute_result
    mock_db_conn.return_value = mock_conn

    payload = {"query": ".50"}
    response = client.post('/devices/search',
                           json=payload,
                           headers=auth_headers(api_token))

    assert response.status_code == 200
    data = response.get_json()
    assert data["success"] is True
    assert len(data["devices"]) == 1
    assert data["devices"][0]["devLastIP"] == "192.168.1.50"


# --- Trigger Scan Tests ---

@patch('api_server.api_server_start.UserEventsQueueInstance')
def test_trigger_scan_ARPSCAN(mock_queue_class, client, api_token):
    """Test trigger_scan with ARPSCAN type."""
    mock_queue = MagicMock()
    mock_queue_class.return_value = mock_queue

    payload = {"type": "ARPSCAN"}
    response = client.post('/mcp/sse/nettools/trigger-scan',
                           json=payload,
                           headers=auth_headers(api_token))

    assert response.status_code == 200
    data = response.get_json()
    assert data["success"] is True
    mock_queue.add_event.assert_called_once()
    call_args = mock_queue.add_event.call_args[0]
    assert "run|ARPSCAN" in call_args[0]


@patch('api_server.api_server_start.UserEventsQueueInstance')
def test_trigger_scan_invalid_type(mock_queue_class, client, api_token):
    """Test trigger_scan with invalid scan type."""
    mock_queue = MagicMock()
    mock_queue_class.return_value = mock_queue

    payload = {"type": "invalid_type", "target": "192.168.1.0/24"}
    response = client.post('/mcp/sse/nettools/trigger-scan',
                           json=payload,
                           headers=auth_headers(api_token))

    assert response.status_code == 400
    data = response.get_json()
    assert data["success"] is False


# --- get_open_ports Tests ---


@patch('models.plugin_object_instance.get_temp_db_connection')
@patch('models.device_instance.get_temp_db_connection')
def test_get_open_ports_ip(mock_device_db_conn, mock_plugin_db_conn, client, api_token):
    """Test get_open_ports with an IP address."""
    # Mock database connections for both device lookup and plugin objects
    mock_conn = MagicMock()
    mock_execute_result = MagicMock()

    # Mock for PluginObjectInstance.getByField (returns port data)
    mock_execute_result.fetchall.return_value = [
        {"Object_SecondaryID": "22", "Watched_Value2": "ssh"},
        {"Object_SecondaryID": "80", "Watched_Value2": "http"}
    ]
    # Mock for DeviceInstance.getByIP (returns device with MAC)
    mock_execute_result.fetchone.return_value = {"devMac": "AA:BB:CC:DD:EE:FF"}

    mock_conn.execute.return_value = mock_execute_result
    mock_plugin_db_conn.return_value = mock_conn
    mock_device_db_conn.return_value = mock_conn

    payload = {"target": "192.168.1.1"}
    response = client.post('/device/open_ports',
                           json=payload,
                           headers=auth_headers(api_token))

    assert response.status_code == 200
    data = response.get_json()
    assert data["success"] is True
    assert len(data["open_ports"]) == 2
    assert data["open_ports"][0]["port"] == 22
    assert data["open_ports"][1]["service"] == "http"


@patch('models.plugin_object_instance.get_temp_db_connection')
def test_get_open_ports_mac_resolve(mock_plugin_db_conn, client, api_token):
    """Test get_open_ports with a MAC address that resolves to an IP."""
    # Mock database connection for MAC-based open ports query
    mock_conn = MagicMock()
    mock_execute_result = MagicMock()
    mock_execute_result.fetchall.return_value = [
        {"Object_SecondaryID": "80", "Watched_Value2": "http"}
    ]
    mock_conn.execute.return_value = mock_execute_result
    mock_plugin_db_conn.return_value = mock_conn

    payload = {"target": "AA:BB:CC:DD:EE:FF"}
    response = client.post('/device/open_ports',
                           json=payload,
                           headers=auth_headers(api_token))

    assert response.status_code == 200
    data = response.get_json()
    assert data["success"] is True
    assert "target" in data
    assert len(data["open_ports"]) == 1
    assert data["open_ports"][0]["port"] == 80


# --- get_network_topology Tests ---
@patch('models.device_instance.get_temp_db_connection')
def test_get_network_topology(mock_db_conn, client, api_token):
    """Test get_network_topology."""
    # Mock database connection for topology query
    mock_conn = MagicMock()
    mock_execute_result = MagicMock()
    mock_execute_result.fetchall.return_value = [
        {"devName": "Router", "devMac": "AA:AA:AA:AA:AA:AA", "devParentMAC": None, "devParentPort": None, "devVendor": "VendorA"},
        {"devName": "Device1", "devMac": "BB:BB:BB:BB:BB:BB", "devParentMAC": "AA:AA:AA:AA:AA:AA", "devParentPort": "eth1", "devVendor": "VendorB"}
    ]
    mock_conn.execute.return_value = mock_execute_result
    mock_db_conn.return_value = mock_conn

    response = client.get('/devices/network/topology',
                          headers=auth_headers(api_token))

    assert response.status_code == 200
    data = response.get_json()
    assert len(data["nodes"]) == 2
    assert len(data["links"]) == 1
    assert data["links"][0]["source"] == "AA:AA:AA:AA:AA:AA"
    assert data["links"][0]["target"] == "BB:BB:BB:BB:BB:BB"


# --- get_recent_alerts Tests ---
@patch('models.event_instance.get_temp_db_connection')
def test_get_recent_alerts(mock_db_conn, client, api_token):
    """Test get_recent_alerts."""
    # Mock database connection for events query
    mock_conn = MagicMock()
    mock_execute_result = MagicMock()
    now = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    mock_execute_result.fetchall.return_value = [
        {"eve_DateTime": now, "eve_EventType": "New Device", "eve_MAC": "AA:BB:CC:DD:EE:FF"}
    ]
    mock_conn.execute.return_value = mock_execute_result
    mock_db_conn.return_value = mock_conn

    response = client.get('/events/recent',
                          headers=auth_headers(api_token))

    assert response.status_code == 200
    data = response.get_json()
    assert data["success"] is True
    assert data["hours"] == 24


# --- Device Alias Tests ---

@patch('api_server.api_server_start.update_device_column')
def test_set_device_alias(mock_update_col, client, api_token):
    """Test set_device_alias."""
    mock_update_col.return_value = {"success": True, "message": "Device alias updated"}

    payload = {"alias": "New Device Name"}
    response = client.post('/device/AA:BB:CC:DD:EE:FF/set-alias',
                           json=payload,
                           headers=auth_headers(api_token))

    assert response.status_code == 200
    data = response.get_json()
    assert data["success"] is True
    mock_update_col.assert_called_once_with("AA:BB:CC:DD:EE:FF", "devName", "New Device Name")


@patch('api_server.api_server_start.update_device_column')
def test_set_device_alias_not_found(mock_update_col, client, api_token):
    """Test set_device_alias when device is not found."""
    mock_update_col.return_value = {"success": False, "error": "Device not found"}

    payload = {"alias": "New Device Name"}
    response = client.post('/device/FF:FF:FF:FF:FF:FF/set-alias',
                           json=payload,
                           headers=auth_headers(api_token))

    assert response.status_code == 200
    data = response.get_json()
    assert data["success"] is False
    assert "Device not found" in data["error"]


# --- Wake-on-LAN Tests ---

@patch('api_server.api_server_start.wakeonlan')
def test_wol_wake_device(mock_wakeonlan, client, api_token):
    """Test wol_wake_device."""
    mock_wakeonlan.return_value = {"success": True, "message": "WOL packet sent to AA:BB:CC:DD:EE:FF"}

    payload = {"devMac": "AA:BB:CC:DD:EE:FF"}
    response = client.post('/nettools/wakeonlan',
                           json=payload,
                           headers=auth_headers(api_token))

    assert response.status_code == 200
    data = response.get_json()
    assert data["success"] is True
    assert "AA:BB:CC:DD:EE:FF" in data["message"]


def test_wol_wake_device_invalid_mac(client, api_token):
    """Test wol_wake_device with invalid MAC."""
    payload = {"devMac": "invalid-mac"}
    response = client.post('/nettools/wakeonlan',
                           json=payload,
                           headers=auth_headers(api_token))

    assert response.status_code == 400
    data = response.get_json()
    assert data["success"] is False


# --- OpenAPI Spec Tests ---

# --- Latest Device Tests ---

@patch('models.device_instance.get_temp_db_connection')
def test_get_latest_device(mock_db_conn, client, api_token):
    """Test get_latest_device endpoint."""
    # Mock database connection for latest device query
    mock_conn = MagicMock()
    mock_execute_result = MagicMock()
    mock_execute_result.fetchone.return_value = {
        "devName": "Latest Device",
        "devMac": "AA:BB:CC:DD:EE:FF",
        "devLastIP": "192.168.1.100",
        "devFirstConnection": "2025-12-07 10:30:00"
    }
    mock_conn.execute.return_value = mock_execute_result
    mock_db_conn.return_value = mock_conn

    response = client.get('/devices/latest',
                          headers=auth_headers(api_token))

    assert response.status_code == 200
    data = response.get_json()
    assert len(data) == 1
    assert data[0]["devName"] == "Latest Device"
    assert data[0]["devMac"] == "AA:BB:CC:DD:EE:FF"


def test_openapi_spec(client, api_token):
    """Test openapi_spec endpoint contains MCP tool paths."""
    response = client.get('/mcp/sse/openapi.json', headers=auth_headers(api_token))
    assert response.status_code == 200
    spec = response.get_json()

    # Check for MCP tool endpoints in the spec with correct paths
    assert "/nettools/trigger-scan" in spec["paths"]
    assert "/device/open_ports" in spec["paths"]
    assert "/devices/network/topology" in spec["paths"]
    assert "/events/recent" in spec["paths"]
    assert "/device/{mac}/set-alias" in spec["paths"]
    assert "/nettools/wakeonlan" in spec["paths"]