Mirror of https://github.com/jokob-sk/NetAlertX.git (synced 2025-12-07 09:36:05 -08:00)

Compare commits: v25.7.30...next_release (650 commits)
.devcontainer/Dockerfile (Executable file, 280 changed lines)
@@ -0,0 +1,280 @@

# DO NOT MODIFY THIS FILE DIRECTLY. IT IS AUTO-GENERATED BY .devcontainer/scripts/generate-configs.sh

# ---/Dockerfile---
# The NetAlertX Dockerfile has 3 stages:
#
# Stage 1. Builder - NetAlertX requires special tools and packages to build our virtual environment, but
# which are not needed in future stages. We build the builder and extract the venv for runner to use as
# a base.
#
# Stage 2. Runner builds the bare minimum requirements to create an operational NetAlertX. The primary
# reason for breaking at this stage is that it leaves the system in a proper state for devcontainer operation.
# This image also provides a break-out point for users who wish to execute the anti-pattern of using a
# docker container as a VM for experimentation and various development patterns.
#
# Stage 3. Hardened removes root, sudoers, folders, permissions, and locks the system down into a read-only
# compatible image. While NetAlertX does require some read-write operations, this image can guarantee the
# code pushed out by the project is the only code which will run on the system after each container restart.
# It reduces the chance of system hijacking and operates with all modern security protocols in place, as is
# expected from a security appliance.
#
# This file can be built with `docker-compose -f docker-compose.yml up --build --force-recreate`

FROM alpine:3.22 AS builder

ARG INSTALL_DIR=/app

ENV PYTHONUNBUFFERED=1
ENV PATH="/opt/venv/bin:$PATH"

# Install build dependencies
COPY requirements.txt /tmp/requirements.txt
RUN apk add --no-cache bash shadow python3 python3-dev gcc musl-dev libffi-dev openssl-dev git \
    && python -m venv /opt/venv

# Create virtual environment owned by root, but readable by everyone else. This makes it easy to copy
# into hardened stage without worrying about permissions and keeps image size small. Keeping the commands
# together makes for a slightly smaller image size.
RUN pip install --no-cache-dir -r /tmp/requirements.txt && \
    chmod -R u-rwx,g-rwx /opt

# Second stage is the main runtime stage with just the minimum required to run the application.
# The runner is used for both devcontainer, and as a base for the hardened stage.
FROM alpine:3.22 AS runner

ARG INSTALL_DIR=/app

# NetAlertX app directories
ENV NETALERTX_APP=${INSTALL_DIR}
ENV NETALERTX_DATA=/data
ENV NETALERTX_CONFIG=${NETALERTX_DATA}/config
ENV NETALERTX_FRONT=${NETALERTX_APP}/front
ENV NETALERTX_PLUGINS=${NETALERTX_FRONT}/plugins
ENV NETALERTX_SERVER=${NETALERTX_APP}/server
ENV NETALERTX_API=/tmp/api
ENV NETALERTX_DB=${NETALERTX_DATA}/db
ENV NETALERTX_DB_FILE=${NETALERTX_DB}/app.db
ENV NETALERTX_BACK=${NETALERTX_APP}/back
ENV NETALERTX_LOG=/tmp/log
ENV NETALERTX_PLUGINS_LOG=${NETALERTX_LOG}/plugins
ENV NETALERTX_CONFIG_FILE=${NETALERTX_CONFIG}/app.conf

# NetAlertX log files
ENV LOG_IP_CHANGES=${NETALERTX_LOG}/IP_changes.log
ENV LOG_APP=${NETALERTX_LOG}/app.log
ENV LOG_APP_FRONT=${NETALERTX_LOG}/app_front.log
ENV LOG_REPORT_OUTPUT_TXT=${NETALERTX_LOG}/report_output.txt
ENV LOG_DB_IS_LOCKED=${NETALERTX_LOG}/db_is_locked.log
ENV LOG_REPORT_OUTPUT_HTML=${NETALERTX_LOG}/report_output.html
ENV LOG_STDERR=${NETALERTX_LOG}/stderr.log
ENV LOG_APP_PHP_ERRORS=${NETALERTX_LOG}/app.php_errors.log
ENV LOG_EXECUTION_QUEUE=${NETALERTX_LOG}/execution_queue.log
ENV LOG_REPORT_OUTPUT_JSON=${NETALERTX_LOG}/report_output.json
ENV LOG_STDOUT=${NETALERTX_LOG}/stdout.log
ENV LOG_CRON=${NETALERTX_LOG}/cron.log
ENV LOG_NGINX_ERROR=${NETALERTX_LOG}/nginx-error.log

# System Services configuration files
ENV ENTRYPOINT_CHECKS=/entrypoint.d
ENV SYSTEM_SERVICES=/services
ENV SYSTEM_SERVICES_SCRIPTS=${SYSTEM_SERVICES}/scripts
ENV SYSTEM_SERVICES_CONFIG=${SYSTEM_SERVICES}/config
ENV SYSTEM_NGINX_CONFIG=${SYSTEM_SERVICES_CONFIG}/nginx
ENV SYSTEM_NGINX_CONFIG_TEMPLATE=${SYSTEM_NGINX_CONFIG}/netalertx.conf.template
ENV SYSTEM_SERVICES_CONFIG_CRON=${SYSTEM_SERVICES_CONFIG}/cron
ENV SYSTEM_SERVICES_ACTIVE_CONFIG=/tmp/nginx/active-config
ENV SYSTEM_SERVICES_ACTIVE_CONFIG_FILE=${SYSTEM_SERVICES_ACTIVE_CONFIG}/nginx.conf
ENV SYSTEM_SERVICES_PHP_FOLDER=${SYSTEM_SERVICES_CONFIG}/php
ENV SYSTEM_SERVICES_PHP_FPM_D=${SYSTEM_SERVICES_PHP_FOLDER}/php-fpm.d
ENV SYSTEM_SERVICES_RUN=/tmp/run
ENV SYSTEM_SERVICES_RUN_TMP=${SYSTEM_SERVICES_RUN}/tmp
ENV SYSTEM_SERVICES_RUN_LOG=${SYSTEM_SERVICES_RUN}/logs
ENV PHP_FPM_CONFIG_FILE=${SYSTEM_SERVICES_PHP_FOLDER}/php-fpm.conf
ENV READ_ONLY_FOLDERS="${NETALERTX_BACK} ${NETALERTX_FRONT} ${NETALERTX_SERVER} ${SYSTEM_SERVICES} \
    ${SYSTEM_SERVICES_CONFIG} ${ENTRYPOINT_CHECKS}"
ENV READ_WRITE_FOLDERS="${NETALERTX_DATA} ${NETALERTX_CONFIG} ${NETALERTX_DB} ${NETALERTX_API} \
    ${NETALERTX_LOG} ${NETALERTX_PLUGINS_LOG} ${SYSTEM_SERVICES_RUN} \
    ${SYSTEM_SERVICES_RUN_TMP} ${SYSTEM_SERVICES_RUN_LOG} \
    ${SYSTEM_SERVICES_ACTIVE_CONFIG}"

# Python environment
ENV PYTHONUNBUFFERED=1
ENV VIRTUAL_ENV=/opt/venv
ENV VIRTUAL_ENV_BIN=/opt/venv/bin
ENV PYTHONPATH=${NETALERTX_APP}:${NETALERTX_SERVER}:${NETALERTX_PLUGINS}:${VIRTUAL_ENV}/lib/python3.12/site-packages
ENV PATH="${SYSTEM_SERVICES}:${VIRTUAL_ENV_BIN}:$PATH"

# App Environment
ENV LISTEN_ADDR=0.0.0.0
ENV PORT=20211
ENV NETALERTX_DEBUG=0
ENV VENDORSPATH=/app/back/ieee-oui.txt
ENV VENDORSPATH_NEWEST=${SYSTEM_SERVICES_RUN_TMP}/ieee-oui.txt
ENV ENVIRONMENT=alpine
ENV READ_ONLY_USER=readonly READ_ONLY_GROUP=readonly
ENV NETALERTX_USER=netalertx NETALERTX_GROUP=netalertx
ENV LANG=C.UTF-8


RUN apk add --no-cache bash mtr libbsd zip lsblk tzdata curl arp-scan iproute2 iproute2-ss nmap \
    nmap-scripts traceroute nbtscan net-tools net-snmp-tools bind-tools awake ca-certificates \
    sqlite php83 php83-fpm php83-cgi php83-curl php83-sqlite3 php83-session python3 envsubst \
    nginx supercronic shadow && \
    rm -Rf /var/cache/apk/* && \
    rm -Rf /etc/nginx && \
    addgroup -g 20211 ${NETALERTX_GROUP} && \
    adduser -u 20211 -D -h ${NETALERTX_APP} -G ${NETALERTX_GROUP} ${NETALERTX_USER} && \
    apk del shadow


# Install application, copy files, set permissions
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} install/production-filesystem/ /
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} --chmod=755 back ${NETALERTX_BACK}
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} --chmod=755 front ${NETALERTX_FRONT}
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} --chmod=755 server ${NETALERTX_SERVER}

# Create required folders with correct ownership and permissions
RUN install -d -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} -m 700 ${READ_WRITE_FOLDERS} && \
    sh -c "find ${NETALERTX_APP} -type f \( -name '*.sh' -o -name 'speedtest-cli' \) \
    -exec chmod 750 {} \;"

# Copy version information into the image
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} .[V]ERSION ${NETALERTX_APP}/.VERSION

# Copy the virtualenv from the builder stage
COPY --from=builder --chown=20212:20212 ${VIRTUAL_ENV} ${VIRTUAL_ENV}

# Initialize each service with the dockerfiles/init-*.sh scripts, once.
# This is done after the copy of the venv to ensure the venv is in place;
# although it may be quicker to do it before the copy, it keeps the image
# layers smaller to do it after.
RUN if [ -f '.VERSION' ]; then \
        cp '.VERSION' "${NETALERTX_APP}/.VERSION"; \
    else \
        echo "DEVELOPMENT 00000000" > "${NETALERTX_APP}/.VERSION"; \
    fi && \
    chown 20212:20212 "${NETALERTX_APP}/.VERSION" && \
    apk add --no-cache libcap && \
    setcap cap_net_raw+ep /bin/busybox && \
    setcap cap_net_raw,cap_net_admin+eip /usr/bin/nmap && \
    setcap cap_net_raw,cap_net_admin+eip /usr/bin/arp-scan && \
    setcap cap_net_raw,cap_net_admin,cap_net_bind_service+eip /usr/bin/nbtscan && \
    setcap cap_net_raw,cap_net_admin+eip /usr/bin/traceroute && \
    setcap cap_net_raw,cap_net_admin+eip "$(readlink -f ${VIRTUAL_ENV_BIN}/python)" && \
    /bin/sh /build/init-nginx.sh && \
    /bin/sh /build/init-php-fpm.sh && \
    /bin/sh /build/init-cron.sh && \
    /bin/sh /build/init-backend.sh && \
    rm -rf /build && \
    apk del libcap && \
    date +%s > "${NETALERTX_FRONT}/buildtimestamp.txt"


ENTRYPOINT ["/bin/sh","/entrypoint.sh"]

# Final hardened stage to improve security by setting least possible permissions and removing sudo access.
# When complete, if the image is compromised, there's not much that can be done with it.
# This stage is separate from the Runner stage so that the devcontainer can use the Runner stage.
FROM runner AS hardened

ENV UMASK=0077

# Create readonly user and group with no shell access.
# Readonly user marks folders that are created by NetAlertX, but should not be modified.
# AI may claim this is stupid, but it's actually least possible permissions as
# read-only user cannot login, cannot sudo, has no write permission, and cannot even
# read the files it owns. The read-only user is ownership-as-a-lock hardening pattern.
RUN addgroup -g 20212 "${READ_ONLY_GROUP}" && \
    adduser -u 20212 -G "${READ_ONLY_GROUP}" -D -h /app "${READ_ONLY_USER}"


# Reduce permissions to the minimum necessary for all NetAlertX files and folders.
# Permissions 005 and 004 are not typos, they enable read-only. Everyone can
# read the read-only files, and nobody can write to them, even the readonly user.

# hadolint ignore=SC2114
RUN chown -R ${READ_ONLY_USER}:${READ_ONLY_GROUP} ${READ_ONLY_FOLDERS} && \
    chmod -R 004 ${READ_ONLY_FOLDERS} && \
    find ${READ_ONLY_FOLDERS} -type d -exec chmod 005 {} + && \
    install -d -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} -m 700 ${READ_WRITE_FOLDERS} && \
    chown -R ${NETALERTX_USER}:${NETALERTX_GROUP} ${READ_WRITE_FOLDERS} && \
    chmod -R 600 ${READ_WRITE_FOLDERS} && \
    find ${READ_WRITE_FOLDERS} -type d -exec chmod 700 {} + && \
    chown ${READ_ONLY_USER}:${READ_ONLY_GROUP} /entrypoint.sh /opt /opt/venv && \
    chmod 005 /entrypoint.sh ${SYSTEM_SERVICES}/*.sh ${SYSTEM_SERVICES_SCRIPTS}/* ${ENTRYPOINT_CHECKS}/* /app /opt /opt/venv && \
    for dir in ${READ_WRITE_FOLDERS}; do \
        install -d -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} -m 700 "$dir"; \
    done && \
    apk del apk-tools && \
    rm -Rf /var /etc/sudoers.d/* /etc/shadow /etc/gshadow /etc/sudoers \
    /lib/apk /lib/firmware /lib/modules-load.d /lib/sysctl.d /mnt /home/ /root \
    /srv /media && \
    sed -i "/^\(${READ_ONLY_USER}\|${NETALERTX_USER}\):/!d" /etc/passwd && \
    sed -i "/^\(${READ_ONLY_GROUP}\|${NETALERTX_GROUP}\):/!d" /etc/group && \
    printf '#!/bin/sh\n"$@"\n' > /usr/bin/sudo && chmod +x /usr/bin/sudo

USER netalertx

HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
    CMD /services/healthcheck.sh


# ---/resources/devcontainer-Dockerfile---

# Devcontainer build stage (do not build directly)
# This file is combined with the root /Dockerfile by
# .devcontainer/scripts/generate-configs.sh
# The generator appends this stage to produce .devcontainer/Dockerfile.
# Prefer to place dev-only setup here; use setup.sh only for runtime fixes.
# Permissions in devcontainer should be of a brutalist nature. They will be
# open and wide to avoid permission issues during development, allowing max
# flexibility.

# hadolint ignore=DL3006
FROM runner AS netalertx-devcontainer
ENV INSTALL_DIR=/app

ENV PYTHONPATH=${PYTHONPATH}:/workspaces/NetAlertX/test:/workspaces/NetAlertX/server:/usr/lib/python3.12/site-packages
ENV PATH=/services:${PATH}
ENV PHP_INI_SCAN_DIR=/services/config/php/conf.d:/etc/php83/conf.d
ENV LISTEN_ADDR=0.0.0.0
ENV PORT=20211
ENV NETALERTX_DEBUG=1
ENV PYDEVD_DISABLE_FILE_VALIDATION=1
COPY .devcontainer/resources/devcontainer-overlay/ /
USER root
# Install common tools, create user, and set up sudo

RUN apk add --no-cache git nano vim jq php83-pecl-xdebug py3-pip nodejs sudo gpgconf pytest \
    pytest-cov zsh alpine-zsh-config shfmt github-cli py3-yaml py3-docker-py docker-cli docker-cli-buildx \
    docker-cli-compose shellcheck

# Install hadolint (Dockerfile linter)
RUN curl -L https://github.com/hadolint/hadolint/releases/latest/download/hadolint-Linux-x86_64 -o /usr/local/bin/hadolint && \
    chmod +x /usr/local/bin/hadolint

RUN install -d -o netalertx -g netalertx -m 755 /services/php/modules && \
    cp -a /usr/lib/php83/modules/. /services/php/modules/ && \
    echo "${NETALERTX_USER} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
ENV SHELL=/bin/zsh

RUN mkdir -p /workspaces && \
    install -d -m 777 /data /data/config /data/db && \
    install -d -m 777 /tmp/log /tmp/log/plugins /tmp/api /tmp/run /tmp/nginx && \
    install -d -m 777 /tmp/nginx/active-config /tmp/nginx/client_body /tmp/nginx/config && \
    install -d -m 777 /tmp/nginx/fastcgi /tmp/nginx/proxy /tmp/nginx/scgi /tmp/nginx/uwsgi && \
    install -d -m 777 /tmp/run/tmp /tmp/run/logs && \
    chmod 777 /workspaces && \
    chown -R netalertx:netalertx /data && \
    chmod 666 /data/config/app.conf /data/db/app.db && \
    chmod 1777 /tmp && \
    install -d -o root -g root -m 1777 /tmp/.X11-unix && \
    mkdir -p /home/netalertx && \
    chown netalertx:netalertx /home/netalertx && \
    sed -i -e 's#/app:#/workspaces:#' /etc/passwd && \
    find /opt/venv -type d -exec chmod o+rwx {} \;

USER netalertx
ENTRYPOINT ["/bin/sh","-c","sleep infinity"]
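The three stages documented in the header comments can also be built individually. A minimal sketch using plain `docker build` from the repository root (the image tags are arbitrary; the compose command is the one quoted in the file's own comment):

```sh
# Hardened production image (the final stage)
docker build --target hardened -t netalertx:hardened .

# Runner stage only, i.e. the base the devcontainer stage extends
docker build --target runner -t netalertx:runner .

# Or, as the in-file comment suggests, build and start via compose
docker-compose -f docker-compose.yml up --build --force-recreate
```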
.devcontainer/NetAlertX.code-workspace (Normal file, 37 changed lines)
@@ -0,0 +1,37 @@
{
  "folders": [
    {
      "name": "NetAlertX Source",
      "path": "/workspaces/NetAlertX"
    },
    {
      "name": "💾 NetAlertX Data",
      "path": "/data"
    },
    {
      "name": "🔍 Active NetAlertX log",
      "path": "/tmp/log"
    },
    {
      "name": "🌐 Active NetAlertX nginx",
      "path": "/tmp/nginx"
    },
    {
      "name": "📊 Active NetAlertX api",
      "path": "/tmp/api"
    },
    {
      "name": "⚙️ Active NetAlertX run",
      "path": "/tmp/run"
    }
  ],
  "settings": {
    "terminal.integrated.suggest.enabled": true,
    "terminal.integrated.defaultProfile.linux": "zsh",
    "terminal.integrated.profiles.linux": {
      "zsh": {
        "path": "/usr/bin/fish"
      }
    }
  }
}
.devcontainer/README.md (Executable file, 41 changed lines)
@@ -0,0 +1,41 @@

# NetAlertX Devcontainer Notes

This devcontainer replicates the production container as closely as practical, with a few development-oriented differences.

Key behavior
- No init process: Services are managed by shell scripts using killall, setsid, and nohup. Startup and restarts are script-driven rather than supervised by an init system.
- Autogenerated Dockerfile: The effective devcontainer Dockerfile is generated on demand by `.devcontainer/scripts/generate-dockerfile.sh`. It combines the root `Dockerfile` (with certain COPY instructions removed) and an extra "devcontainer" stage from `.devcontainer/resources/devcontainer-Dockerfile`. When you change the resource Dockerfile, re-run the generator to refresh `.devcontainer/Dockerfile`.
- Where to put setup: Prefer baking setup into `.devcontainer/resources/devcontainer-Dockerfile`. Use `.devcontainer/scripts/setup.sh` only for steps that must happen at container start (e.g., cleaning up nginx/php ownership, creating directories, touching runtime files) or depend on runtime paths.

Debugging (F5)
The frontend and backend always run in debug mode; you can attach your debugger at any time.
- Python Backend Debug: Attach - The backend runs with a debugger on port 5678. Set breakpoints in the code and press F5 to begin hitting them.
- PHP Frontend (Xdebug): Xdebug listens on 9003. Start listening and use an Xdebug extension in your browser to debug PHP.

Common workflows (F1 -> Tasks: Run Task)
- Regenerate the devcontainer Dockerfile: Run the VS Code task "Generate Dockerfile" or execute `.devcontainer/scripts/generate-dockerfile.sh`. The result is `.devcontainer/Dockerfile`.
- Re-run startup provisioning: Use the task "Re-Run Startup Script" to execute `.devcontainer/scripts/setup.sh` in the container.
- Start services:
  - Backend (GraphQL/Flask): `.devcontainer/scripts/restart-backend.sh` starts it under debugpy and logs to `/app/log/app.log`.
  - Frontend (nginx + PHP-FPM): Started via setup.sh; can be restarted by the task "Start Frontend (nginx and PHP-FPM)".

Production Container Evaluation
1. F1 → Tasks: Shutdown services ([Dev Container] Stop Frontend & Backend Services)
2. F1 → Tasks: Docker system and build prune ([Any] Docker system and build Prune)
3. F1 → Remote: Close Unused Forwarded Ports (VS Code command)
4. F1 → Tasks: Build & Launch Production (Build & Launch Production Docker)
5. Visit http://localhost:20211

Unit tests
1. F1 → Tasks: Rebuild test container ([Any] Build Unit Test Docker image)
2. F1 → Test: Run all tests

Testing
- pytest is installed via Alpine packages (py3-pytest, py3-pytest-cov).
- PYTHONPATH includes workspace and venv site-packages so tests can import `server/*` modules and third-party libs.
- Run tests via VS Code Pytest Runner or `pytest -q` from the workspace root.

Conventions
- Don’t edit `.devcontainer/Dockerfile` directly; edit `.devcontainer/resources/devcontainer-Dockerfile` and regenerate.
- Keep setup in the resource Dockerfile when possible; reserve `setup.sh` for runtime fixes.
- Avoid hardcoding ports/secrets; prefer existing settings and helpers (see `.github/copilot-instructions.md`).
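For reference, the same workflow from a terminal instead of VS Code tasks, as a hedged sketch (the generator script shipped in this change is `.devcontainer/scripts/generate-configs.sh`, while the notes above refer to it as `generate-dockerfile.sh`):

```sh
cd /workspaces/NetAlertX

# Regenerate .devcontainer/Dockerfile from the root Dockerfile plus the dev-only stage
sh .devcontainer/scripts/generate-configs.sh

# Re-run startup provisioning (equivalent to the "Re-Run Startup Script" task)
bash .devcontainer/scripts/setup.sh

# Run the test suite quietly from the workspace root
pytest -q
```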
.devcontainer/WORKSPACE.md (Normal file, 26 changed lines)
@@ -0,0 +1,26 @@

# NetAlertX Multi-Folder Workspace

This repository uses a multi-folder workspace configuration to provide easy access to runtime directories.

## Opening the Multi-Folder Workspace

After the devcontainer builds, open the workspace file to access all folders:

1. **File** → **Open Workspace from File**
2. Select `NetAlertX.code-workspace`

Or use Command Palette (Ctrl+Shift+P / Cmd+Shift+P):
- Type: `Workspaces: Open Workspace from File`
- Select `NetAlertX.code-workspace`

## Workspace Folders

The workspace includes:
- **NetAlertX** - Main source code
- **/tmp** - Runtime temporary files
- **/tmp/api** - API response cache (JSON files)
- **/tmp/log** - Application and plugin logs

## Testing Configuration

Pytest is configured to only discover tests in the main `test/` directory, not in `/tmp` folders.
.devcontainer/devcontainer.json (Executable file, 109 changed lines)
@@ -0,0 +1,109 @@
{
  "name": "NetAlertX DevContainer",
  "remoteUser": "netalertx",
  "workspaceFolder": "/workspaces/NetAlertX",
  "workspaceMount": "source=${localWorkspaceFolder},target=/workspaces/NetAlertX,type=bind,consistency=cached",
  "onCreateCommand": "mkdir -p /tmp/api /tmp/log",
  "build": {
    "dockerfile": "./Dockerfile", // Dockerfile generated by script
    "context": "../", // Context is the root of the repository
    "target": "netalertx-devcontainer"
  },
  "capAdd": [
    "SYS_ADMIN", // For mounting ramdisks
    "NET_ADMIN", // For network interface configuration
    "NET_RAW" // For raw packet manipulation
  ],
  "runArgs": [
    "--security-opt",
    "apparmor=unconfined", // for allowing ramdisk mounts
    "--add-host=host.docker.internal:host-gateway"

    // Uncomment --network=host to run NetAlertX's full network-scanning capabilities in the
    // container. This runs too slowly in a large network to be practical for development purposes.
    // You can start services such as avahi on the host, in other containers within the network, or
    // even within this container and connect to them as needed.
    // "--network=host",
  ],
  "mounts": [
    "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind" // used for testing various conditions in docker
  ],
  // ATTENTION: If running with --network=host, COMMENT `forwardPorts` OR ELSE THERE WILL BE NO WEBUI!
  "forwardPorts": [20211, 20212, 5678],
  "portsAttributes": { // the ports we care about
    "20211": {
      "label": "Frontend:Nginx+PHP"
    },
    "20212": {
      "label": "Backend:GraphQL"
    },
    "9003": {
      "label": "PHP Debug:Xdebug"
    },
    "5678": {
      "label": "Python Debug:debugpy"
    }
  },

  "postCreateCommand": {
    "Install Pip Requirements": "/opt/venv/bin/pip3 install pytest docker debugpy",
    "Workspace Instructions": "printf '\n\n<> DevContainer Ready!\n\n📁 To access /tmp folders in the workspace:\n File → Open Workspace from File → NetAlertX.code-workspace\n\n📖 See .devcontainer/WORKSPACE.md for details\n\n'"
  },
  "postStartCommand": {
    "Start Environment":"${containerWorkspaceFolder}/.devcontainer/scripts/setup.sh",
    "Build test-container":"echo building netalertx-test container in background. check /tmp/build.log for progress. && setsid docker buildx build -t netalertx-test . > /tmp/build.log 2>&1 &"
  },
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-python.python",
        "ms-azuretools.vscode-docker",
        "felixfbecker.php-debug",
        "bmewburn.vscode-intelephense-client",
        "xdebug.php-debug",
        "ms-python.vscode-pylance",
        "pamaron.pytest-runner",
        "coderabbit.coderabbit-vscode",
        "ms-python.black-formatter",
        "jeff-hykin.better-dockerfile-syntax",
        "GitHub.codespaces",
        "ms-azuretools.vscode-containers",
        "ms-python.vscode-python-envs",
        "dbaeumer.vscode-eslint",
        "esbenp.prettier-vscode",
        "eamodio.gitlens",
        "alexcvzz.vscode-sqlite",
        "mkhl.shfmt",
        "charliermarsh.ruff",
        "ms-python.flake8",
        "exiasr.hadolint",
        "timonwong.shellcheck"
      ],
      "settings": {
        "terminal.integrated.cwd": "${containerWorkspaceFolder}",
        "terminal.integrated.profiles.linux": {
          "zsh": {
            "path": "/bin/zsh",
            "args": ["-l"]
          }
        },
        "terminal.integrated.defaultProfile.linux": "zsh",

        // Python testing configuration
        "python.testing.pytestEnabled": true,
        "python.testing.unittestEnabled": false,
        "python.testing.pytestArgs": ["test"],
        "python.testing.cwd": "${containerWorkspaceFolder}",
        // Make sure we discover tests and import server correctly
        "python.analysis.extraPaths": [
          "/workspaces/NetAlertX",
          "/workspaces/NetAlertX/server",
          "/app",
          "/app/server"
        ]
      }
    }
  },

  "shutdownAction": "stopContainer" // stop container when VSCode is closed
}
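A quick smoke test of the forwarded ports declared above, run from a terminal inside the devcontainer (a sketch; it assumes the services have already been started by `setup.sh` and that `curl` and BusyBox `nc` are available in the image):

```sh
# Frontend (nginx + PHP) should answer HTTP on 20211
curl -sS -o /dev/null -w 'frontend: HTTP %{http_code}\n' http://localhost:20211/

# Backend GraphQL port and the debugpy listener are plain TCP checks
nc -z localhost 20212 && echo 'backend: listening on 20212'
nc -z localhost 5678 && echo 'debugpy: listening on 5678'
```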
.devcontainer/resources/devcontainer-Dockerfile (Executable file, 55 changed lines)
@@ -0,0 +1,55 @@

# Devcontainer build stage (do not build directly)
# This file is combined with the root /Dockerfile by
# .devcontainer/scripts/generate-configs.sh
# The generator appends this stage to produce .devcontainer/Dockerfile.
# Prefer to place dev-only setup here; use setup.sh only for runtime fixes.
# Permissions in devcontainer should be of a brutalist nature. They will be
# open and wide to avoid permission issues during development, allowing max
# flexibility.

# hadolint ignore=DL3006
FROM runner AS netalertx-devcontainer
ENV INSTALL_DIR=/app

ENV PYTHONPATH=${PYTHONPATH}:/workspaces/NetAlertX/test:/workspaces/NetAlertX/server:/usr/lib/python3.12/site-packages
ENV PATH=/services:${PATH}
ENV PHP_INI_SCAN_DIR=/services/config/php/conf.d:/etc/php83/conf.d
ENV LISTEN_ADDR=0.0.0.0
ENV PORT=20211
ENV NETALERTX_DEBUG=1
ENV PYDEVD_DISABLE_FILE_VALIDATION=1
COPY .devcontainer/resources/devcontainer-overlay/ /
USER root
# Install common tools, create user, and set up sudo

RUN apk add --no-cache git nano vim jq php83-pecl-xdebug py3-pip nodejs sudo gpgconf pytest \
    pytest-cov zsh alpine-zsh-config shfmt github-cli py3-yaml py3-docker-py docker-cli docker-cli-buildx \
    docker-cli-compose shellcheck

# Install hadolint (Dockerfile linter)
RUN curl -L https://github.com/hadolint/hadolint/releases/latest/download/hadolint-Linux-x86_64 -o /usr/local/bin/hadolint && \
    chmod +x /usr/local/bin/hadolint

RUN install -d -o netalertx -g netalertx -m 755 /services/php/modules && \
    cp -a /usr/lib/php83/modules/. /services/php/modules/ && \
    echo "${NETALERTX_USER} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
ENV SHELL=/bin/zsh

RUN mkdir -p /workspaces && \
    install -d -m 777 /data /data/config /data/db && \
    install -d -m 777 /tmp/log /tmp/log/plugins /tmp/api /tmp/run /tmp/nginx && \
    install -d -m 777 /tmp/nginx/active-config /tmp/nginx/client_body /tmp/nginx/config && \
    install -d -m 777 /tmp/nginx/fastcgi /tmp/nginx/proxy /tmp/nginx/scgi /tmp/nginx/uwsgi && \
    install -d -m 777 /tmp/run/tmp /tmp/run/logs && \
    chmod 777 /workspaces && \
    chown -R netalertx:netalertx /data && \
    chmod 666 /data/config/app.conf /data/db/app.db && \
    chmod 1777 /tmp && \
    install -d -o root -g root -m 1777 /tmp/.X11-unix && \
    mkdir -p /home/netalertx && \
    chown netalertx:netalertx /home/netalertx && \
    sed -i -e 's#/app:#/workspaces:#' /etc/passwd && \
    find /opt/venv -type d -exec chmod o+rwx {} \;

USER netalertx
ENTRYPOINT ["/bin/sh","-c","sleep infinity"]
@@ -0,0 +1,11 @@
zend_extension="/services/php/modules/xdebug.so"
extension_dir="/services/php/modules"

[xdebug]
xdebug.mode=develop,debug
xdebug.log=/app/log/xdebug.log
xdebug.log_level=7
xdebug.client_host=127.0.0.1
xdebug.client_port=9003
xdebug.start_with_request=yes
xdebug.discover_client_host=0
@@ -0,0 +1 @@
-m debugpy --listen 0.0.0.0:5678
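With the overlay settings above, Xdebug starts on every request (`xdebug.start_with_request=yes`) and the backend is launched with `-m debugpy --listen 0.0.0.0:5678`, so verifying both from inside the container is mostly a matter of poking them. A hedged sketch (assumes the frontend and backend are already running):

```sh
# Any frontend request should now trigger an Xdebug connection attempt to
# 127.0.0.1:9003; failures are recorded in the log path configured above.
curl -sS -o /dev/null http://localhost:20211/
tail -n 20 /app/log/xdebug.log

# The Python debugger is a plain TCP listener; a VS Code "attach" configuration
# pointed at localhost:5678 should connect once this succeeds.
nc -z localhost 5678 && echo 'debugpy is accepting connections'
```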
@@ -0,0 +1,47 @@
# NetAlertX devcontainer zsh configuration
# Keep this lightweight and deterministic so shells behave consistently.

export PATH="$HOME/.local/bin:$PATH"
export EDITOR=vim
export SHELL=/bin/zsh

# Start inside the workspace if it exists
if [ -d "/workspaces/NetAlertX" ]; then
  cd /workspaces/NetAlertX
fi

# Enable basic completion and prompt helpers
autoload -Uz compinit promptinit colors
colors
compinit -u
promptinit

# Friendly prompt with virtualenv awareness
setopt PROMPT_SUBST

_venv_segment() {
  if [ -n "$VIRTUAL_ENV" ]; then
    printf '(%s) ' "${VIRTUAL_ENV:t}"
  fi
}

PROMPT='%F{green}$(_venv_segment)%f%F{cyan}%n@%m%f %F{yellow}%~%f %# '
RPROMPT='%F{magenta}$(git rev-parse --abbrev-ref HEAD 2>/dev/null)%f'

# Sensible defaults
setopt autocd
setopt correct
setopt extendedglob
HISTFILE="$HOME/.zsh_history"
HISTSIZE=5000
SAVEHIST=5000

alias ll='ls -alF'
alias la='ls -A'
alias gs='git status -sb'
alias gp='git pull --ff-only'

# Ensure pyenv/virtualenv activate hooks adjust the prompt cleanly
if [ -f "$HOME/.zshrc.local" ]; then
  source "$HOME/.zshrc.local"
fi
.devcontainer/scripts/confirm-docker-prune.sh (Executable file, 18 changed lines)
@@ -0,0 +1,18 @@
#!/bin/bash
set -euo pipefail

if [[ -n "${CONFIRM_PRUNE:-}" && "${CONFIRM_PRUNE}" == "YES" ]]; then
  reply="YES"
else
  read -r -p "Are you sure you want to destroy your host docker containers and images? Type YES to continue: " reply
fi

if [[ "${reply}" == "YES" ]]; then
  docker system prune -af
  docker builder prune -af
else
  echo "Aborted."
  exit 1
fi

echo "Done."
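Typical invocations, based on the `CONFIRM_PRUNE` check in the script above:

```sh
# Non-interactive (e.g. from a VS Code task): skip the prompt and prune immediately
CONFIRM_PRUNE=YES .devcontainer/scripts/confirm-docker-prune.sh

# Interactive: run it and type YES when prompted
.devcontainer/scripts/confirm-docker-prune.sh
```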
.devcontainer/scripts/generate-configs.sh (Executable file, 34 changed lines)
@@ -0,0 +1,34 @@
#!/bin/sh

# Generator for .devcontainer/Dockerfile
# Combines the root /Dockerfile (with some COPY lines removed) and
# the dev-only stage in .devcontainer/resources/devcontainer-Dockerfile.
# Run this script after modifying the resource Dockerfile to refresh
# the final .devcontainer/Dockerfile used by the devcontainer.

echo "Generating .devcontainer/Dockerfile"
SCRIPT_PATH=$(set -- "$0"; dirname -- "$1")
SCRIPT_DIR=$(cd "$SCRIPT_PATH" && pwd -P)
DEVCONTAINER_DIR="${SCRIPT_DIR%/scripts}"
ROOT_DIR="${DEVCONTAINER_DIR%/.devcontainer}"

OUT_FILE="${DEVCONTAINER_DIR}/Dockerfile"

echo "Adding base Dockerfile from $ROOT_DIR and merging to devcontainer-Dockerfile"
{

  echo "# DO NOT MODIFY THIS FILE DIRECTLY. IT IS AUTO-GENERATED BY .devcontainer/scripts/generate-configs.sh"
  echo ""
  echo "# ---/Dockerfile---"

  cat "${ROOT_DIR}/Dockerfile"

  echo ""
  echo "# ---/resources/devcontainer-Dockerfile---"
  echo ""
  cat "${DEVCONTAINER_DIR}/resources/devcontainer-Dockerfile"
} > "$OUT_FILE"

echo "Generated $OUT_FILE using root dir $ROOT_DIR"

echo "Done."
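After regenerating, the devcontainer's bundled hadolint can lint the combined file before it is committed. A sketch, assuming it is run from the repository root inside the devcontainer:

```sh
sh .devcontainer/scripts/generate-configs.sh

# Lint the generated Dockerfile
hadolint .devcontainer/Dockerfile

# Sanity check that both source parts made it into the output
grep -n '^# ---/' .devcontainer/Dockerfile
```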
.devcontainer/scripts/isDevContainer.sh (Executable file, 8 changed lines)
@@ -0,0 +1,8 @@
#!/bin/bash
if [ ! -d /workspaces/NetAlertX/.devcontainer ]; then
  echo ---------------------------------------------------
  echo "This script may only be run inside a devcontainer."
  echo "Not in a devcontainer, exiting..."
  echo ---------------------------------------------------
  exit 255
fi
.devcontainer/scripts/run-tests.sh (Executable file, 13 changed lines)
@@ -0,0 +1,13 @@
#!/bin/sh
# shellcheck shell=sh
# Simple helper to run pytest inside the devcontainer with correct paths
set -eu

# Ensure we run from the workspace root
cd /workspaces/NetAlertX

# Make sure PYTHONPATH includes server and workspace
export PYTHONPATH="/workspaces/NetAlertX:/workspaces/NetAlertX/server:/app:/app/server:${PYTHONPATH:-}"

# Default to running the full test suite under /workspaces/NetAlertX/test
pytest -q --maxfail=1 --disable-warnings test "$@"
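Because the script forwards `"$@"` to pytest, extra options can be appended directly; the `-k` expression below is just an illustrative filter:

```sh
# Full suite
.devcontainer/scripts/run-tests.sh

# Only tests matching a keyword, with local variables shown on failure
.devcontainer/scripts/run-tests.sh -k "device" --showlocals
```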
.devcontainer/scripts/setup.sh (Executable file, 104 changed lines)
@@ -0,0 +1,104 @@
#!/bin/bash
# NetAlertX Devcontainer Setup Script
#
# This script forcefully resets all runtime state for a single-user devcontainer.
# It is intentionally idempotent: every run wipes and recreates all relevant folders,
# symlinks, and files, so the environment is always fresh and predictable.
#
# - No conditional logic: everything is (re)created, overwritten, or reset unconditionally.
# - No security hardening: this is for disposable, local dev use only.
# - No checks for existing files, mounts, or processes—just do the work.
#
# If you add new runtime files or folders, add them to the creation/reset section below.
#
# Do not add if-then logic or error handling for missing/existing files. Simplicity is the goal.


SOURCE_DIR=${SOURCE_DIR:-/workspaces/NetAlertX}
PY_SITE_PACKAGES="${VIRTUAL_ENV:-/opt/venv}/lib/python3.12/site-packages"

LOG_FILES=(
  LOG_APP
  LOG_APP_FRONT
  LOG_STDOUT
  LOG_STDERR
  LOG_EXECUTION_QUEUE
  LOG_APP_PHP_ERRORS
  LOG_IP_CHANGES
  LOG_CRON
  LOG_REPORT_OUTPUT_TXT
  LOG_REPORT_OUTPUT_HTML
  LOG_REPORT_OUTPUT_JSON
  LOG_DB_IS_LOCKED
  LOG_NGINX_ERROR
)

sudo chmod 666 /var/run/docker.sock 2>/dev/null || true
sudo chown "$(id -u)":"$(id -g)" /workspaces
sudo chmod 755 /workspaces

killall php-fpm83 nginx crond python3 2>/dev/null || true

# Mount ramdisks for volatile data
sudo mount -t tmpfs -o size=100m,mode=0777 tmpfs /tmp/log 2>/dev/null || true
sudo mount -t tmpfs -o size=50m,mode=0777 tmpfs /tmp/api 2>/dev/null || true
sudo mount -t tmpfs -o size=50m,mode=0777 tmpfs /tmp/run 2>/dev/null || true
sudo mount -t tmpfs -o size=50m,mode=0777 tmpfs /tmp/nginx 2>/dev/null || true

sudo chmod 777 /tmp/log /tmp/api /tmp/run /tmp/nginx


sudo rm -rf /entrypoint.d
sudo ln -s "${SOURCE_DIR}/install/production-filesystem/entrypoint.d" /entrypoint.d

sudo rm -rf "${NETALERTX_APP}"
sudo ln -s "${SOURCE_DIR}/" "${NETALERTX_APP}"

for dir in "${NETALERTX_DATA}" "${NETALERTX_CONFIG}" "${NETALERTX_DB}"; do
  sudo install -d -m 777 "${dir}"
done

for dir in \
  "${SYSTEM_SERVICES_RUN_LOG}" \
  "${SYSTEM_SERVICES_ACTIVE_CONFIG}" \
  "${NETALERTX_PLUGINS_LOG}" \
  "${SYSTEM_SERVICES_RUN_TMP}" \
  "/tmp/nginx/client_body" \
  "/tmp/nginx/proxy" \
  "/tmp/nginx/fastcgi" \
  "/tmp/nginx/uwsgi" \
  "/tmp/nginx/scgi"; do
  sudo install -d -m 777 "${dir}"
done


for var in "${LOG_FILES[@]}"; do
  path=${!var}
  dir=$(dirname "${path}")
  sudo install -d -m 777 "${dir}"
  touch "${path}"
done

printf '0\n' | sudo tee "${LOG_DB_IS_LOCKED}" >/dev/null
sudo chmod 777 "${LOG_DB_IS_LOCKED}"

sudo pkill -f python3 2>/dev/null || true

sudo chmod 777 "${PY_SITE_PACKAGES}" "${NETALERTX_DATA}" "${NETALERTX_DATA}"/* 2>/dev/null || true

sudo chmod 005 "${PY_SITE_PACKAGES}" 2>/dev/null || true

sudo chown -R "${NETALERTX_USER}:${NETALERTX_GROUP}" "${NETALERTX_APP}"
date +%s | sudo tee "${NETALERTX_FRONT}/buildtimestamp.txt" >/dev/null

sudo chmod 755 "${NETALERTX_APP}"

sudo chmod +x /entrypoint.sh
setsid bash /entrypoint.sh &
sleep 1

echo "Development $(git rev-parse --short=8 HEAD)" | sudo tee "${NETALERTX_APP}/.VERSION" >/dev/null
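A few hedged checks that the reset above left the environment in the expected state (paths follow the environment variables defined in the Dockerfile; run after `setup.sh` completes):

```sh
# Ramdisks mounted for volatile runtime state
mount | grep -E '/tmp/(log|api|run|nginx)'

# /app should be a symlink pointing at the workspace checkout
ls -ld /app

# Version stamp written at the end of setup.sh
cat /app/.VERSION
```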
@@ -1,8 +1,8 @@
.dockerignore
**/.dockerignore
.env
.git
.github
.gitignore
docker-compose.yml
Dockerfile
Dockerfile.debian
.flake8 (Normal file, 3 changed lines)
@@ -0,0 +1,3 @@
[flake8]
max-line-length = 180
ignore = E221,E222,E251,E203
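With this file at the repository root, flake8 picks the settings up automatically. A sketch of a typical invocation (assumes flake8 is installed in the environment; the target paths follow the repo layout described in `.github/copilot-instructions.md`):

```sh
# Lint the backend and the plugin scripts with the project's relaxed line length
flake8 server front/plugins
```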
.github/FUNDING.yml (vendored, 2 changed lines)
@@ -1,3 +1,3 @@
github: jokob-sk
patreon: 84385063
patreon: netalertx
buy_me_a_coffee: jokobsk
.github/ISSUE_TEMPLATE/i-have-an-issue.yml (vendored, 38 changed lines)
@@ -44,9 +44,9 @@ body:
      required: false
  - type: textarea
    attributes:
      label: app.conf
      label: Relevant `app.conf` settings
      description: |
        Paste your `app.conf` (remove personal info)
        Paste relevant `app.conf` settings (remove sensitive info)
      render: python
    validations:
      required: false
@@ -55,7 +55,7 @@ body:
      label: docker-compose.yml
      description: |
        Paste your `docker-compose.yml`
      render: python
      render: yaml
    validations:
      required: false
  - type: dropdown
@@ -70,21 +70,37 @@ body:
        - Bare-metal (community only support - Check Discord)
    validations:
      required: true
  - type: checkboxes
    attributes:
      label: Debug or Trace enabled
      description: I confirm I set `LOG_LEVEL` to `debug` or `trace`
      options:
        - label: I have read and followed the steps in the wiki link above and provided the required debug logs and the log section covers the time when the issue occurs.
          required: true
  - type: textarea
    attributes:
      label: app.log
      label: Relevant `app.log` section
      value: |
        ```
        PASTE LOG HERE. Using the triple backticks preserves format.
        ```
      description: |
        Logs with debug enabled (https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEBUG_TIPS.md) ⚠
        ***Generally speaking, all bug reports should have logs provided.***
        Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
        Additionally, any additional info? Screenshots? References? Anything that will give us more context about the issue you are encountering!
        You can use `tail -100 /app/log/app.log` in the container if you have trouble getting to the log files.
        You can use `tail -100 /app/log/app.log` in the container if you have trouble getting to the log files or send them to netalertx@gmail.com with the issue number.
    validations:
      required: false
  - type: checkboxes
  - type: textarea
    attributes:
      label: Debug enabled
      description: I confirm I enabled `debug`
      options:
        - label: I have read and followed the steps in the wiki link above and provided the required debug logs and the log section covers the time when the issue occurs.
          required: true
      label: Docker Logs
      description: |
        You can retrieve the logs from Portainer -> Containers -> your NetAlertX container -> Logs or by running `sudo docker logs netalertx`.
      value: |
        ```
        PASTE DOCKER LOG HERE. Using the triple backticks preserves format.
        ```
    validations:
      required: true
91
.github/copilot-instructions.md
vendored
Executable file
91
.github/copilot-instructions.md
vendored
Executable file
@@ -0,0 +1,91 @@
|
||||
# NetAlertX AI Assistant Instructions
|
||||
This is NetAlertX — network monitoring & alerting. NetAlertX provides Network inventory, awareness, insight, categorization, intruder and presence detection. This is a heavily community-driven project, welcoming of all contributions.
|
||||
|
||||
You are expected to be concise, opinionated, and biased toward security and simplicity.
|
||||
|
||||
## Architecture (what runs where)
|
||||
- Backend (Python): main loop + GraphQL/REST endpoints orchestrate scans, plugins, workflows, notifications, and JSON export.
|
||||
- Key: `server/__main__.py`, `server/plugin.py`, `server/initialise.py`, `server/api_server/api_server_start.py`
|
||||
- Data (SQLite): persistent state in `db/app.db`; helpers in `server/database.py` and `server/db/*`.
|
||||
- Frontend (Nginx + PHP + JS): UI reads JSON, triggers execution queue events.
|
||||
- Key: `front/`, `front/js/common.js`, `front/php/server/*.php`
|
||||
- Plugins (Python): acquisition/enrichment/publishers under `front/plugins/*` with `config.json` manifests.
|
||||
- Messaging/Workflows: `server/messaging/*`, `server/workflows/*`
|
||||
- API JSON Cache for UI: generated under `api/*.json`
|
||||
|
||||
Backend loop phases (see `server/__main__.py` and `server/plugin.py`): `once`, `schedule`, `always_after_scan`, `before_name_updates`, `on_new_device`, `on_notification`, plus ad‑hoc `run` via execution queue. Plugins execute as scripts that write result logs for ingestion.
|
||||
|
||||
## Plugin patterns that matter
|
||||
- Manifest lives at `front/plugins/<code_name>/config.json`; `code_name` == folder, `unique_prefix` drives settings and filenames (e.g., `ARPSCAN`).
|
||||
- Control via settings: `<PREF>_RUN` (phase), `<PREF>_RUN_SCHD` (cron-like), `<PREF>_CMD` (script path), `<PREF>_RUN_TIMEOUT`, `<PREF>_WATCH` (diff columns).
|
||||
- Data contract: scripts write `/tmp/log/plugins/last_result.<PREF>.log` (pipe‑delimited: 9 required cols + optional 4). Use `front/plugins/plugin_helper.py`’s `Plugin_Objects` to sanitize text and normalize MACs, then `write_result_file()`.
|
||||
- Device import: define `database_column_definitions` when creating/updating devices; watched fields trigger notifications.
|
||||
|
||||
### Standard Plugin Formats
|
||||
* publisher: Sends notifications to services. Runs `on_notification`. Data source: self.
|
||||
* dev scanner: Creates devices and manages online/offline status. Runs on `schedule`. Data source: self / SQLite DB.
|
||||
* name discovery: Discovers device names via various protocols. Runs `before_name_updates` or on `schedule`. Data source: self.
|
||||
* importer: Imports devices from another service. Runs on `schedule`. Data source: self / SQLite DB.
|
||||
* system: Provides core system functionality. Runs on `schedule` or is always on. Data source: self / Template.
|
||||
* other: Miscellaneous plugins. Runs at various times. Data source: self / Template.
|
||||
|
||||
### Plugin logging & outputs
|
||||
- Always check relevant logs first.
|
||||
- Use logging as shown in other plugins.
|
||||
- Collect results with `Plugin_Objects.add_object(...)` during processing and call `plugin_objects.write_result_file()` exactly once at the end of the script.
|
||||
- Prefer to log a brief summary before writing (e.g., total objects added) to aid troubleshooting; keep logs concise at `info` level and use `verbose` or `debug` for extra context.
|
||||
|
||||
- Do not write ad‑hoc files for results; the only consumable output is `last_result.<PREF>.log` generated by `Plugin_Objects`.
|
||||
## API/Endpoints quick map
|
||||
- Flask app: `server/api_server/api_server_start.py` exposes routes like `/device/<mac>`, `/devices`, `/devices/export/{csv,json}`, `/devices/import`, `/devices/totals`, `/devices/by-status`, plus `nettools`, `events`, `sessions`, `dbquery`, `metrics`, `sync`.
|
||||
- Authorization: all routes expect header `Authorization: Bearer <API_TOKEN>` via `get_setting_value('API_TOKEN')` (example after this list).
|
||||
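A quick validation call against the `/devices` route; the base URL and port below are assumptions, so point them at wherever your API server actually listens.

```python
# Hypothetical smoke test for the REST endpoints; only the Authorization
# header format is taken from the notes above, the base URL is an assumption.
import requests

API_BASE = "http://localhost:20211"  # assumption: adjust to your deployment
API_TOKEN = "<API_TOKEN>"            # the API_TOKEN setting value

resp = requests.get(
    f"{API_BASE}/devices",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
print("status:", resp.status_code)
```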
|
||||
## Conventions & helpers to reuse
|
||||
- Settings: add/modify via `ccd()` in `server/initialise.py` or per-plugin manifest. Never hardcode ports or secrets; use `get_setting_value()` (see the snippet after this list).
|
||||
- Logging: use `logger.mylog(level, [message])`; levels: none/minimal/verbose/debug/trace.
|
||||
- Time/MAC/strings: `helper.py` (`timeNowDB`, `normalize_mac`, sanitizers). Validate MACs before DB writes.
|
||||
- DB helpers: prefer `server/db/db_helper.py` functions (e.g., `get_table_json`, device condition helpers) over raw SQL in new paths.
|
||||
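A small combined illustration of those helpers; the import paths are assumptions based on the `server/` layout, so mirror whatever existing plugins and server modules import.

```python
# Sketch only: setting lookup, logging, and MAC normalization together.
# Import paths are assumptions; copy them from existing code, not from here.
from logger import mylog                                  # logger.mylog(level, [message])
from helper import get_setting_value, normalize_mac, timeNowDB

api_token = get_setting_value("API_TOKEN")                # never hardcode secrets

mac = normalize_mac("AA-BB-CC-DD-EE-FF")                  # validate before DB writes
mylog("verbose", ["[MYPLUGIN] normalized MAC", mac, "at", timeNowDB()])
```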
|
||||
## Dev workflow (devcontainer)
|
||||
- **Devcontainer philosophy: brutal simplicity.** One user, everything writable, completely idempotent. No permission checks, no conditional logic, no sudo needed. If something doesn't work, tear down the wall and rebuild - don't patch. We unit test permissions in the hardened build.
|
||||
- **Permissions:** Never `chmod` or `chown` during operations. Everything is already writable. If you need permissions, the devcontainer setup is broken - fix `.devcontainer/scripts/setup.sh` or `.devcontainer/resources/devcontainer-Dockerfile` instead.
|
||||
- **Files & Paths:** Use environment variables (`NETALERTX_DB`, `NETALERTX_LOG`, etc.) everywhere. `/data` for persistent config/db, `/tmp` for runtime logs/api/nginx state. Never hardcode `/data/db` or relative paths.
|
||||
- **Database reset:** Use the `[Dev Container] Wipe and Regenerate Database` task. Kills backend, deletes `/data/{db,config}/*`, runs first-time setup scripts. Clean slate, no questions.
|
||||
- Services: use tasks to (re)start backend and nginx/PHP-FPM. Backend runs with debugpy on 5678; attach a Python debugger if needed.
|
||||
- Run a plugin manually: `python3 front/plugins/<code_name>/script.py` (ensure `sys.path` includes `/app/front/plugins` and `/app/server` like the template).
|
||||
- Testing: pytest is available via Alpine packages. Tests live in `test/`; app code is under `server/`. PYTHONPATH is preconfigured to include the workspace and `/opt/venv` site-packages (see the test sketch after this list).
|
||||
- **Subprocess calls:** ALWAYS set explicit timeouts. Default to 60s minimum unless plugin config specifies otherwise. Nested subprocess calls (e.g., plugins calling external tools) need their own timeout; the outer plugin timeout won't save you (see the timeout sketch after this list).
|
||||
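A sketch of what a unit test under `test/` can look like; the import path and the exact normalization behaviour asserted here are assumptions, so pin them to the real helper when writing the actual test.

```python
# test/test_normalize_mac_sketch.py (hypothetical file name)
# Adjust the import to match existing tests; depending on PYTHONPATH it may
# need to be e.g. `from server.helper import normalize_mac`.
from helper import normalize_mac


def test_normalize_mac_keeps_the_hex_digits():
    mac = normalize_mac("AA:BB:CC:DD:EE:FF")
    # Assumption: separators/case may change, the twelve hex digits should not.
    assert mac.replace(":", "").replace("-", "").lower() == "aabbccddeeff"
```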
|
||||
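In code form (60s is the documented floor; the command is just an example of a tool the image ships with):

```python
# Every external call gets its own explicit timeout, even though the plugin
# itself also runs under <PREF>_RUN_TIMEOUT.
import subprocess


def run_tool(cmd, timeout=60):  # 60s minimum unless plugin config says otherwise
    try:
        return subprocess.run(
            cmd,
            capture_output=True,
            text=True,
            timeout=timeout,
            check=False,
        )
    except subprocess.TimeoutExpired:
        # Log and degrade gracefully instead of hanging the scan phase.
        return None


result = run_tool(["arp-scan", "--localnet"], timeout=60)
```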
## What “done right” looks like
|
||||
- When adding a plugin, start from `front/plugins/__template`, implement with `plugin_helper`, define manifest settings, and wire the phase via `<PREF>_RUN`. Verify logs in `/tmp/log/plugins/` and data in `api/*.json` (see the validation sketch after this list).
|
||||
- When introducing new config, define it once (core `ccd()` or plugin manifest) and read it via helpers everywhere.
|
||||
- When exposing new server functionality, add endpoints in `server/api_server/*` and keep authorization consistent; update UI by reading/writing JSON cache rather than bypassing the pipeline.
|
||||
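A quick validation sketch in that spirit; the file names below (`last_result.MYPLUGIN.log`, `table_devices.json`) are placeholders, so substitute your plugin's prefix and whichever `api/*.json` file your change feeds.

```python
# Hypothetical post-change sanity check: the plugin wrote a result log and the
# JSON cache the UI reads still parses.
import json
import os

plugins_log_dir = os.getenv("NETALERTX_PLUGINS_LOG", "/tmp/log/plugins")
api_dir = os.getenv("NETALERTX_API", "/tmp/api")

result_log = os.path.join(plugins_log_dir, "last_result.MYPLUGIN.log")
print("result log present:", os.path.isfile(result_log))

cache_file = os.path.join(api_dir, "table_devices.json")  # placeholder name
if os.path.isfile(cache_file):
    with open(cache_file) as f:
        json.load(f)  # raises if the cache is malformed
    print("api cache parses:", cache_file)
```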
|
||||
## Useful references
|
||||
- Docs: `docs/PLUGINS_DEV.md`, `docs/SETTINGS_SYSTEM.md`, `docs/API_*.md`, `docs/DEBUG_*.md`
|
||||
- Logs: All logs are under `/tmp/log/`. Plugin result logs sit under `/tmp/log/plugins/` only briefly, until the server picks them up.
|
||||
- plugin logs: `/tmp/log/app.log`
|
||||
- backend logs: `/tmp/log/stdout.log` and `/tmp/log/stderr.log`
|
||||
- frontend commands logs: `/tmp/log/app_front.log`
|
||||
- php errors: `/tmp/log/app.php_errors.log`
|
||||
- nginx logs: `/tmp/log/nginx-access.log` and `/tmp/log/nginx-error.log`
|
||||
|
||||
## Assistant expectations
|
||||
- Be concise, opinionated, and biased toward security and simplicity.
|
||||
- Reference concrete files/paths/environment variables.
|
||||
- Use existing helpers/settings.
|
||||
- Offer a quick validation step (log line, API hit, or JSON export) for anything you add.
|
||||
- Be blunt about risks; when you offer suggestions, make them equally direct.
|
||||
- Ask for confirmation before making changes that run code or change multiple files.
|
||||
- Make statements actionable and specific; propose exact edits.
|
||||
- Request confirmation before applying changes that affect more than a single, clearly scoped line or file.
|
||||
- If you're unsure, ask the user to run a specific debug step and report the result so you have an actionable value.
|
||||
- Be sure to offer choices when appropriate.
|
||||
- Always understand the intent of the user's request and undo/redo as needed.
|
||||
- Above all, use the simplest possible code that meets the need so it can be easily audited and maintained.
|
||||
- Always leave logging enabled. If there is a possibility that the current logging will make an issue hard to debug, add more logging.
|
||||
- Always run the testFailure tool before executing any tests to gather current failure information and avoid redundant runs.
|
||||
- Always prioritize using the appropriate tools in the environment first. As an example if a test is failing use `testFailure` then `runTests`. Never `runTests` first.
|
||||
- Docker tests take an extremely long time to run. Avoid changes to Docker or tests until you've examined the existing testFailure and runTests results.
|
||||
- Environment tools are designed specifically for your use in this project and running them in this order will give you the best results.
|
||||
|
||||
60
.github/workflows/code_checks.yml
vendored
@@ -21,7 +21,8 @@ jobs:
|
||||
run: |
|
||||
echo "🔍 Checking for incorrect absolute '/php/' URLs (should be 'php/' or './php/')..."
|
||||
|
||||
MATCHES=$(grep -rE "['\"]\/php\/" --include=\*.{js,php,html} ./front | grep -E "\.get|\.post|\.ajax|fetch|url\s*:") || true
|
||||
MATCHES=$(grep -rE "['\"]/php/" --include=\*.{js,php,html} ./front \
|
||||
| grep -E "\.get|\.post|\.ajax|fetch|url\s*:") || true
|
||||
|
||||
if [ -n "$MATCHES" ]; then
|
||||
echo "$MATCHES"
|
||||
@@ -39,3 +40,60 @@ jobs:
|
||||
echo "🔍 Checking Python syntax..."
|
||||
find . -name "*.py" -print0 | xargs -0 -n1 python3 -m py_compile
|
||||
|
||||
lint:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Set up Python
|
||||
uses: actions/setup-python@v5
|
||||
with:
|
||||
python-version: '3.11'
|
||||
|
||||
- name: Install linting tools
|
||||
run: |
|
||||
# Python linting
|
||||
pip install flake8
|
||||
# Docker linting
|
||||
wget -O /tmp/hadolint https://github.com/hadolint/hadolint/releases/latest/download/hadolint-Linux-x86_64
|
||||
chmod +x /tmp/hadolint
|
||||
# PHP and shellcheck for syntax checking
|
||||
sudo apt-get update && sudo apt-get install -y php-cli shellcheck
|
||||
|
||||
- name: Shell check
|
||||
continue-on-error: true
|
||||
run: |
|
||||
echo "🔍 Checking shell scripts..."
|
||||
find . -name "*.sh" -exec shellcheck {} \;
|
||||
|
||||
- name: Python lint
|
||||
continue-on-error: true
|
||||
run: |
|
||||
echo "🔍 Linting Python code..."
|
||||
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
|
||||
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
|
||||
|
||||
- name: PHP check
|
||||
continue-on-error: true
|
||||
run: |
|
||||
echo "🔍 Checking PHP syntax..."
|
||||
find . -name "*.php" -exec php -l {} \;
|
||||
|
||||
- name: Docker lint
|
||||
continue-on-error: true
|
||||
run: |
|
||||
echo "🔍 Linting Dockerfiles..."
|
||||
/tmp/hadolint --config .hadolint.yaml Dockerfile* || true
|
||||
|
||||
docker-tests:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Run Docker-based tests
|
||||
run: |
|
||||
echo "🐳 Running Docker-based tests..."
|
||||
chmod +x ./test/docker_tests/run_docker_tests.sh
|
||||
./test/docker_tests/run_docker_tests.sh
|
||||
|
||||
23
.github/workflows/docker_dev.yml
vendored
@@ -10,7 +10,7 @@ on:
|
||||
branches:
|
||||
- main
|
||||
|
||||
jobs:
|
||||
jobs:
|
||||
docker_dev:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 30
|
||||
@@ -19,7 +19,8 @@ jobs:
|
||||
packages: write
|
||||
if: >
|
||||
contains(github.event.head_commit.message, 'PUSHPROD') != 'True' &&
|
||||
github.repository == 'jokob-sk/NetAlertX'
|
||||
github.repository == 'jokob-sk/NetAlertX'
|
||||
|
||||
steps:
|
||||
- name: Checkout
|
||||
uses: actions/checkout@v4
|
||||
@@ -30,26 +31,36 @@ jobs:
|
||||
- name: Set up Docker Buildx
|
||||
uses: docker/setup-buildx-action@v3
|
||||
|
||||
# --- Generate timestamped dev version
|
||||
- name: Generate timestamp version
|
||||
id: timestamp
|
||||
run: |
|
||||
ts=$(date -u +'%Y%m%d-%H%M%S')
|
||||
echo "version=dev-${ts}" >> $GITHUB_OUTPUT
|
||||
echo "Generated version: dev-${ts}"
|
||||
|
||||
- name: Set up dynamic build ARGs
|
||||
id: getargs
|
||||
id: getargs
|
||||
run: echo "version=$(cat ./stable/VERSION)" >> $GITHUB_OUTPUT
|
||||
|
||||
- name: Get release version
|
||||
id: get_version
|
||||
run: echo "version=Dev" >> $GITHUB_OUTPUT
|
||||
|
||||
# --- Write the timestamped version to .VERSION file
|
||||
- name: Create .VERSION file
|
||||
run: echo "${{ steps.get_version.outputs.version }}" >> .VERSION
|
||||
run: echo "${{ steps.timestamp.outputs.version }}" > .VERSION
|
||||
|
||||
- name: Docker meta
|
||||
id: meta
|
||||
uses: docker/metadata-action@v4
|
||||
uses: docker/metadata-action@v5
|
||||
with:
|
||||
images: |
|
||||
ghcr.io/jokob-sk/netalertx-dev
|
||||
jokobsk/netalertx-dev
|
||||
tags: |
|
||||
type=raw,value=latest
|
||||
type=raw,value=${{ steps.timestamp.outputs.version }}
|
||||
type=ref,event=branch
|
||||
type=ref,event=pr
|
||||
type=semver,pattern={{version}}
|
||||
@@ -72,7 +83,7 @@ jobs:
|
||||
password: ${{ secrets.DOCKERHUB_TOKEN }}
|
||||
|
||||
- name: Build and push
|
||||
uses: docker/build-push-action@v3
|
||||
uses: docker/build-push-action@v6
|
||||
with:
|
||||
context: .
|
||||
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6
|
||||
|
||||
39
.github/workflows/docker_prod.yml
vendored
@@ -6,7 +6,6 @@
|
||||
# GitHub recommends pinning actions to a commit SHA.
|
||||
# To get a newer version, you will need to update the SHA.
|
||||
# You can also reference a tag or branch, but the action may change without warning.
|
||||
|
||||
name: Publish Docker image
|
||||
|
||||
on:
|
||||
@@ -14,6 +13,7 @@ on:
|
||||
types: [published]
|
||||
tags:
|
||||
- '*.[1-9]+[0-9]?.[1-9]+*'
|
||||
|
||||
jobs:
|
||||
docker:
|
||||
runs-on: ubuntu-latest
|
||||
@@ -21,6 +21,7 @@ jobs:
|
||||
permissions:
|
||||
contents: read
|
||||
packages: write
|
||||
|
||||
steps:
|
||||
- name: Checkout
|
||||
uses: actions/checkout@v3
|
||||
@@ -31,42 +32,51 @@ jobs:
|
||||
- name: Set up Docker Buildx
|
||||
uses: docker/setup-buildx-action@v3
|
||||
|
||||
# --- Previous approach Get release version from tag
|
||||
- name: Set up dynamic build ARGs
|
||||
id: getargs
|
||||
id: getargs
|
||||
run: echo "version=$(cat ./stable/VERSION)" >> $GITHUB_OUTPUT
|
||||
|
||||
- name: Get release version
|
||||
id: get_version
|
||||
id: get_version_prev
|
||||
run: echo "::set-output name=version::${GITHUB_REF#refs/tags/}"
|
||||
|
||||
- name: Create .VERSION file
|
||||
run: echo "${{ steps.get_version.outputs.version }}" >> .VERSION
|
||||
run: echo "${{ steps.get_version.outputs.version }}" >> .VERSION_PREV
|
||||
|
||||
# --- Get release version from tag
|
||||
- name: Get release version
|
||||
id: get_version
|
||||
run: echo "version=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
|
||||
|
||||
# --- Write version to .VERSION file
|
||||
- name: Create .VERSION file
|
||||
run: echo "${{ steps.get_version.outputs.version }}" > .VERSION
|
||||
|
||||
# --- Generate Docker metadata and tags
|
||||
- name: Docker meta
|
||||
id: meta
|
||||
uses: docker/metadata-action@v4
|
||||
uses: docker/metadata-action@v5
|
||||
with:
|
||||
# list of Docker images to use as base name for tags
|
||||
images: |
|
||||
ghcr.io/jokob-sk/netalertx
|
||||
jokobsk/netalertx
|
||||
# generate Docker tags based on the following events/attributes
|
||||
jokobsk/netalertx
|
||||
tags: |
|
||||
type=semver,pattern={{version}},value=${{ inputs.version }}
|
||||
type=semver,pattern={{major}}.{{minor}},value=${{ inputs.version }}
|
||||
type=semver,pattern={{major}},value=${{ inputs.version }}
|
||||
type=semver,pattern={{version}},value=${{ steps.get_version.outputs.version }}
|
||||
type=semver,pattern={{major}}.{{minor}},value=${{ steps.get_version.outputs.version }}
|
||||
type=semver,pattern={{major}},value=${{ steps.get_version.outputs.version }}
|
||||
type=ref,event=branch,suffix=-{{ sha }}
|
||||
type=ref,event=pr
|
||||
type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/') }}
|
||||
|
||||
- name: Log in to Github Container registry
|
||||
- name: Log in to Github Container Registry (GHCR)
|
||||
uses: docker/login-action@v3
|
||||
with:
|
||||
registry: ghcr.io
|
||||
username: jokob-sk
|
||||
password: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Login to DockerHub
|
||||
- name: Log in to DockerHub
|
||||
if: github.event_name != 'pull_request'
|
||||
uses: docker/login-action@v3
|
||||
with:
|
||||
@@ -74,13 +84,12 @@ jobs:
|
||||
password: ${{ secrets.DOCKERHUB_TOKEN }}
|
||||
|
||||
- name: Build and push
|
||||
uses: docker/build-push-action@v3
|
||||
uses: docker/build-push-action@v6
|
||||
with:
|
||||
context: .
|
||||
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6
|
||||
push: ${{ github.event_name != 'pull_request' }}
|
||||
tags: ${{ steps.meta.outputs.tags }}
|
||||
labels: ${{ steps.meta.outputs.labels }}
|
||||
# # ⚠ disable cache if build is failing to download debian packages
|
||||
# cache-from: type=registry,ref=ghcr.io/jokob-sk/netalertx:buildcache
|
||||
# cache-to: type=registry,ref=ghcr.io/jokob-sk/netalertx:buildcache,mode=max
|
||||
|
||||
81
.github/workflows/docker_rewrite.yml
vendored
Executable file
@@ -0,0 +1,81 @@
|
||||
name: docker
|
||||
|
||||
on:
|
||||
push:
|
||||
branches:
|
||||
- rewrite
|
||||
tags:
|
||||
- '*.*.*'
|
||||
pull_request:
|
||||
branches:
|
||||
- rewrite
|
||||
|
||||
jobs:
|
||||
docker_rewrite:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 30
|
||||
permissions:
|
||||
contents: read
|
||||
packages: write
|
||||
if: >
|
||||
contains(github.event.head_commit.message, 'PUSHPROD') != 'True' &&
|
||||
github.repository == 'jokob-sk/NetAlertX'
|
||||
steps:
|
||||
- name: Checkout
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Set up QEMU
|
||||
uses: docker/setup-qemu-action@v3
|
||||
|
||||
- name: Set up Docker Buildx
|
||||
uses: docker/setup-buildx-action@v3
|
||||
|
||||
- name: Set up dynamic build ARGs
|
||||
id: getargs
|
||||
run: echo "version=$(cat ./stable/VERSION)" >> $GITHUB_OUTPUT
|
||||
|
||||
- name: Get release version
|
||||
id: get_version
|
||||
run: echo "version=Dev" >> $GITHUB_OUTPUT
|
||||
|
||||
- name: Create .VERSION file
|
||||
run: echo "${{ steps.get_version.outputs.version }}" >> .VERSION
|
||||
|
||||
- name: Docker meta
|
||||
id: meta
|
||||
uses: docker/metadata-action@v5
|
||||
with:
|
||||
images: |
|
||||
ghcr.io/jokob-sk/netalertx-dev-rewrite
|
||||
jokobsk/netalertx-dev-rewrite
|
||||
tags: |
|
||||
type=raw,value=latest
|
||||
type=ref,event=branch
|
||||
type=ref,event=pr
|
||||
type=semver,pattern={{version}}
|
||||
type=semver,pattern={{major}}.{{minor}}
|
||||
type=semver,pattern={{major}}
|
||||
type=sha
|
||||
|
||||
- name: Log in to Github Container Registry (GHCR)
|
||||
uses: docker/login-action@v3
|
||||
with:
|
||||
registry: ghcr.io
|
||||
username: jokob-sk
|
||||
password: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Log in to DockerHub
|
||||
if: github.event_name != 'pull_request'
|
||||
uses: docker/login-action@v3
|
||||
with:
|
||||
username: ${{ secrets.DOCKERHUB_USERNAME }}
|
||||
password: ${{ secrets.DOCKERHUB_TOKEN }}
|
||||
|
||||
- name: Build and push
|
||||
uses: docker/build-push-action@v3
|
||||
with:
|
||||
context: .
|
||||
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6
|
||||
push: ${{ github.event_name != 'pull_request' }}
|
||||
tags: ${{ steps.meta.outputs.tags }}
|
||||
labels: ${{ steps.meta.outputs.labels }}
|
||||
12
.gitignore
vendored
@@ -1,6 +1,17 @@
|
||||
.coverage
|
||||
.vscode
|
||||
.dotnet
|
||||
.vscode-server
|
||||
.gitconfig
|
||||
.*CommandMarker
|
||||
deviceid
|
||||
.DS_Store
|
||||
.cache
|
||||
nohup.out
|
||||
config/*
|
||||
.ash_history
|
||||
.VERSION
|
||||
.VERSION_PREV
|
||||
config/pialert.conf
|
||||
config/app.conf
|
||||
db/*
|
||||
@@ -8,6 +19,7 @@ db/pialert.db
|
||||
db/app.db
|
||||
front/log/*
|
||||
/log/*
|
||||
/log/plugins/*
|
||||
front/api/*
|
||||
/api/*
|
||||
**/plugins/**/*.log
|
||||
|
||||
2
.hadolint.yaml
Normal file
@@ -0,0 +1,2 @@
|
||||
ignored:
|
||||
- DL3018
|
||||
42
.vscode/launch.json
vendored
Executable file
@@ -0,0 +1,42 @@
|
||||
{
|
||||
"version": "0.2.0",
|
||||
"configurations": [
|
||||
{
|
||||
"name": "Python Backend Debug: Attach",
|
||||
"type": "debugpy",
|
||||
"request": "attach",
|
||||
"connect": {
|
||||
"host": "localhost",
|
||||
"port": 5678
|
||||
},
|
||||
"pathMappings": [
|
||||
{
|
||||
// Map workspace root to /app for PHP and other resources, plus explicit server mapping for Python.
|
||||
"localRoot": "${workspaceFolder}",
|
||||
"remoteRoot": "/app"
|
||||
},
|
||||
{
|
||||
"localRoot": "${workspaceFolder}/server",
|
||||
"remoteRoot": "/app/server"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "PHP Frontend Xdebug: Listen",
|
||||
"type": "php",
|
||||
"request": "launch",
|
||||
"port": 9003,
|
||||
"pathMappings": {
|
||||
"/app": "${workspaceFolder}"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Python: Current File",
|
||||
"type": "debugpy",
|
||||
"request": "launch",
|
||||
"program": "${file}",
|
||||
"console": "integratedTerminal",
|
||||
"justMyCode": true
|
||||
}
|
||||
]
|
||||
}
|
||||
33
.vscode/settings.json
vendored
Executable file
@@ -0,0 +1,33 @@
|
||||
{
|
||||
"terminal.integrated.suggest.enabled": true,
|
||||
// Use pytest and look under the test/ folder
|
||||
"python.testing.pytestEnabled": true,
|
||||
"python.testing.unittestEnabled": false,
|
||||
"python.testing.pytestArgs": [
|
||||
"test"
|
||||
],
|
||||
// Ensure VS Code uses the devcontainer virtualenv
|
||||
"python.defaultInterpreterPath": "/opt/venv/bin/python",
|
||||
// Let the Python extension invoke pytest via the interpreter; avoid hardcoded paths
|
||||
// Removed python.testing.pytestPath and legacy pytest.command overrides
|
||||
|
||||
"terminal.integrated.defaultProfile.linux": "zsh",
|
||||
"terminal.integrated.profiles.linux": {
|
||||
"zsh": {
|
||||
"path": "/bin/zsh"
|
||||
}
|
||||
}
|
||||
,
|
||||
// Fallback for older VS Code versions or schema validators that don't accept custom profiles
|
||||
"terminal.integrated.shell.linux": "/usr/bin/zsh"
|
||||
,
|
||||
"python.linting.flake8Enabled": true,
|
||||
"python.linting.enabled": true,
|
||||
"python.linting.flake8Args": [
|
||||
"--config=.flake8"
|
||||
],
|
||||
"python.formatting.provider": "black",
|
||||
"python.formatting.blackArgs": [
|
||||
"--line-length=180"
|
||||
]
|
||||
}
|
||||
238
.vscode/tasks.json
vendored
Executable file
@@ -0,0 +1,238 @@
|
||||
{
|
||||
"version": "2.0.0",
|
||||
"inputs": [
|
||||
{
|
||||
"id": "confirmPrune",
|
||||
"type": "promptString",
|
||||
"description": "DANGER! Type YES to confirm pruning all unused Docker resources. This will destroy containers, images, volumes, and networks!",
|
||||
"default": ""
|
||||
}
|
||||
],
|
||||
"tasks": [
|
||||
{
|
||||
"label": "[Any POSIX] Generate Devcontainer Configs",
|
||||
"type": "shell",
|
||||
"command": ".devcontainer/scripts/generate-configs.sh",
|
||||
"detail": "Generates devcontainer configs from the template. This must be run after changes to devcontainer to combine/merge them into the final config used by VS Code. Note- this has no bearing on the production or test image.",
|
||||
"presentation": {
|
||||
"echo": true,
|
||||
"reveal": "always",
|
||||
"panel": "shared",
|
||||
"showReuseMessage": false,
|
||||
"group": "POSIX Tasks"
|
||||
},
|
||||
|
||||
"problemMatcher": [],
|
||||
"group": {
|
||||
"kind": "build",
|
||||
"isDefault": false
|
||||
},
|
||||
"icon": {
|
||||
"id": "tools",
|
||||
"color": "terminal.ansiYellow"
|
||||
}
|
||||
},
|
||||
{
|
||||
"label": "[Any] Docker system and build Prune",
|
||||
"type": "shell",
|
||||
"command": ".devcontainer/scripts/confirm-docker-prune.sh",
|
||||
"detail": "DANGER! Prunes all unused Docker resources (images, containers, volumes, networks). Any stopped container will be wiped and data will be lost. Use with caution.",
|
||||
"options": {
|
||||
"env": {
|
||||
"CONFIRM_PRUNE": "${input:confirmPrune}"
|
||||
}
|
||||
},
|
||||
"presentation": {
|
||||
"echo": true,
|
||||
"reveal": "always",
|
||||
"panel": "shared",
|
||||
"showReuseMessage": false,
|
||||
"group": "Any"
|
||||
},
|
||||
"problemMatcher": [],
|
||||
"group": {
|
||||
"kind": "build",
|
||||
"isDefault": false
|
||||
},
|
||||
"icon": {
|
||||
"id": "trash",
|
||||
"color": "terminal.ansiRed"
|
||||
}
|
||||
},
|
||||
{
|
||||
"label": "[Dev Container] Re-Run Startup Script",
|
||||
"type": "shell",
|
||||
"command": "./isDevContainer.sh || exit 1;/workspaces/NetAlertX/.devcontainer/scripts/setup.sh",
|
||||
"detail": "The startup script runs directly after the container is started. It reprovisions permissions, links folders, and performs other setup tasks. Run this if you have made changes to the setup script or need to reprovision the container.",
|
||||
"options": {
|
||||
"cwd": "/workspaces/NetAlertX/.devcontainer/scripts"
|
||||
},
|
||||
"presentation": {
|
||||
"echo": true,
|
||||
"reveal": "always",
|
||||
"panel": "shared",
|
||||
"showReuseMessage": false
|
||||
},
|
||||
|
||||
"problemMatcher": [],
|
||||
"icon": {
|
||||
"id": "beaker",
|
||||
"color": "terminal.ansiBlue"
|
||||
}
|
||||
},
|
||||
{
|
||||
"label": "[Dev Container] Start Backend (Python)",
|
||||
"type": "shell",
|
||||
"command": "./isDevContainer.sh || exit 1; /services/start-backend.sh",
|
||||
"detail": "Restarts the NetAlertX backend (Python) service in the dev container. This may take 5 seconds to be completely ready.",
|
||||
"options": {
|
||||
"cwd": "/workspaces/NetAlertX/.devcontainer/scripts"
|
||||
},
|
||||
"presentation": {
|
||||
"echo": true,
|
||||
"reveal": "always",
|
||||
"panel": "shared",
|
||||
"showReuseMessage": false,
|
||||
"clear": false,
|
||||
"group": "Devcontainer"
|
||||
},
|
||||
"problemMatcher": [],
|
||||
"icon": {
|
||||
"id": "debug-restart",
|
||||
"color": "terminal.ansiGreen"
|
||||
}
|
||||
},
|
||||
{
|
||||
"label": "[Dev Container] Start CronD (Scheduler)",
|
||||
"type": "shell",
|
||||
"command": "./isDevContainer.sh || exit 1; /services/start-crond.sh",
|
||||
"detail": "Stops and restarts the crond service.",
|
||||
"options": {
|
||||
"cwd": "/workspaces/NetAlertX/.devcontainer/scripts"
|
||||
},
|
||||
"presentation": {
|
||||
"echo": true,
|
||||
"reveal": "always",
|
||||
"panel": "shared",
|
||||
"showReuseMessage": false,
|
||||
"clear": false,
|
||||
"group": "Devcontainer"
|
||||
},
|
||||
"problemMatcher": [],
|
||||
"icon": {
|
||||
"id": "debug-restart",
|
||||
"color": "terminal.ansiGreen"
|
||||
}
|
||||
},
|
||||
{
|
||||
"label": "[Dev Container] Start Frontend (nginx and PHP-FPM)",
|
||||
"type": "shell",
|
||||
"command": "./isDevContainer.sh || exit 1; /services/start-php-fpm.sh & /services/start-nginx.sh &",
|
||||
"detail": "Stops and restarts the NetAlertX frontend services (nginx and PHP-FPM) in the dev container. This launches almost instantly.",
|
||||
"options": {
|
||||
"cwd": "/workspaces/NetAlertX/.devcontainer/scripts"
|
||||
|
||||
},
|
||||
"presentation": {
|
||||
"echo": true,
|
||||
"reveal": "always",
|
||||
"panel": "shared",
|
||||
"showReuseMessage": false,
|
||||
"clear": false,
|
||||
"group": "Devcontainer"
|
||||
},
|
||||
"problemMatcher": [],
|
||||
"icon": {
|
||||
"id": "debug-restart",
|
||||
"color": "terminal.ansiGreen"
|
||||
}
|
||||
},
|
||||
{
|
||||
"label": "[Dev Container] Stop Frontend & Backend Services",
|
||||
"type": "shell",
|
||||
"command": "./isDevContainer.sh || exit 1; pkill -f 'php-fpm83|nginx|crond|python3' || true",
|
||||
"detail": "Stops all NetAlertX services running in the dev container.",
|
||||
"options": {
|
||||
"cwd": "/workspaces/NetAlertX/.devcontainer/scripts"
|
||||
},
|
||||
"presentation": {
|
||||
"echo": true,
|
||||
"reveal": "always",
|
||||
"panel": "shared",
|
||||
"showReuseMessage": false,
|
||||
"group": "Devcontainer"
|
||||
},
|
||||
"problemMatcher": [],
|
||||
"icon": {
|
||||
"id": "debug-stop",
|
||||
"color": "terminal.ansiRed"
|
||||
}
|
||||
},
|
||||
{
|
||||
"label": "[Any] Build Unit Test Docker image",
|
||||
"type": "shell",
|
||||
"command": "docker buildx build -t netalertx-test . && echo '🧪 Unit Test Docker image built: netalertx-test'",
|
||||
"detail": "This must be run after changes to the container. Unit testing will not register changes until after this image is rebuilt. It takes about 30 seconds to build unless changes to the venv stage are made. venv takes 90s alone.",
|
||||
"presentation": {
|
||||
"echo": true,
|
||||
"reveal": "always",
|
||||
"panel": "shared",
|
||||
"showReuseMessage": false,
|
||||
"group": "Any"
|
||||
|
||||
},
|
||||
"problemMatcher": [],
|
||||
"group": {
|
||||
"kind": "build",
|
||||
"isDefault": false
|
||||
},
|
||||
"icon": {
|
||||
"id": "beaker",
|
||||
"color": "terminal.ansiBlue"
|
||||
}
|
||||
},
|
||||
{
|
||||
"label": "[Dev Container] Wipe and Regenerate Database",
|
||||
"type": "shell",
|
||||
"command": "killall 'python3' || true && sleep 1 && rm -rf /data/db/* /data/config/* && bash /entrypoint.d/15-first-run-config.sh && bash /entrypoint.d/20-first-run-db.sh && echo '✅ Database and config wiped and regenerated'",
|
||||
"detail": "Wipes devcontainer db and config. Provides a fresh start in devcontainer, run this task, then run the Rerun Startup Task",
|
||||
"options": {},
|
||||
"presentation": {
|
||||
"echo": true,
|
||||
"reveal": "always",
|
||||
"panel": "shared",
|
||||
"showReuseMessage": false,
|
||||
"group": "Devcontainer"
|
||||
},
|
||||
"problemMatcher": [],
|
||||
"icon": {
|
||||
"id": "database",
|
||||
"color": "terminal.ansiRed"
|
||||
}
|
||||
},
|
||||
{
|
||||
"label": "Build & Launch Prodcution Docker Container",
|
||||
"type": "shell",
|
||||
"command": "docker compose up -d --build --force-recreate",
|
||||
"detail": "Before launching, ensure VSCode Ports are closed and services are stopped. Tasks: Stop Frontend & Backend Services & Remote: Close Unused Forwarded Ports to ensure proper operation of the new container.",
|
||||
"options": {
|
||||
"cwd": "/workspaces/NetAlertX"
|
||||
},
|
||||
"presentation": {
|
||||
"echo": true,
|
||||
"reveal": "always",
|
||||
"panel": "shared",
|
||||
"showReuseMessage": false
|
||||
},
|
||||
"problemMatcher": [],
|
||||
"group": {
|
||||
"kind": "build",
|
||||
"isDefault": false
|
||||
},
|
||||
"icon": {
|
||||
"id": "package",
|
||||
"color": "terminal.ansiBlue"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -50,4 +50,4 @@ By participating, you agree to follow our [Code of Conduct](./CODE_OF_CONDUCT.md
|
||||
If you have more in-depth questions or want to discuss contributing in other ways, feel free to reach out at:
|
||||
📧 [jokob@duck.com](mailto:jokob@duck.com?subject=NetAlertX%20Contribution)
|
||||
|
||||
We appreciate every contribution, big or small! 💙
|
||||
We appreciate every contribution, big or small! 💙
|
||||
233
Dockerfile
@@ -1,63 +1,220 @@
|
||||
# The NetAlertX Dockerfile has 3 stages:
|
||||
#
|
||||
# Stage 1. Builder - NetAlertX requires special tools and packages to build our virtual environment,
|
||||
# which are not needed in later stages. We build the builder stage and extract the venv for the runner to use as
|
||||
# a base.
|
||||
#
|
||||
# Stage 2. Runner builds the bare minimum requirements to create an operational NetAlertX. The primary
|
||||
# reason for breaking at this stage is that it leaves the system in a proper state for devcontainer operation.
|
||||
# This image also provides a break-out point for users who wish to execute the anti-pattern of using a
|
||||
# docker container as a VM for experimentation and various development patterns.
|
||||
#
|
||||
# Stage 3. Hardened removes root, sudoers, folders, permissions, and locks the system down into a read-only
|
||||
# compatible image. While NetAlertX does require some read-write operations, this image can guarantee the
|
||||
# code pushed out by the project is the only code which will run on the system after each container restart.
|
||||
# It reduces the chance of system hijacking and operates with all modern security protocols in place as is
|
||||
# expected from a security appliance.
|
||||
#
|
||||
# This file can be built with `docker-compose -f docker-compose.yml up --build --force-recreate`
|
||||
|
||||
FROM alpine:3.22 AS builder
|
||||
|
||||
ARG INSTALL_DIR=/app
|
||||
|
||||
ENV PYTHONUNBUFFERED=1
|
||||
ENV PATH="/opt/venv/bin:$PATH"
|
||||
|
||||
# Install build dependencies
|
||||
COPY requirements.txt /tmp/requirements.txt
|
||||
RUN apk add --no-cache bash shadow python3 python3-dev gcc musl-dev libffi-dev openssl-dev git \
|
||||
&& python -m venv /opt/venv
|
||||
|
||||
# Enable venv
|
||||
ENV PATH="/opt/venv/bin:$PATH"
|
||||
# Create virtual environment owned by root, but readable by everyone else. This makes it easy to copy
|
||||
# into hardened stage without worrying about permissions and keeps image size small. Keeping the commands
|
||||
# together makes for a slightly smaller image size.
|
||||
RUN pip install --no-cache-dir -r /tmp/requirements.txt && \
|
||||
chmod -R u-rwx,g-rwx /opt
|
||||
|
||||
COPY . ${INSTALL_DIR}/
|
||||
|
||||
RUN pip install openwrt-luci-rpc asusrouter asyncio aiohttp graphene flask tplink-omada-client wakeonlan pycryptodome requests paho-mqtt scapy cron-converter pytz json2table dhcp-leases pyunifi speedtest-cli chardet python-nmap dnspython librouteros yattag git+https://github.com/foreign-sub/aiofreepybox.git \
|
||||
&& bash -c "find ${INSTALL_DIR} -type d -exec chmod 750 {} \;" \
|
||||
&& bash -c "find ${INSTALL_DIR} -type f -exec chmod 640 {} \;" \
|
||||
&& bash -c "find ${INSTALL_DIR} -type f \( -name '*.sh' -o -name '*.py' -o -name 'speedtest-cli' \) -exec chmod 750 {} \;"
|
||||
|
||||
# Append Iliadbox certificate to aiofreepybox
|
||||
RUN cat ${INSTALL_DIR}/install/freebox_certificate.pem >> /opt/venv/lib/python3.12/site-packages/aiofreepybox/freebox_certificates.pem
|
||||
|
||||
# second stage
|
||||
# second stage is the main runtime stage with just the minimum required to run the application
|
||||
# The runner is used for both devcontainer, and as a base for the hardened stage.
|
||||
FROM alpine:3.22 AS runner
|
||||
|
||||
ARG INSTALL_DIR=/app
|
||||
|
||||
COPY --from=builder /opt/venv /opt/venv
|
||||
COPY --from=builder /usr/sbin/usermod /usr/sbin/groupmod /usr/sbin/
|
||||
# NetAlertX app directories
|
||||
ENV NETALERTX_APP=${INSTALL_DIR}
|
||||
ENV NETALERTX_DATA=/data
|
||||
ENV NETALERTX_CONFIG=${NETALERTX_DATA}/config
|
||||
ENV NETALERTX_FRONT=${NETALERTX_APP}/front
|
||||
ENV NETALERTX_PLUGINS=${NETALERTX_FRONT}/plugins
|
||||
ENV NETALERTX_SERVER=${NETALERTX_APP}/server
|
||||
ENV NETALERTX_API=/tmp/api
|
||||
ENV NETALERTX_DB=${NETALERTX_DATA}/db
|
||||
ENV NETALERTX_DB_FILE=${NETALERTX_DB}/app.db
|
||||
ENV NETALERTX_BACK=${NETALERTX_APP}/back
|
||||
ENV NETALERTX_LOG=/tmp/log
|
||||
ENV NETALERTX_PLUGINS_LOG=${NETALERTX_LOG}/plugins
|
||||
ENV NETALERTX_CONFIG_FILE=${NETALERTX_CONFIG}/app.conf
|
||||
|
||||
# Enable venv
|
||||
ENV PATH="/opt/venv/bin:$PATH"
|
||||
# NetAlertX log files
|
||||
ENV LOG_IP_CHANGES=${NETALERTX_LOG}/IP_changes.log
|
||||
ENV LOG_APP=${NETALERTX_LOG}/app.log
|
||||
ENV LOG_APP_FRONT=${NETALERTX_LOG}/app_front.log
|
||||
ENV LOG_REPORT_OUTPUT_TXT=${NETALERTX_LOG}/report_output.txt
|
||||
ENV LOG_DB_IS_LOCKED=${NETALERTX_LOG}/db_is_locked.log
|
||||
ENV LOG_REPORT_OUTPUT_HTML=${NETALERTX_LOG}/report_output.html
|
||||
ENV LOG_STDERR=${NETALERTX_LOG}/stderr.log
|
||||
ENV LOG_APP_PHP_ERRORS=${NETALERTX_LOG}/app.php_errors.log
|
||||
ENV LOG_EXECUTION_QUEUE=${NETALERTX_LOG}/execution_queue.log
|
||||
ENV LOG_REPORT_OUTPUT_JSON=${NETALERTX_LOG}/report_output.json
|
||||
ENV LOG_STDOUT=${NETALERTX_LOG}/stdout.log
|
||||
ENV LOG_CRON=${NETALERTX_LOG}/cron.log
|
||||
ENV LOG_NGINX_ERROR=${NETALERTX_LOG}/nginx-error.log
|
||||
|
||||
# default port and listen address
|
||||
ENV PORT=20211 LISTEN_ADDR=0.0.0.0
|
||||
# System Services configuration files
|
||||
ENV ENTRYPOINT_CHECKS=/entrypoint.d
|
||||
ENV SYSTEM_SERVICES=/services
|
||||
ENV SYSTEM_SERVICES_SCRIPTS=${SYSTEM_SERVICES}/scripts
|
||||
ENV SYSTEM_SERVICES_CONFIG=${SYSTEM_SERVICES}/config
|
||||
ENV SYSTEM_NGINX_CONFIG=${SYSTEM_SERVICES_CONFIG}/nginx
|
||||
ENV SYSTEM_NGINX_CONFIG_TEMPLATE=${SYSTEM_NGINX_CONFIG}/netalertx.conf.template
|
||||
ENV SYSTEM_SERVICES_CONFIG_CRON=${SYSTEM_SERVICES_CONFIG}/cron
|
||||
ENV SYSTEM_SERVICES_ACTIVE_CONFIG=/tmp/nginx/active-config
|
||||
ENV SYSTEM_SERVICES_ACTIVE_CONFIG_FILE=${SYSTEM_SERVICES_ACTIVE_CONFIG}/nginx.conf
|
||||
ENV SYSTEM_SERVICES_PHP_FOLDER=${SYSTEM_SERVICES_CONFIG}/php
|
||||
ENV SYSTEM_SERVICES_PHP_FPM_D=${SYSTEM_SERVICES_PHP_FOLDER}/php-fpm.d
|
||||
ENV SYSTEM_SERVICES_RUN=/tmp/run
|
||||
ENV SYSTEM_SERVICES_RUN_TMP=${SYSTEM_SERVICES_RUN}/tmp
|
||||
ENV SYSTEM_SERVICES_RUN_LOG=${SYSTEM_SERVICES_RUN}/logs
|
||||
ENV PHP_FPM_CONFIG_FILE=${SYSTEM_SERVICES_PHP_FOLDER}/php-fpm.conf
|
||||
ENV READ_ONLY_FOLDERS="${NETALERTX_BACK} ${NETALERTX_FRONT} ${NETALERTX_SERVER} ${SYSTEM_SERVICES} \
|
||||
${SYSTEM_SERVICES_CONFIG} ${ENTRYPOINT_CHECKS}"
|
||||
ENV READ_WRITE_FOLDERS="${NETALERTX_DATA} ${NETALERTX_CONFIG} ${NETALERTX_DB} ${NETALERTX_API} \
|
||||
${NETALERTX_LOG} ${NETALERTX_PLUGINS_LOG} ${SYSTEM_SERVICES_RUN} \
|
||||
${SYSTEM_SERVICES_RUN_TMP} ${SYSTEM_SERVICES_RUN_LOG} \
|
||||
${SYSTEM_SERVICES_ACTIVE_CONFIG}"
|
||||
|
||||
# needed for s6-overlay
|
||||
ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
|
||||
#Python environment
|
||||
ENV PYTHONUNBUFFERED=1
|
||||
ENV VIRTUAL_ENV=/opt/venv
|
||||
ENV VIRTUAL_ENV_BIN=/opt/venv/bin
|
||||
ENV PYTHONPATH=${NETALERTX_APP}:${NETALERTX_SERVER}:${NETALERTX_PLUGINS}:${VIRTUAL_ENV}/lib/python3.12/site-packages
|
||||
ENV PATH="${SYSTEM_SERVICES}:${VIRTUAL_ENV_BIN}:$PATH"
|
||||
|
||||
# ❗ IMPORTANT - if you modify this file modify the /install/install_dependecies.sh file as well ❗
|
||||
# App Environment
|
||||
ENV LISTEN_ADDR=0.0.0.0
|
||||
ENV PORT=20211
|
||||
ENV NETALERTX_DEBUG=0
|
||||
ENV VENDORSPATH=/app/back/ieee-oui.txt
|
||||
ENV VENDORSPATH_NEWEST=${SYSTEM_SERVICES_RUN_TMP}/ieee-oui.txt
|
||||
ENV ENVIRONMENT=alpine
|
||||
ENV READ_ONLY_USER=readonly READ_ONLY_GROUP=readonly
|
||||
ENV NETALERTX_USER=netalertx NETALERTX_GROUP=netalertx
|
||||
ENV LANG=C.UTF-8
|
||||
|
||||
RUN apk update --no-cache \
|
||||
&& apk add --no-cache bash libbsd zip lsblk gettext-envsubst sudo mtr tzdata s6-overlay \
|
||||
&& apk add --no-cache curl arp-scan iproute2 iproute2-ss nmap nmap-scripts traceroute nbtscan avahi avahi-tools openrc dbus net-tools net-snmp-tools bind-tools awake ca-certificates \
|
||||
&& apk add --no-cache sqlite php83 php83-fpm php83-cgi php83-curl php83-sqlite3 php83-session \
|
||||
&& apk add --no-cache python3 nginx \
|
||||
&& ln -s /usr/bin/awake /usr/bin/wakeonlan \
|
||||
&& bash -c "install -d -m 750 -o nginx -g www-data ${INSTALL_DIR} ${INSTALL_DIR}" \
|
||||
&& rm -f /etc/nginx/http.d/default.conf
|
||||
|
||||
COPY --from=builder --chown=nginx:www-data ${INSTALL_DIR}/ ${INSTALL_DIR}/
|
||||
RUN apk add --no-cache bash mtr libbsd zip lsblk tzdata curl arp-scan iproute2 iproute2-ss nmap \
|
||||
nmap-scripts traceroute nbtscan net-tools net-snmp-tools bind-tools awake ca-certificates \
|
||||
sqlite php83 php83-fpm php83-cgi php83-curl php83-sqlite3 php83-session python3 envsubst \
|
||||
nginx supercronic shadow && \
|
||||
rm -Rf /var/cache/apk/* && \
|
||||
rm -Rf /etc/nginx && \
|
||||
addgroup -g 20211 ${NETALERTX_GROUP} && \
|
||||
adduser -u 20211 -D -h ${NETALERTX_APP} -G ${NETALERTX_GROUP} ${NETALERTX_USER} && \
|
||||
apk del shadow
|
||||
|
||||
# Add crontab file
|
||||
COPY --chmod=600 --chown=root:root install/crontab /etc/crontabs/root
|
||||
|
||||
# Start all required services
|
||||
RUN ${INSTALL_DIR}/dockerfiles/start.sh
|
||||
|
||||
HEALTHCHECK --interval=30s --timeout=5s --start-period=15s --retries=2 \
|
||||
CMD curl -sf -o /dev/null ${LISTEN_ADDR}:${PORT}/php/server/query_json.php?file=app_state.json
|
||||
# Install application, copy files, set permissions
|
||||
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} install/production-filesystem/ /
|
||||
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} --chmod=755 back ${NETALERTX_BACK}
|
||||
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} --chmod=755 front ${NETALERTX_FRONT}
|
||||
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} --chmod=755 server ${NETALERTX_SERVER}
|
||||
|
||||
# Create required folders with correct ownership and permissions
|
||||
RUN install -d -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} -m 700 ${READ_WRITE_FOLDERS} && \
|
||||
sh -c "find ${NETALERTX_APP} -type f \( -name '*.sh' -o -name 'speedtest-cli' \) \
|
||||
-exec chmod 750 {} \;"
|
||||
|
||||
# Copy version information into the image
|
||||
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} .[V]ERSION ${NETALERTX_APP}/.VERSION
|
||||
COPY --chown=${NETALERTX_USER}:${NETALERTX_GROUP} .[V]ERSION ${NETALERTX_APP}/.VERSION_PREV
|
||||
|
||||
# Copy the virtualenv from the builder stage
|
||||
COPY --from=builder --chown=20212:20212 ${VIRTUAL_ENV} ${VIRTUAL_ENV}
|
||||
|
||||
|
||||
# Initialize each service with the dockerfiles/init-*.sh scripts, once.
|
||||
# This is done after the copy of the venv to ensure the venv is in place
|
||||
# although it may be quicker to do it before the copy, it keeps the image
|
||||
# layers smaller to do it after.
|
||||
RUN for vfile in .VERSION .VERSION_PREV; do \
|
||||
if [ ! -f "${NETALERTX_APP}/${vfile}" ]; then \
|
||||
echo "DEVELOPMENT 00000000" > "${NETALERTX_APP}/${vfile}"; \
|
||||
fi; \
|
||||
chown 20212:20212 "${NETALERTX_APP}/${vfile}"; \
|
||||
done && \
|
||||
apk add --no-cache libcap && \
|
||||
setcap cap_net_raw+ep /bin/busybox && \
|
||||
setcap cap_net_raw,cap_net_admin+eip /usr/bin/nmap && \
|
||||
setcap cap_net_raw,cap_net_admin+eip /usr/bin/arp-scan && \
|
||||
setcap cap_net_raw,cap_net_admin,cap_net_bind_service+eip /usr/bin/nbtscan && \
|
||||
setcap cap_net_raw,cap_net_admin+eip /usr/bin/traceroute && \
|
||||
setcap cap_net_raw,cap_net_admin+eip "$(readlink -f ${VIRTUAL_ENV_BIN}/python)" && \
|
||||
/bin/sh /build/init-nginx.sh && \
|
||||
/bin/sh /build/init-php-fpm.sh && \
|
||||
/bin/sh /build/init-cron.sh && \
|
||||
/bin/sh /build/init-backend.sh && \
|
||||
rm -rf /build && \
|
||||
apk del libcap && \
|
||||
date +%s > "${NETALERTX_FRONT}/buildtimestamp.txt"
|
||||
|
||||
|
||||
ENTRYPOINT ["/bin/sh","/entrypoint.sh"]
|
||||
|
||||
# Final hardened stage to improve security by setting least possible permissions and removing sudo access.
|
||||
# When complete, if the image is compromised, there's not much that can be done with it.
|
||||
# This stage is separate from Runner stage so that devcontainer can use the Runner stage.
|
||||
FROM runner AS hardened
|
||||
|
||||
ENV UMASK=0077
|
||||
|
||||
# Create readonly user and group with no shell access.
|
||||
# Readonly user marks folders that are created by NetAlertX, but should not be modified.
|
||||
# AI may claim this is stupid, but it's actually least possible permissions as
|
||||
# read-only user cannot login, cannot sudo, has no write permission, and cannot even
|
||||
# read the files it owns. The read-only user is an ownership-as-a-lock hardening pattern.
|
||||
RUN addgroup -g 20212 "${READ_ONLY_GROUP}" && \
|
||||
adduser -u 20212 -G "${READ_ONLY_GROUP}" -D -h /app "${READ_ONLY_USER}"
|
||||
|
||||
|
||||
# reduce permissions to minimum necessary for all NetAlertX files and folders
|
||||
# Permissions 005 and 004 are not typos, they enable read-only. Everyone can
|
||||
# read the read-only files, and nobody can write to them, even the readonly user.
|
||||
|
||||
# hadolint ignore=SC2114
|
||||
RUN chown -R ${READ_ONLY_USER}:${READ_ONLY_GROUP} ${READ_ONLY_FOLDERS} && \
|
||||
chmod -R 004 ${READ_ONLY_FOLDERS} && \
|
||||
find ${READ_ONLY_FOLDERS} -type d -exec chmod 005 {} + && \
|
||||
install -d -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} -m 700 ${READ_WRITE_FOLDERS} && \
|
||||
chown -R ${NETALERTX_USER}:${NETALERTX_GROUP} ${READ_WRITE_FOLDERS} && \
|
||||
chmod -R 600 ${READ_WRITE_FOLDERS} && \
|
||||
find ${READ_WRITE_FOLDERS} -type d -exec chmod 700 {} + && \
|
||||
chown ${READ_ONLY_USER}:${READ_ONLY_GROUP} /entrypoint.sh /opt /opt/venv && \
|
||||
chmod 005 /entrypoint.sh ${SYSTEM_SERVICES}/*.sh ${SYSTEM_SERVICES_SCRIPTS}/* ${ENTRYPOINT_CHECKS}/* /app /opt /opt/venv && \
|
||||
for dir in ${READ_WRITE_FOLDERS}; do \
|
||||
install -d -o ${NETALERTX_USER} -g ${NETALERTX_GROUP} -m 700 "$dir"; \
|
||||
done && \
|
||||
apk del apk-tools && \
|
||||
rm -Rf /var /etc/sudoers.d/* /etc/shadow /etc/gshadow /etc/sudoers \
|
||||
/lib/apk /lib/firmware /lib/modules-load.d /lib/sysctl.d /mnt /home/ /root \
|
||||
/srv /media && \
|
||||
sed -i "/^\(${READ_ONLY_USER}\|${NETALERTX_USER}\):/!d" /etc/passwd && \
|
||||
sed -i "/^\(${READ_ONLY_GROUP}\|${NETALERTX_GROUP}\):/!d" /etc/group && \
|
||||
printf '#!/bin/sh\n"$@"\n' > /usr/bin/sudo && chmod +x /usr/bin/sudo
|
||||
|
||||
USER netalertx
|
||||
|
||||
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
|
||||
CMD /services/healthcheck.sh
|
||||
|
||||
ENTRYPOINT ["/init"]
|
||||
|
||||
@@ -1,53 +1,176 @@
|
||||
# Warning - use of this unhardened image is not recommended for production use.
|
||||
# This image is provided for backward compatibility, development and testing purposes only.
|
||||
# For production use, please use the hardened image built with Alpine. This image attempts to
|
||||
# treat a container as an operating system, which is an anti-pattern and a common source of
|
||||
# security issues.
|
||||
#
|
||||
# The default Dockerfile/docker-compose image contains the following security improvements
|
||||
# over the Debian image:
|
||||
# - read-only filesystem
|
||||
# - no sudo access
|
||||
# - least possible permissions on all files and folders
|
||||
# - Root user has all permissions revoked and is unused
|
||||
# - Secure umask applied so files are owner-only by default
|
||||
# - non-privileged user runs the application
|
||||
# - no shell access for non-privileged users
|
||||
# - no unnecessary packages or services
|
||||
# - reduced capabilities
|
||||
# - tmpfs for writable folders
|
||||
# - healthcheck
|
||||
# - no package managers
|
||||
# - no compilers or build tools
|
||||
# - no systemd, uses lightweight init system
|
||||
# - no persistent storage except for config and db volumes
|
||||
# - minimal image size due to segmented build stages
|
||||
# - minimal base image (Alpine Linux)
|
||||
# - minimal python environment (venv, no pip)
|
||||
# - minimal stripped web server
|
||||
# - minimal stripped php environment
|
||||
# - minimal services (nginx, php-fpm, crond, no unnecessary services or service managers)
|
||||
# - minimal users and groups (netalertx and readonly only, no others)
|
||||
# - minimal permissions (read-only for most files and folders, write-only for necessary folders)
|
||||
# - minimal capabilities (NET_ADMIN and NET_RAW only, no others)
|
||||
# - minimal environment variables (only necessary ones, no others)
|
||||
# - minimal entrypoint (only necessary commands, no others)
|
||||
# - Uses the same base image as the development environment (Alpine Linux)
|
||||
# - Uses the same services as the development environment (nginx, php-fpm, crond)
|
||||
# - Uses the same environment variables as the development environment (only necessary ones, no others)
|
||||
# - Uses the same file and folder structure as the development environment (only necessary ones, no others)
|
||||
# NetAlertX is designed to be run as an unattended network security monitoring appliance, which means it
|
||||
# should be able to operate without human intervention. Overall, the hardened image is designed to be as
|
||||
# secure as possible while still being functional and is recommended because you cannot attack a surface
|
||||
# that isn't there.
|
||||
|
||||
|
||||
FROM debian:bookworm-slim
|
||||
|
||||
# default UID and GID
|
||||
ENV USER=pi USER_ID=1000 USER_GID=1000 PORT=20211
|
||||
#TZ=Europe/London
|
||||
|
||||
# NetAlertX app directories
|
||||
ENV INSTALL_DIR=/app
|
||||
ENV NETALERTX_APP=${INSTALL_DIR}
|
||||
ENV NETALERTX_DATA=/data
|
||||
ENV NETALERTX_CONFIG=${NETALERTX_DATA}/config
|
||||
ENV NETALERTX_FRONT=${NETALERTX_APP}/front
|
||||
ENV NETALERTX_SERVER=${NETALERTX_APP}/server
|
||||
ENV NETALERTX_API=/tmp/api
|
||||
ENV NETALERTX_DB=${NETALERTX_DATA}/db
|
||||
ENV NETALERTX_DB_FILE=${NETALERTX_DB}/app.db
|
||||
ENV NETALERTX_BACK=${NETALERTX_APP}/back
|
||||
ENV NETALERTX_LOG=/tmp/log
|
||||
ENV NETALERTX_PLUGINS_LOG=${NETALERTX_LOG}/plugins
|
||||
|
||||
# NetAlertX log files
|
||||
ENV LOG_IP_CHANGES=${NETALERTX_LOG}/IP_changes.log
|
||||
ENV LOG_APP=${NETALERTX_LOG}/app.log
|
||||
ENV LOG_APP_FRONT=${NETALERTX_LOG}/app_front.log
|
||||
ENV LOG_REPORT_OUTPUT_TXT=${NETALERTX_LOG}/report_output.txt
|
||||
ENV LOG_DB_IS_LOCKED=${NETALERTX_LOG}/db_is_locked.log
|
||||
ENV LOG_REPORT_OUTPUT_HTML=${NETALERTX_LOG}/report_output.html
|
||||
ENV LOG_STDERR=${NETALERTX_LOG}/stderr.log
|
||||
ENV LOG_APP_PHP_ERRORS=${NETALERTX_LOG}/app.php_errors.log
|
||||
ENV LOG_EXECUTION_QUEUE=${NETALERTX_LOG}/execution_queue.log
|
||||
ENV LOG_REPORT_OUTPUT_JSON=${NETALERTX_LOG}/report_output.json
|
||||
ENV LOG_STDOUT=${NETALERTX_LOG}/stdout.log
|
||||
ENV LOG_CRON=${NETALERTX_LOG}/cron.log
|
||||
ENV LOG_NGINX_ERROR=${NETALERTX_LOG}/nginx-error.log
|
||||
|
||||
# System Services configuration files
|
||||
ENV SYSTEM_SERVICES=/services
|
||||
ENV SYSTEM_SERVICES_CONFIG=${SYSTEM_SERVICES}/config
|
||||
ENV SYSTEM_NGINIX_CONFIG=${SYSTEM_SERVICES_CONFIG}/nginx
|
||||
ENV SYSTEM_NGINX_CONFIG_FILE=${SYSTEM_NGINIX_CONFIG}/nginx.conf
|
||||
ENV SYSTEM_SERVICES_ACTIVE_CONFIG=/tmp/nginx/active-config
|
||||
ENV NETALERTX_CONFIG_FILE=${NETALERTX_CONFIG}/app.conf
|
||||
ENV SYSTEM_SERVICES_PHP_FOLDER=${SYSTEM_SERVICES_CONFIG}/php
|
||||
ENV SYSTEM_SERVICES_PHP_FPM_D=${SYSTEM_SERVICES_PHP_FOLDER}/php-fpm.d
|
||||
ENV SYSTEM_SERVICES_CROND=${SYSTEM_SERVICES_CONFIG}/crond
|
||||
ENV SYSTEM_SERVICES_RUN=/tmp/run
|
||||
ENV SYSTEM_SERVICES_RUN_TMP=${SYSTEM_SERVICES_RUN}/tmp
|
||||
ENV SYSTEM_SERVICES_RUN_LOG=${SYSTEM_SERVICES_RUN}/logs
|
||||
ENV PHP_FPM_CONFIG_FILE=${SYSTEM_SERVICES_PHP_FOLDER}/php-fpm.conf
|
||||
|
||||
#Python environment
|
||||
ENV PYTHONPATH=${NETALERTX_SERVER}
|
||||
ENV PYTHONUNBUFFERED=1
|
||||
ENV VIRTUAL_ENV=/opt/venv
|
||||
ENV VIRTUAL_ENV_BIN=/opt/venv/bin
|
||||
ENV PATH="${VIRTUAL_ENV}/bin:${PATH}:/services"
|
||||
ENV VENDORSPATH=/app/back/ieee-oui.txt
|
||||
ENV VENDORSPATH_NEWEST=${SYSTEM_SERVICES_RUN_TMP}/ieee-oui.txt
|
||||
|
||||
|
||||
# App Environment
|
||||
ENV LISTEN_ADDR=0.0.0.0
|
||||
ENV PORT=20211
|
||||
ENV NETALERTX_DEBUG=0
|
||||
|
||||
#Container environment
|
||||
ENV ENVIRONMENT=debian
|
||||
ENV USER=netalertx
|
||||
ENV USER_ID=1000
|
||||
ENV USER_GID=1000
|
||||
|
||||
# Todo: figure out why using a workdir instead of full paths doesn't work
|
||||
# Todo, do we still need all these packages? I can already see sudo which isn't needed
|
||||
|
||||
RUN apt-get update
|
||||
RUN apt-get install sudo -y
|
||||
|
||||
ARG INSTALL_DIR=/app
|
||||
|
||||
# create pi user and group
|
||||
# add root and www-data to pi group so they can r/w files and db
|
||||
RUN groupadd --gid "${USER_GID}" "${USER}" && \
|
||||
useradd \
|
||||
--uid ${USER_ID} \
|
||||
--gid ${USER_GID} \
|
||||
--create-home \
|
||||
--shell /bin/bash \
|
||||
${USER} && \
|
||||
--uid ${USER_ID} \
|
||||
--gid ${USER_GID} \
|
||||
--create-home \
|
||||
--shell /bin/bash \
|
||||
${USER} && \
|
||||
usermod -a -G ${USER_GID} root && \
|
||||
usermod -a -G ${USER_GID} www-data
|
||||
|
||||
COPY --chmod=775 --chown=${USER_ID}:${USER_GID} install/production-filesystem/ /
|
||||
COPY --chmod=775 --chown=${USER_ID}:${USER_GID} . ${INSTALL_DIR}/
|
||||
|
||||
|
||||
# ❗ IMPORTANT - if you modify this file modify the /install/install_dependecies.debian.sh file as well ❗
|
||||
# hadolint ignore=DL3008,DL3027
|
||||
RUN apt-get update && apt-get install -y --no-install-recommends \
|
||||
tini snmp ca-certificates curl libwww-perl arp-scan sudo gettext-base \
|
||||
nginx-light php php-cgi php-fpm php-sqlite3 php-curl sqlite3 dnsutils net-tools \
|
||||
python3 python3-dev iproute2 nmap python3-pip zip git systemctl usbutils traceroute nbtscan openrc \
|
||||
busybox nginx nginx-core mtr python3-venv && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
|
||||
|
||||
RUN apt-get install -y \
|
||||
tini snmp ca-certificates curl libwww-perl arp-scan perl apt-utils cron sudo \
|
||||
nginx-light php php-cgi php-fpm php-sqlite3 php-curl sqlite3 dnsutils net-tools php-openssl \
|
||||
python3 python3-dev iproute2 nmap python3-pip zip systemctl usbutils traceroute nbtscan avahi avahi-tools openrc dbus
|
||||
|
||||
# Alternate dependencies
|
||||
RUN apt-get install nginx nginx-core mtr php-fpm php8.2-fpm php-cli php8.2 php8.2-sqlite3 -y
|
||||
RUN phpenmod -v 8.2 sqlite3
|
||||
# While php8.3 is in debian bookworm repos, php-fpm is not included so we need to add sury.org repo
|
||||
# (Ondřej Surý maintains php packages for debian. This is temp until debian includes php-fpm in their
|
||||
# repos. Likely it will be in Debian Trixie.). This keeps the image up-to-date with the alpine version.
|
||||
# hadolint ignore=DL3008
|
||||
RUN apt-get install -y --no-install-recommends \
|
||||
apt-transport-https \
|
||||
ca-certificates \
|
||||
lsb-release \
|
||||
wget && \
|
||||
wget -q -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg && \
|
||||
echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list && \
|
||||
apt-get update && \
|
||||
apt-get install -y --no-install-recommends php8.3-fpm php8.3-cli php8.3-sqlite3 php8.3-common php8.3-curl php8.3-cgi && \
|
||||
ln -s /usr/sbin/php-fpm8.3 /usr/sbin/php-fpm83 && \
|
||||
rm -rf /var/lib/apt/lists/* # make it compatible with alpine version
|
||||
|
||||
# Setup virtual python environment and use pip3 to install packages
|
||||
RUN apt-get install -y python3-venv
|
||||
RUN python3 -m venv myenv
|
||||
RUN python3 -m venv ${VIRTUAL_ENV} && \
|
||||
/bin/bash -c "source ${VIRTUAL_ENV_BIN}/activate && update-alternatives --install /usr/bin/python python /usr/bin/python3 10 && pip3 install -r ${INSTALL_DIR}/requirements.txt"
|
||||
|
||||
# Configure php-fpm
|
||||
RUN chmod -R 755 /services && \
|
||||
chown -R ${USER}:${USER_GID} /services && \
|
||||
sed -i 's/^;listen.mode = .*/listen.mode = 0666/' ${SYSTEM_SERVICES_PHP_FPM_D}/www.conf && \
|
||||
printf "user = %s\ngroup = %s\n" "${USER}" "${USER_GID}" >> /services/config/php/php-fpm.d/www.conf
|
||||
|
||||
|
||||
RUN /bin/bash -c "source myenv/bin/activate && update-alternatives --install /usr/bin/python python /usr/bin/python3 10 && pip3 install openwrt-luci-rpc asusrouter asyncio aiohttp graphene flask tplink-omada-client wakeonlan pycryptodome requests paho-mqtt scapy cron-converter pytz json2table dhcp-leases pyunifi speedtest-cli chardet python-nmap dnspython librouteros yattag "
|
||||
|
||||
# Create a buildtimestamp.txt to later check if a new version was released
|
||||
RUN date +%s > ${INSTALL_DIR}/front/buildtimestamp.txt
|
||||
|
||||
CMD ["${INSTALL_DIR}/install/start.debian.sh"]
|
||||
USER netalertx:netalertx
|
||||
ENTRYPOINT ["/bin/bash","/entrypoint.sh"]
|
||||
|
||||
|
||||
|
||||
108
README.md
@@ -6,38 +6,60 @@
|
||||
|
||||
# NetAlertX - Network, presence scanner and alert framework
|
||||
|
||||
Get visibility of what's going on on your WIFI/LAN network and enable presence detection of important devices. Schedule scans for devices, port changes and get alerts if unknown devices or changes are found. Write your own [Plugin](https://github.com/jokob-sk/NetAlertX/tree/main/docs/PLUGINS.md#readme) with auto-generated UI and in-build notification system. Build out and easily maintain your network source of truth (NSoT).
|
||||
Get visibility of what's going on on your WIFI/LAN network and enable presence detection of important devices. Schedule scans for devices and port changes, and get alerts if unknown devices or changes are found. Write your own [Plugin](https://github.com/jokob-sk/NetAlertX/tree/main/docs/PLUGINS.md#readme) with an auto-generated UI and in-built notification system. Build out and easily maintain your network source of truth (NSoT) and device inventory.
|
||||
|
||||
## 📋 Table of Contents
|
||||
|
||||
- [Features](#-features)
|
||||
- [Documentation](#-documentation)
|
||||
- [Quick Start](#-quick-start)
|
||||
- [Alternative Apps](#-other-alternative-apps)
|
||||
- [Security & Privacy](#-security--privacy)
|
||||
- [FAQ](#-faq)
|
||||
- [Known Issues](#-known-issues)
|
||||
- [Donations](#-donations)
|
||||
- [Contributors](#-contributors)
|
||||
- [Translations](#-translations)
|
||||
- [License](#license)
|
||||
- [NetAlertX - Network, presence scanner and alert framework](#netalertx---network-presence-scanner-and-alert-framework)
|
||||
- [📋 Table of Contents](#-table-of-contents)
|
||||
- [🚀 Quick Start](#-quick-start)
|
||||
- [📦 Features](#-features)
|
||||
- [Scanners](#scanners)
|
||||
- [Notification gateways](#notification-gateways)
|
||||
- [Integrations and Plugins](#integrations-and-plugins)
|
||||
- [Workflows](#workflows)
|
||||
- [📚 Documentation](#-documentation)
|
||||
- [🔐 Security \& Privacy](#-security--privacy)
|
||||
- [❓ FAQ](#-faq)
|
||||
- [🐞 Known Issues](#-known-issues)
|
||||
- [📃 Everything else](#-everything-else)
|
||||
- [📧 Get notified what's new](#-get-notified-whats-new)
|
||||
- [🔀 Other Alternative Apps](#-other-alternative-apps)
|
||||
- [💙 Donations](#-donations)
|
||||
- [🏗 Contributors](#-contributors)
|
||||
- [🌍 Translations](#-translations)
|
||||
- [License](#license)
|
||||
|
||||
|
||||
## 🚀 Quick Start
|
||||
|
||||
> [!WARNING]
|
||||
> ⚠️ **Important:** The docker-compose has recently changed. Carefully read the [Migration guide](https://jokob-sk.github.io/NetAlertX/MIGRATION/?h=migrat#12-migration-from-netalertx-v25524) for detailed instructions.
|
||||
|
||||
Start NetAlertX in seconds with Docker:
|
||||
|
||||
```bash
|
||||
docker run -d --rm --network=host \
|
||||
-v local_path/config:/app/config \
|
||||
-v local_path/db:/app/db \
|
||||
--mount type=tmpfs,target=/app/api \
|
||||
-e PUID=200 -e PGID=300 \
|
||||
-e TZ=Europe/Berlin \
|
||||
docker run -d \
|
||||
--network=host \
|
||||
--restart unless-stopped \
|
||||
-v /local_data_dir:/data \
|
||||
-v /etc/localtime:/etc/localtime:ro \
|
||||
--tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
|
||||
-e PORT=20211 \
|
||||
-e APP_CONF_OVERRIDE='{"GRAPHQL_PORT":"20214"}' \
|
||||
ghcr.io/jokob-sk/netalertx:latest
|
||||
```
|
||||
|
||||
Note: Your `/local_data_dir` should contain a `config` and `db` folder.
|
||||

To build and run a containerized instance directly from the source repository:

```bash
git clone https://github.com/jokob-sk/NetAlertX.git
cd NetAlertX
docker compose up --force-recreate --build
# To customize: edit docker-compose.yaml and run the last command again
```

Need help configuring it? Check the [usage guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/README.md) or the [full documentation](https://jokob-sk.github.io/NetAlertX/).
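
If you go the `docker compose` route, the compose file reads several values from the environment. A small sketch of a `.env` file next to it; the variable names appear in the project's docker-compose.yaml, the values below are placeholders only:

```bash
# Hypothetical .env sitting next to docker-compose.yaml
cat > .env <<'EOF'
TZ=Europe/Berlin
PORT=20211
GRAPHQL_PORT=20212
ALWAYS_FRESH_INSTALL=false
EOF

docker compose up -d
```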

For Home Assistant users: [Click here to add NetAlertX](https://my.home-assistant.io/redirect/supervisor_add_addon_repository/?repository_url=https%3A%2F%2Fgithub.com%2Falexbelgium%2Fhassio-addons)

For other install methods, check the [installation docs](#-documentation)

| [📑 Docker guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_INSTALLATION.md) | [🚀 Releases](https://github.com/jokob-sk/NetAlertX/releases) | [📚 Docs](https://jokob-sk.github.io/NetAlertX/) | [🔌 Plugins](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md) | [🤖 Ask AI](https://gurubase.io/g/netalertx)
|----------------------| ----------------------| ----------------------| ----------------------| ----------------------|

![showcase][showcase]

<details>
<summary>📷 Click for more screenshots</summary>
### Scanners

The app scans your network for **New devices**, **New connections** (re-connections), **Disconnections**, **"Always Connected" devices down**, device **IP changes** and **Internet IP address changes**. Discovery & scan methods include: **arp-scan**, **Pi-hole - DB import**, **Pi-hole - DHCP leases import**, **Generic DHCP leases import**, **UNIFI controller import**, **SNMP-enabled router import**. Check the [Plugins](https://github.com/jokob-sk/NetAlertX/tree/main/docs/PLUGINS.md#readme) docs for a full list of available plugins.
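
Which scan methods actually run is governed by the loaded plugins. A sketch of narrowing them down at container start via the `LOADED_PLUGINS` override that appears (commented out) in the project's docker-compose.yaml; the plugin selection here is only an example:

```bash
# Only load a subset of plugins, e.g. ARP scanning plus Pi-hole and DHCP-lease imports
docker run -d --network=host \
  -v /local_data_dir:/data \
  -e LOADED_PLUGINS='["ARPSCAN","PIHOLE","DHCPLSS","INTRNT"]' \
  ghcr.io/jokob-sk/netalertx:latest
```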
### Notification gateways

Send notifications to more than 80 services, including Telegram via [Apprise](https://hub.docker.com/r/caronc/apprise), or use the native [Pushsafer](https://www.pushsafer.com/), [Pushover](https://www.pushover.net/), or [NTFY](https://ntfy.sh/) publishers.
### Integrations and Plugins

Feed your data and device changes into [Home Assistant](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HOME_ASSISTANT.md), read the [API endpoints](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md), or use [Webhooks](https://github.com/jokob-sk/NetAlertX/blob/main/docs/WEBHOOK_N8N.md) to set up custom automation flows. You can also build your own scanners with the [Plugin system](https://github.com/jokob-sk/NetAlertX/tree/main/docs/PLUGINS.md#readme) in as little as [15 minutes](https://www.youtube.com/watch?v=cdbxlwiWhv8).
### Workflows

## 📚 Documentation

Supported browsers: Chrome, Firefox

- [[Installation] Docker](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_INSTALLATION.md)
- [[Installation] Home Assistant](https://github.com/alexbelgium/hassio-addons/tree/master/netalertx)
- [[Installation] Bare metal](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HW_INSTALL.md)
- [[Installation] Unraid App](https://unraid.net/community/apps)
- [[Setup] Usage and Configuration](https://github.com/jokob-sk/NetAlertX/blob/main/docs/README.md)
- [[Development] API docs](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md)
- [[Development] Custom Plugins](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS_DEV.md)

## 🔐 Security & Privacy

See [Security Best Practices](https://github.com/jokob-sk/NetAlertX/security) for more details.

## ❓ FAQ

**Q: Why don’t I see any devices?**
A: Ensure the container has proper network access (e.g., use `--network host` on Linux). Also check that your scan method is properly configured in the UI.
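
One quick check (a sketch using plain Docker; `netalertx` is the container name used elsewhere in this README):

```bash
# Prints "host" when the container shares the host's network stack,
# which the ARP-based scanners need
docker inspect -f '{{.HostConfig.NetworkMode}}' netalertx
```
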
**Q: Does this work on Wi-Fi-only devices like Raspberry Pi?**
A: Yes, but some scanners (e.g. ARP) work best on Ethernet. For Wi-Fi, try SNMP, DHCP, or Pi-hole import.

**Q: Will this send any data to the internet?**
A: No. All scans and data remain local, unless you set up cloud-based notifications.

**Q: Can I use this without Docker?**
A: Yes! You can install it bare-metal. See the [bare metal installation guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HW_INSTALL.md).

**Q: Where is the data stored?**
A: In the `/data/config` and `/data/db` folders. Back up these folders regularly.
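
A minimal backup sketch, assuming `/data` is mounted from `/local_data_dir` on the host as in the Quick Start above:

```bash
# Archive the config and db folders; run while the container is stopped or idle
tar -czf netalertx-backup-$(date +%F).tar.gz -C /local_data_dir config db
```
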
## 🐞 Known Issues

Check the [GitHub Issues](https://github.com/jokob-sk/NetAlertX/issues) for the latest known issues.

### 📧 Get notified what's new

Get notified about new releases, new functionality you can use, and breaking changes.

![Follow and star][follow_star]

### 🔀 Other Alternative Apps
### 💙 Donations

Thank you to everyone who appreciates this tool and donates.

<details>
<summary>Click for more ways to donate</summary>

<hr>

| [](https://github.com/sponsors/jokob-sk) | [](https://www.buymeacoffee.com/jokobsk) | [](https://www.patreon.com/user?u=84385063) |
| --- | --- | --- |

- Bitcoin: `1N8tupjeCK12qRVU2XrV17WvKK7LCawyZM`
- Ethereum: `0x6e2749Cb42F4411bc98501406BdcD82244e3f9C7`

### 🏗 Contributors

This project would be nothing without the amazing work of the community, with special thanks to:

> [pucherot/Pi.Alert](https://github.com/pucherot/Pi.Alert) (the original creator of PiAlert), [leiweibau](https://github.com/leiweibau/Pi.Alert) (Dark mode and much more), [Macleykun](https://github.com/Macleykun) (help with Dockerfile clean-up), [vladaurosh](https://github.com/vladaurosh) (Alpine re-base help), [Final-Hawk](https://github.com/Final-Hawk) (help with NTFY, styling and other fixes), [TeroRERO](https://github.com/terorero) (Spanish translations), [Data-Monkey](https://github.com/Data-Monkey) (split-up of the python.py file and more), [cvc90](https://github.com/cvc90) (Spanish translation and various UI work), to name a few. Check out all the [amazing contributors](https://github.com/jokob-sk/NetAlertX/graphs/contributors).

### 🌍 Translations

Proudly using [Weblate](https://hosted.weblate.org/projects/pialert/). Help out and suggest languages in the [online portal of Weblate](https://hosted.weblate.org/projects/pialert/core/).

api/.gitignore (vendored, 2 changes)
@@ -1,2 +0,0 @@
-*
-!.gitignore

@@ -24,7 +24,7 @@ LOADED_PLUGINS=['ARPSCAN', 'AVAHISCAN', 'CSVBCKP','DBCLNP', 'DIGSCAN', 'INTRNT',
 
 DAYS_TO_KEEP_EVENTS=90
 # Used for generating links in emails. Make sure not to add a trailing slash!
-REPORT_DASHBOARD_URL='http://127.0.0.1'
+REPORT_DASHBOARD_URL='update_REPORT_DASHBOARD_URL_setting'
 
 # Make sure at least these scanners are enabled for new installs, other defaults are taken from the config.json
 INTRNT_RUN='schedule'
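
The placeholder value above is meant to be replaced per install. One way to do that without editing the config file is the `APP_CONF_OVERRIDE` variable from the Quick Start; a sketch (the URL is only an example for a LAN install):

```bash
# Override REPORT_DASHBOARD_URL so links in notification emails point at your instance
docker run -d --network=host \
  -v /local_data_dir:/data \
  -e APP_CONF_OVERRIDE='{"REPORT_DASHBOARD_URL":"http://192.168.1.10:20211"}' \
  ghcr.io/jokob-sk/netalertx:latest
```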

back/app.sql (new executable file, 411 lines)
@@ -0,0 +1,411 @@
|
||||
CREATE TABLE sqlite_stat1(tbl,idx,stat);
|
||||
CREATE TABLE Events (eve_MAC STRING (50) NOT NULL COLLATE NOCASE, eve_IP STRING (50) NOT NULL COLLATE NOCASE, eve_DateTime DATETIME NOT NULL, eve_EventType STRING (30) NOT NULL COLLATE NOCASE, eve_AdditionalInfo STRING (250) DEFAULT (''), eve_PendingAlertEmail BOOLEAN NOT NULL CHECK (eve_PendingAlertEmail IN (0, 1)) DEFAULT (1), eve_PairEventRowid INTEGER);
|
||||
CREATE TABLE Sessions (ses_MAC STRING (50) COLLATE NOCASE, ses_IP STRING (50) COLLATE NOCASE, ses_EventTypeConnection STRING (30) COLLATE NOCASE, ses_DateTimeConnection DATETIME, ses_EventTypeDisconnection STRING (30) COLLATE NOCASE, ses_DateTimeDisconnection DATETIME, ses_StillConnected BOOLEAN, ses_AdditionalInfo STRING (250));
|
||||
CREATE TABLE IF NOT EXISTS "Online_History" (
|
||||
"Index" INTEGER,
|
||||
"Scan_Date" TEXT,
|
||||
"Online_Devices" INTEGER,
|
||||
"Down_Devices" INTEGER,
|
||||
"All_Devices" INTEGER,
|
||||
"Archived_Devices" INTEGER,
|
||||
"Offline_Devices" INTEGER,
|
||||
PRIMARY KEY("Index" AUTOINCREMENT)
|
||||
);
|
||||
CREATE TABLE sqlite_sequence(name,seq);
|
||||
CREATE TABLE Devices (
|
||||
devMac STRING (50) PRIMARY KEY NOT NULL COLLATE NOCASE,
|
||||
devName STRING (50) NOT NULL DEFAULT "(unknown)",
|
||||
devOwner STRING (30) DEFAULT "(unknown)" NOT NULL,
|
||||
devType STRING (30),
|
||||
devVendor STRING (250),
|
||||
devFavorite BOOLEAN CHECK (devFavorite IN (0, 1)) DEFAULT (0) NOT NULL,
|
||||
devGroup STRING (10),
|
||||
devComments TEXT,
|
||||
devFirstConnection DATETIME NOT NULL,
|
||||
devLastConnection DATETIME NOT NULL,
|
||||
devLastIP STRING (50) NOT NULL COLLATE NOCASE,
|
||||
devStaticIP BOOLEAN DEFAULT (0) NOT NULL CHECK (devStaticIP IN (0, 1)),
|
||||
devScan INTEGER DEFAULT (1) NOT NULL,
|
||||
devLogEvents BOOLEAN NOT NULL DEFAULT (1) CHECK (devLogEvents IN (0, 1)),
|
||||
devAlertEvents BOOLEAN NOT NULL DEFAULT (1) CHECK (devAlertEvents IN (0, 1)),
|
||||
devAlertDown BOOLEAN NOT NULL DEFAULT (0) CHECK (devAlertDown IN (0, 1)),
|
||||
devSkipRepeated INTEGER DEFAULT 0 NOT NULL,
|
||||
devLastNotification DATETIME,
|
||||
devPresentLastScan BOOLEAN NOT NULL DEFAULT (0) CHECK (devPresentLastScan IN (0, 1)),
|
||||
devIsNew BOOLEAN NOT NULL DEFAULT (1) CHECK (devIsNew IN (0, 1)),
|
||||
devLocation STRING (250) COLLATE NOCASE,
|
||||
devIsArchived BOOLEAN NOT NULL DEFAULT (0) CHECK (devIsArchived IN (0, 1)),
|
||||
devParentMAC TEXT,
|
||||
devParentPort INTEGER,
|
||||
devIcon TEXT,
|
||||
devGUID TEXT,
|
||||
devSite TEXT,
|
||||
devSSID TEXT,
|
||||
devSyncHubNode TEXT,
|
||||
devSourcePlugin TEXT
|
||||
, "devCustomProps" TEXT);
|
||||
CREATE TABLE IF NOT EXISTS "Settings" (
|
||||
"setKey" TEXT,
|
||||
"setName" TEXT,
|
||||
"setDescription" TEXT,
|
||||
"setType" TEXT,
|
||||
"setOptions" TEXT,
|
||||
"setGroup" TEXT,
|
||||
"setValue" TEXT,
|
||||
"setEvents" TEXT,
|
||||
"setOverriddenByEnv" INTEGER
|
||||
);
|
||||
CREATE TABLE IF NOT EXISTS "Parameters" (
|
||||
"par_ID" TEXT PRIMARY KEY,
|
||||
"par_Value" TEXT
|
||||
);
|
||||
CREATE TABLE Plugins_Objects(
|
||||
"Index" INTEGER,
|
||||
Plugin TEXT NOT NULL,
|
||||
Object_PrimaryID TEXT NOT NULL,
|
||||
Object_SecondaryID TEXT NOT NULL,
|
||||
DateTimeCreated TEXT NOT NULL,
|
||||
DateTimeChanged TEXT NOT NULL,
|
||||
Watched_Value1 TEXT NOT NULL,
|
||||
Watched_Value2 TEXT NOT NULL,
|
||||
Watched_Value3 TEXT NOT NULL,
|
||||
Watched_Value4 TEXT NOT NULL,
|
||||
Status TEXT NOT NULL,
|
||||
Extra TEXT NOT NULL,
|
||||
UserData TEXT NOT NULL,
|
||||
ForeignKey TEXT NOT NULL,
|
||||
SyncHubNodeName TEXT,
|
||||
"HelpVal1" TEXT,
|
||||
"HelpVal2" TEXT,
|
||||
"HelpVal3" TEXT,
|
||||
"HelpVal4" TEXT,
|
||||
ObjectGUID TEXT,
|
||||
PRIMARY KEY("Index" AUTOINCREMENT)
|
||||
);
|
||||
CREATE TABLE Plugins_Events(
|
||||
"Index" INTEGER,
|
||||
Plugin TEXT NOT NULL,
|
||||
Object_PrimaryID TEXT NOT NULL,
|
||||
Object_SecondaryID TEXT NOT NULL,
|
||||
DateTimeCreated TEXT NOT NULL,
|
||||
DateTimeChanged TEXT NOT NULL,
|
||||
Watched_Value1 TEXT NOT NULL,
|
||||
Watched_Value2 TEXT NOT NULL,
|
||||
Watched_Value3 TEXT NOT NULL,
|
||||
Watched_Value4 TEXT NOT NULL,
|
||||
Status TEXT NOT NULL,
|
||||
Extra TEXT NOT NULL,
|
||||
UserData TEXT NOT NULL,
|
||||
ForeignKey TEXT NOT NULL,
|
||||
SyncHubNodeName TEXT,
|
||||
"HelpVal1" TEXT,
|
||||
"HelpVal2" TEXT,
|
||||
"HelpVal3" TEXT,
|
||||
"HelpVal4" TEXT, "ObjectGUID" TEXT,
|
||||
PRIMARY KEY("Index" AUTOINCREMENT)
|
||||
);
|
||||
CREATE TABLE Plugins_History(
|
||||
"Index" INTEGER,
|
||||
Plugin TEXT NOT NULL,
|
||||
Object_PrimaryID TEXT NOT NULL,
|
||||
Object_SecondaryID TEXT NOT NULL,
|
||||
DateTimeCreated TEXT NOT NULL,
|
||||
DateTimeChanged TEXT NOT NULL,
|
||||
Watched_Value1 TEXT NOT NULL,
|
||||
Watched_Value2 TEXT NOT NULL,
|
||||
Watched_Value3 TEXT NOT NULL,
|
||||
Watched_Value4 TEXT NOT NULL,
|
||||
Status TEXT NOT NULL,
|
||||
Extra TEXT NOT NULL,
|
||||
UserData TEXT NOT NULL,
|
||||
ForeignKey TEXT NOT NULL,
|
||||
SyncHubNodeName TEXT,
|
||||
"HelpVal1" TEXT,
|
||||
"HelpVal2" TEXT,
|
||||
"HelpVal3" TEXT,
|
||||
"HelpVal4" TEXT, "ObjectGUID" TEXT,
|
||||
PRIMARY KEY("Index" AUTOINCREMENT)
|
||||
);
|
||||
CREATE TABLE Plugins_Language_Strings(
|
||||
"Index" INTEGER,
|
||||
Language_Code TEXT NOT NULL,
|
||||
String_Key TEXT NOT NULL,
|
||||
String_Value TEXT NOT NULL,
|
||||
Extra TEXT NOT NULL,
|
||||
PRIMARY KEY("Index" AUTOINCREMENT)
|
||||
);
|
||||
CREATE TABLE CurrentScan (
|
||||
cur_MAC STRING(50) NOT NULL COLLATE NOCASE,
|
||||
cur_IP STRING(50) NOT NULL COLLATE NOCASE,
|
||||
cur_Vendor STRING(250),
|
||||
cur_ScanMethod STRING(10),
|
||||
cur_Name STRING(250),
|
||||
cur_LastQuery STRING(250),
|
||||
cur_DateTime STRING(250),
|
||||
cur_SyncHubNodeName STRING(50),
|
||||
cur_NetworkSite STRING(250),
|
||||
cur_SSID STRING(250),
|
||||
cur_NetworkNodeMAC STRING(250),
|
||||
cur_PORT STRING(250),
|
||||
cur_Type STRING(250),
|
||||
UNIQUE(cur_MAC)
|
||||
);
|
||||
CREATE TABLE IF NOT EXISTS "AppEvents" (
|
||||
"Index" INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
"GUID" TEXT UNIQUE,
|
||||
"AppEventProcessed" BOOLEAN,
|
||||
"DateTimeCreated" TEXT,
|
||||
"ObjectType" TEXT,
|
||||
"ObjectGUID" TEXT,
|
||||
"ObjectPlugin" TEXT,
|
||||
"ObjectPrimaryID" TEXT,
|
||||
"ObjectSecondaryID" TEXT,
|
||||
"ObjectForeignKey" TEXT,
|
||||
"ObjectIndex" TEXT,
|
||||
"ObjectIsNew" BOOLEAN,
|
||||
"ObjectIsArchived" BOOLEAN,
|
||||
"ObjectStatusColumn" TEXT,
|
||||
"ObjectStatus" TEXT,
|
||||
"AppEventType" TEXT,
|
||||
"Helper1" TEXT,
|
||||
"Helper2" TEXT,
|
||||
"Helper3" TEXT,
|
||||
"Extra" TEXT
|
||||
);
|
||||
CREATE TABLE IF NOT EXISTS "Notifications" (
|
||||
"Index" INTEGER,
|
||||
"GUID" TEXT UNIQUE,
|
||||
"DateTimeCreated" TEXT,
|
||||
"DateTimePushed" TEXT,
|
||||
"Status" TEXT,
|
||||
"JSON" TEXT,
|
||||
"Text" TEXT,
|
||||
"HTML" TEXT,
|
||||
"PublishedVia" TEXT,
|
||||
"Extra" TEXT,
|
||||
PRIMARY KEY("Index" AUTOINCREMENT)
|
||||
);
|
||||
CREATE INDEX IDX_eve_DateTime ON Events (eve_DateTime);
|
||||
CREATE INDEX IDX_eve_EventType ON Events (eve_EventType COLLATE NOCASE);
|
||||
CREATE INDEX IDX_eve_MAC ON Events (eve_MAC COLLATE NOCASE);
|
||||
CREATE INDEX IDX_eve_PairEventRowid ON Events (eve_PairEventRowid);
|
||||
CREATE INDEX IDX_ses_EventTypeDisconnection ON Sessions (ses_EventTypeDisconnection COLLATE NOCASE);
|
||||
CREATE INDEX IDX_ses_EventTypeConnection ON Sessions (ses_EventTypeConnection COLLATE NOCASE);
|
||||
CREATE INDEX IDX_ses_DateTimeDisconnection ON Sessions (ses_DateTimeDisconnection);
|
||||
CREATE INDEX IDX_ses_MAC ON Sessions (ses_MAC COLLATE NOCASE);
|
||||
CREATE INDEX IDX_ses_DateTimeConnection ON Sessions (ses_DateTimeConnection);
|
||||
CREATE INDEX IDX_dev_PresentLastScan ON Devices (devPresentLastScan);
|
||||
CREATE INDEX IDX_dev_FirstConnection ON Devices (devFirstConnection);
|
||||
CREATE INDEX IDX_dev_AlertDeviceDown ON Devices (devAlertDown);
|
||||
CREATE INDEX IDX_dev_StaticIP ON Devices (devStaticIP);
|
||||
CREATE INDEX IDX_dev_ScanCycle ON Devices (devScan);
|
||||
CREATE INDEX IDX_dev_Favorite ON Devices (devFavorite);
|
||||
CREATE INDEX IDX_dev_LastIP ON Devices (devLastIP);
|
||||
CREATE INDEX IDX_dev_NewDevice ON Devices (devIsNew);
|
||||
CREATE INDEX IDX_dev_Archived ON Devices (devIsArchived);
|
||||
CREATE VIEW Events_Devices AS
|
||||
SELECT *
|
||||
FROM Events
|
||||
LEFT JOIN Devices ON eve_MAC = devMac
|
||||
/* Events_Devices(eve_MAC,eve_IP,eve_DateTime,eve_EventType,eve_AdditionalInfo,eve_PendingAlertEmail,eve_PairEventRowid,devMac,devName,devOwner,devType,devVendor,devFavorite,devGroup,devComments,devFirstConnection,devLastConnection,devLastIP,devStaticIP,devScan,devLogEvents,devAlertEvents,devAlertDown,devSkipRepeated,devLastNotification,devPresentLastScan,devIsNew,devLocation,devIsArchived,devParentMAC,devParentPort,devIcon,devGUID,devSite,devSSID,devSyncHubNode,devSourcePlugin,devCustomProps) */;
|
||||
CREATE VIEW LatestEventsPerMAC AS
|
||||
WITH RankedEvents AS (
|
||||
SELECT
|
||||
e.*,
|
||||
ROW_NUMBER() OVER (PARTITION BY e.eve_MAC ORDER BY e.eve_DateTime DESC) AS row_num
|
||||
FROM Events AS e
|
||||
)
|
||||
SELECT
|
||||
e.*,
|
||||
d.*,
|
||||
c.*
|
||||
FROM RankedEvents AS e
|
||||
LEFT JOIN Devices AS d ON e.eve_MAC = d.devMac
|
||||
INNER JOIN CurrentScan AS c ON e.eve_MAC = c.cur_MAC
|
||||
WHERE e.row_num = 1
|
||||
/* LatestEventsPerMAC(eve_MAC,eve_IP,eve_DateTime,eve_EventType,eve_AdditionalInfo,eve_PendingAlertEmail,eve_PairEventRowid,row_num,devMac,devName,devOwner,devType,devVendor,devFavorite,devGroup,devComments,devFirstConnection,devLastConnection,devLastIP,devStaticIP,devScan,devLogEvents,devAlertEvents,devAlertDown,devSkipRepeated,devLastNotification,devPresentLastScan,devIsNew,devLocation,devIsArchived,devParentMAC,devParentPort,devIcon,devGUID,devSite,devSSID,devSyncHubNode,devSourcePlugin,devCustomProps,cur_MAC,cur_IP,cur_Vendor,cur_ScanMethod,cur_Name,cur_LastQuery,cur_DateTime,cur_SyncHubNodeName,cur_NetworkSite,cur_SSID,cur_NetworkNodeMAC,cur_PORT,cur_Type) */;
|
||||
CREATE VIEW Sessions_Devices AS SELECT * FROM Sessions LEFT JOIN "Devices" ON ses_MAC = devMac
|
||||
/* Sessions_Devices(ses_MAC,ses_IP,ses_EventTypeConnection,ses_DateTimeConnection,ses_EventTypeDisconnection,ses_DateTimeDisconnection,ses_StillConnected,ses_AdditionalInfo,devMac,devName,devOwner,devType,devVendor,devFavorite,devGroup,devComments,devFirstConnection,devLastConnection,devLastIP,devStaticIP,devScan,devLogEvents,devAlertEvents,devAlertDown,devSkipRepeated,devLastNotification,devPresentLastScan,devIsNew,devLocation,devIsArchived,devParentMAC,devParentPort,devIcon,devGUID,devSite,devSSID,devSyncHubNode,devSourcePlugin,devCustomProps) */;
|
||||
CREATE VIEW Convert_Events_to_Sessions AS SELECT EVE1.eve_MAC,
|
||||
EVE1.eve_IP,
|
||||
EVE1.eve_EventType AS eve_EventTypeConnection,
|
||||
EVE1.eve_DateTime AS eve_DateTimeConnection,
|
||||
CASE WHEN EVE2.eve_EventType IN ('Disconnected', 'Device Down') OR
|
||||
EVE2.eve_EventType IS NULL THEN EVE2.eve_EventType ELSE '<missing event>' END AS eve_EventTypeDisconnection,
|
||||
CASE WHEN EVE2.eve_EventType IN ('Disconnected', 'Device Down') THEN EVE2.eve_DateTime ELSE NULL END AS eve_DateTimeDisconnection,
|
||||
CASE WHEN EVE2.eve_EventType IS NULL THEN 1 ELSE 0 END AS eve_StillConnected,
|
||||
EVE1.eve_AdditionalInfo
|
||||
FROM Events AS EVE1
|
||||
LEFT JOIN
|
||||
Events AS EVE2 ON EVE1.eve_PairEventRowID = EVE2.RowID
|
||||
WHERE EVE1.eve_EventType IN ('New Device', 'Connected','Down Reconnected')
|
||||
UNION
|
||||
SELECT eve_MAC,
|
||||
eve_IP,
|
||||
'<missing event>' AS eve_EventTypeConnection,
|
||||
NULL AS eve_DateTimeConnection,
|
||||
eve_EventType AS eve_EventTypeDisconnection,
|
||||
eve_DateTime AS eve_DateTimeDisconnection,
|
||||
0 AS eve_StillConnected,
|
||||
eve_AdditionalInfo
|
||||
FROM Events AS EVE1
|
||||
WHERE (eve_EventType = 'Device Down' OR
|
||||
eve_EventType = 'Disconnected') AND
|
||||
EVE1.eve_PairEventRowID IS NULL
|
||||
/* Convert_Events_to_Sessions(eve_MAC,eve_IP,eve_EventTypeConnection,eve_DateTimeConnection,eve_EventTypeDisconnection,eve_DateTimeDisconnection,eve_StillConnected,eve_AdditionalInfo) */;
|
||||
CREATE TRIGGER "trg_insert_devices"
|
||||
AFTER INSERT ON "Devices"
|
||||
WHEN NOT EXISTS (
|
||||
SELECT 1 FROM AppEvents
|
||||
WHERE AppEventProcessed = 0
|
||||
AND ObjectType = 'Devices'
|
||||
AND ObjectGUID = NEW.devGUID
|
||||
AND ObjectStatus = CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END
|
||||
AND AppEventType = 'insert'
|
||||
)
|
||||
BEGIN
|
||||
INSERT INTO "AppEvents" (
|
||||
"GUID",
|
||||
"DateTimeCreated",
|
||||
"AppEventProcessed",
|
||||
"ObjectType",
|
||||
"ObjectGUID",
|
||||
"ObjectPrimaryID",
|
||||
"ObjectSecondaryID",
|
||||
"ObjectStatus",
|
||||
"ObjectStatusColumn",
|
||||
"ObjectIsNew",
|
||||
"ObjectIsArchived",
|
||||
"ObjectForeignKey",
|
||||
"ObjectPlugin",
|
||||
"AppEventType"
|
||||
)
|
||||
VALUES (
|
||||
|
||||
lower(
|
||||
hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || '4' ||
|
||||
substr(hex( randomblob(2)), 2) || '-' ||
|
||||
substr('AB89', 1 + (abs(random()) % 4) , 1) ||
|
||||
substr(hex(randomblob(2)), 2) || '-' ||
|
||||
hex(randomblob(6))
|
||||
)
|
||||
,
|
||||
DATETIME('now'),
|
||||
FALSE,
|
||||
'Devices',
|
||||
NEW.devGUID, -- ObjectGUID
|
||||
NEW.devMac, -- ObjectPrimaryID
|
||||
NEW.devLastIP, -- ObjectSecondaryID
|
||||
CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END, -- ObjectStatus
|
||||
'devPresentLastScan', -- ObjectStatusColumn
|
||||
NEW.devIsNew, -- ObjectIsNew
|
||||
NEW.devIsArchived, -- ObjectIsArchived
|
||||
NEW.devGUID, -- ObjectForeignKey
|
||||
'DEVICES', -- ObjectPlugin
|
||||
'insert'
|
||||
);
|
||||
END;
|
||||
CREATE TRIGGER "trg_update_devices"
|
||||
AFTER UPDATE ON "Devices"
|
||||
WHEN NOT EXISTS (
|
||||
SELECT 1 FROM AppEvents
|
||||
WHERE AppEventProcessed = 0
|
||||
AND ObjectType = 'Devices'
|
||||
AND ObjectGUID = NEW.devGUID
|
||||
AND ObjectStatus = CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END
|
||||
AND AppEventType = 'update'
|
||||
)
|
||||
BEGIN
|
||||
INSERT INTO "AppEvents" (
|
||||
"GUID",
|
||||
"DateTimeCreated",
|
||||
"AppEventProcessed",
|
||||
"ObjectType",
|
||||
"ObjectGUID",
|
||||
"ObjectPrimaryID",
|
||||
"ObjectSecondaryID",
|
||||
"ObjectStatus",
|
||||
"ObjectStatusColumn",
|
||||
"ObjectIsNew",
|
||||
"ObjectIsArchived",
|
||||
"ObjectForeignKey",
|
||||
"ObjectPlugin",
|
||||
"AppEventType"
|
||||
)
|
||||
VALUES (
|
||||
|
||||
lower(
|
||||
hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || '4' ||
|
||||
substr(hex( randomblob(2)), 2) || '-' ||
|
||||
substr('AB89', 1 + (abs(random()) % 4) , 1) ||
|
||||
substr(hex(randomblob(2)), 2) || '-' ||
|
||||
hex(randomblob(6))
|
||||
)
|
||||
,
|
||||
DATETIME('now'),
|
||||
FALSE,
|
||||
'Devices',
|
||||
NEW.devGUID, -- ObjectGUID
|
||||
NEW.devMac, -- ObjectPrimaryID
|
||||
NEW.devLastIP, -- ObjectSecondaryID
|
||||
CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END, -- ObjectStatus
|
||||
'devPresentLastScan', -- ObjectStatusColumn
|
||||
NEW.devIsNew, -- ObjectIsNew
|
||||
NEW.devIsArchived, -- ObjectIsArchived
|
||||
NEW.devGUID, -- ObjectForeignKey
|
||||
'DEVICES', -- ObjectPlugin
|
||||
'update'
|
||||
);
|
||||
END;
|
||||
CREATE TRIGGER "trg_delete_devices"
|
||||
AFTER DELETE ON "Devices"
|
||||
WHEN NOT EXISTS (
|
||||
SELECT 1 FROM AppEvents
|
||||
WHERE AppEventProcessed = 0
|
||||
AND ObjectType = 'Devices'
|
||||
AND ObjectGUID = OLD.devGUID
|
||||
AND ObjectStatus = CASE WHEN OLD.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END
|
||||
AND AppEventType = 'delete'
|
||||
)
|
||||
BEGIN
|
||||
INSERT INTO "AppEvents" (
|
||||
"GUID",
|
||||
"DateTimeCreated",
|
||||
"AppEventProcessed",
|
||||
"ObjectType",
|
||||
"ObjectGUID",
|
||||
"ObjectPrimaryID",
|
||||
"ObjectSecondaryID",
|
||||
"ObjectStatus",
|
||||
"ObjectStatusColumn",
|
||||
"ObjectIsNew",
|
||||
"ObjectIsArchived",
|
||||
"ObjectForeignKey",
|
||||
"ObjectPlugin",
|
||||
"AppEventType"
|
||||
)
|
||||
VALUES (
|
||||
|
||||
lower(
|
||||
hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || '4' ||
|
||||
substr(hex( randomblob(2)), 2) || '-' ||
|
||||
substr('AB89', 1 + (abs(random()) % 4) , 1) ||
|
||||
substr(hex(randomblob(2)), 2) || '-' ||
|
||||
hex(randomblob(6))
|
||||
)
|
||||
,
|
||||
DATETIME('now'),
|
||||
FALSE,
|
||||
'Devices',
|
||||
OLD.devGUID, -- ObjectGUID
|
||||
OLD.devMac, -- ObjectPrimaryID
|
||||
OLD.devLastIP, -- ObjectSecondaryID
|
||||
CASE WHEN OLD.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END, -- ObjectStatus
|
||||
'devPresentLastScan', -- ObjectStatusColumn
|
||||
OLD.devIsNew, -- ObjectIsNew
|
||||
OLD.devIsArchived, -- ObjectIsArchived
|
||||
OLD.devGUID, -- ObjectForeignKey
|
||||
'DEVICES', -- ObjectPlugin
|
||||
'delete'
|
||||
);
|
||||
END;
|
||||
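
For reference, two read-only queries against the schema above (a sketch: the `/data/db/app.db` path is an assumption based on the `/data/db` location mentioned in the FAQ, and the file name may differ on your install):

```bash
# List devices seen in the last scan, using the Devices table defined above
sqlite3 /data/db/app.db \
  "SELECT devName, devLastIP, devLastConnection FROM Devices WHERE devPresentLastScan = 1 ORDER BY devLastConnection DESC;"

# Recent events joined to device records via the Events_Devices view
sqlite3 /data/db/app.db \
  "SELECT eve_DateTime, devName, eve_EventType FROM Events_Devices ORDER BY eve_DateTime DESC LIMIT 20;"
```
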
@@ -1,14 +1,17 @@
 #!/bin/bash
 export INSTALL_DIR=/app
 
-LOG_FILE="${INSTALL_DIR}/log/execution_queue.log"
-
 # Check if there are any entries with cron_restart_backend
-if grep -q "cron_restart_backend" "$LOG_FILE"; then
-    # Restart python application using s6
-    s6-svc -r /var/run/s6-rc/servicedirs/netalertx
-    echo 'done'
+if [ -f "${LOG_EXECUTION_QUEUE}" ] && grep -q "cron_restart_backend" "${LOG_EXECUTION_QUEUE}"; then
+    echo "$(date): Restarting backend triggered by cron_restart_backend"
+    killall python3 || echo "killall python3 failed or no process found"
+    sleep 2
+    /services/start-backend.sh &
 
-    # Remove all lines containing cron_restart_backend from the log file
-    sed -i '/cron_restart_backend/d' "$LOG_FILE"
+    # Atomic replacement with temp file. grep returns 1 if no lines selected (file becomes empty), which is valid here.
+    grep -v "cron_restart_backend" "${LOG_EXECUTION_QUEUE}" > "${LOG_EXECUTION_QUEUE}.tmp"
+    RC=$?
+    if [ $RC -eq 0 ] || [ $RC -eq 1 ]; then
+        mv "${LOG_EXECUTION_QUEUE}.tmp" "${LOG_EXECUTION_QUEUE}"
+    fi
 fi

back/device_heuristics_rules.json (new executable file, 200 lines)
@@ -0,0 +1,200 @@
|
||||
[
|
||||
{
|
||||
"dev_type": "Gateway",
|
||||
"icon_html": "<i class=\"fa fa-globe\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "INTERNET", "vendor": "" }
|
||||
],
|
||||
"name_pattern": []
|
||||
},
|
||||
{
|
||||
"dev_type": "Access Point",
|
||||
"icon_html": "<i class=\"fa fa-network-wired\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "74ACB9", "vendor": "Ubiquiti" },
|
||||
{ "mac_prefix": "002468", "vendor": "Cisco" },
|
||||
{ "mac_prefix": "F4F5D8", "vendor": "TP-Link" },
|
||||
{ "mac_prefix": "F88E85", "vendor": "Netgear" }
|
||||
],
|
||||
"name_pattern": ["router", "gateway", "ap", "access point", "access-point", "switch"]
|
||||
},
|
||||
{
|
||||
"dev_type": "Phone",
|
||||
"icon_html": "<i class=\"fa-brands fa-apple\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "001A79", "vendor": "Apple" },
|
||||
{ "mac_prefix": "B0BE83", "vendor": "Samsung" },
|
||||
{ "mac_prefix": "BC926B", "vendor": "Motorola" }
|
||||
],
|
||||
"name_pattern": ["iphone", "ipad", "pixel", "galaxy", "redmi"]
|
||||
},
|
||||
{
|
||||
"dev_type": "Phone",
|
||||
"icon_html": "<i class=\"fa-solid fa-mobile\"></i>",
|
||||
"matching_pattern": [
|
||||
],
|
||||
"name_pattern": ["android","samsung"]
|
||||
},
|
||||
{
|
||||
"dev_type": "Tablet",
|
||||
"icon_html": "<i class=\"fa fa-tablet\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "001B63", "vendor": "Apple" },
|
||||
{ "mac_prefix": "BC4C4C", "vendor": "Samsung" }
|
||||
],
|
||||
"name_pattern": ["tablet", "pad"]
|
||||
},
|
||||
{
|
||||
"dev_type": "IoT",
|
||||
"icon_html": "<i class=\"fa-brands fa-raspberry-pi\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "B827EB", "vendor": "Raspberry Pi" },
|
||||
{ "mac_prefix": "DCA632", "vendor": "Raspberry Pi" }
|
||||
],
|
||||
"name_pattern": ["raspberry", "pi"]
|
||||
},
|
||||
{
|
||||
"dev_type": "IoT",
|
||||
"icon_html": "<i class=\"fa-solid fa-microchip\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "840D8E", "vendor": "Espressif" },
|
||||
{ "mac_prefix": "ECFABC", "vendor": "Espressif" },
|
||||
{ "mac_prefix": "7C9EBD", "vendor": "Espressif" }
|
||||
],
|
||||
"name_pattern": ["raspberry", "pi"]
|
||||
},
|
||||
{
|
||||
"dev_type": "Desktop",
|
||||
"icon_html": "<i class=\"fa fa-desktop\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "001422", "vendor": "Dell" },
|
||||
{ "mac_prefix": "001874", "vendor": "Lenovo" },
|
||||
{ "mac_prefix": "00E04C", "vendor": "Hewlett Packard" }
|
||||
],
|
||||
"name_pattern": ["desktop", "pc", "computer"]
|
||||
},
|
||||
{
|
||||
"dev_type": "Laptop",
|
||||
"icon_html": "<i class=\"fa fa-laptop\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "3C0754", "vendor": "HP" },
|
||||
{ "mac_prefix": "0017A4", "vendor": "Dell" },
|
||||
{ "mac_prefix": "F4CE46", "vendor": "Lenovo" },
|
||||
{ "mac_prefix": "409F38", "vendor": "Acer" }
|
||||
],
|
||||
"name_pattern": ["macbook", "imac", "laptop", "notebook"]
|
||||
},
|
||||
{
|
||||
"dev_type": "Server",
|
||||
"icon_html": "<i class=\"fa fa-server\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "001CBF", "vendor": "Supermicro" },
|
||||
{ "mac_prefix": "002186", "vendor": "Dell" },
|
||||
{ "mac_prefix": "D02788", "vendor": "Hewlett Packard" },
|
||||
{ "mac_prefix": "002590", "vendor": "IBM" }
|
||||
],
|
||||
"name_pattern": ["server", "nas"]
|
||||
},
|
||||
{
|
||||
"dev_type": "VM",
|
||||
"icon_html": "<i class=\"fa fa-server\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "525400", "vendor": "QEMU" },
|
||||
{ "mac_prefix": "005056", "vendor": "VMware" },
|
||||
{ "mac_prefix": "000C29", "vendor": "VMware" },
|
||||
{ "mac_prefix": "000569", "vendor": "VMware" },
|
||||
{ "mac_prefix": "00163E", "vendor": "Xen" },
|
||||
{ "mac_prefix": "080027", "vendor": "VirtualBox" }
|
||||
]
|
||||
},
|
||||
{
|
||||
"dev_type": "TV",
|
||||
"icon_html": "<i class=\"fa fa-tv\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "0013CE", "vendor": "Samsung" },
|
||||
{ "mac_prefix": "0017C8", "vendor": "LG" },
|
||||
{ "mac_prefix": "D46E0E", "vendor": "Sony" }
|
||||
],
|
||||
"name_pattern": ["tv", "television", "smarttv"]
|
||||
},
|
||||
{
|
||||
"dev_type": "Gaming Console",
|
||||
"icon_html": "<i class=\"fa fa-gamepad\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "001FA7", "vendor": "Sony" },
|
||||
{ "mac_prefix": "7C04D0", "vendor": "Nintendo" },
|
||||
{ "mac_prefix": "EC26CA", "vendor": "Sony" }
|
||||
],
|
||||
"name_pattern": ["playstation", "xbox"]
|
||||
},
|
||||
{
|
||||
"dev_type": "Camera",
|
||||
"icon_html": "<i class=\"fa fa-camera\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "A45E60", "vendor": "Hikvision" },
|
||||
{ "mac_prefix": "00408C", "vendor": "Axis" },
|
||||
{ "mac_prefix": "00156D", "vendor": "Amcrest" },
|
||||
{ "mac_prefix": "AC9E17", "vendor": "Reolink" }
|
||||
],
|
||||
"name_pattern": ["camera", "cam", "webcam"]
|
||||
},
|
||||
{
|
||||
"dev_type": "Smart Speaker",
|
||||
"icon_html": "<i class=\"fa fa-volume-up\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "44650D", "vendor": "Amazon" },
|
||||
{ "mac_prefix": "74ACB9", "vendor": "Google" }
|
||||
],
|
||||
"name_pattern": ["echo", "alexa", "dot"]
|
||||
},
|
||||
{
|
||||
"dev_type": "Router",
|
||||
"icon_html": "<i class=\"fa fa-random\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "000C29", "vendor": "Cisco" },
|
||||
{ "mac_prefix": "00155D", "vendor": "MikroTik" }
|
||||
],
|
||||
"name_pattern": ["router", "gateway", "ap", "access point", "access-point"],
|
||||
"ip_pattern": [
|
||||
"^192\\.168\\.[0-1]\\.1$",
|
||||
"^10\\.0\\.0\\.1$"
|
||||
]
|
||||
},
|
||||
{
|
||||
"dev_type": "Smart Light",
|
||||
"icon_html": "<i class=\"fa fa-lightbulb\"></i>",
|
||||
"matching_pattern": [],
|
||||
"name_pattern": ["hue", "lifx", "bulb"]
|
||||
},
|
||||
{
|
||||
"dev_type": "Smart Home",
|
||||
"icon_html": "<i class=\"fa fa-house\"></i>",
|
||||
"matching_pattern": [],
|
||||
"name_pattern": ["google", "chromecast", "nest"]
|
||||
},
|
||||
{
|
||||
"dev_type": "Smartwatch",
|
||||
"icon_html": "<i class=\"fa fa-watch\"></i>",
|
||||
"matching_pattern": [],
|
||||
"name_pattern": ["watch", "wear"]
|
||||
},
|
||||
{
|
||||
"dev_type": "Printer",
|
||||
"icon_html": "<i class=\"fa fa-print\"></i>",
|
||||
"matching_pattern": [],
|
||||
"name_pattern": ["printer", "print"]
|
||||
},
|
||||
{
|
||||
"dev_type": "Security Device",
|
||||
"icon_html": "<i class=\"fa fa-shield-alt\"></i>",
|
||||
"matching_pattern": [],
|
||||
"name_pattern": ["doorbell", "lock", "security"]
|
||||
},
|
||||
{
|
||||
"dev_type": "Smart Light",
|
||||
"icon_html": "<i class=\"fa-solid fa-lightbulb\"></i>",
|
||||
"matching_pattern": [
|
||||
],
|
||||
"name_pattern": ["light","bulb"]
|
||||
}
|
||||
]
|
||||

back/ieee-oui.txt (new executable file, 111367 lines): file diff suppressed because it is too large

back/speedtest-cli (2013 lines): file diff suppressed because it is too large

db/.gitignore (vendored, 2 changes)
@@ -1,2 +0,0 @@
-*
-!.gitignore

@@ -1,79 +1,75 @@
|
||||
services:
|
||||
netalertx:
|
||||
privileged: true
|
||||
#use an environmental variable to set host networking mode if needed
|
||||
network_mode: ${NETALERTX_NETWORK_MODE:-host} # Use host networking for ARP scanning and other services
|
||||
build:
|
||||
dockerfile: Dockerfile
|
||||
context: .
|
||||
cache_from:
|
||||
- type=registry,ref=docker.io/jokob-sk/netalertx:buildcache
|
||||
container_name: netalertx
|
||||
network_mode: host
|
||||
# restart: unless-stopped
|
||||
context: . # Build context is the current directory
|
||||
dockerfile: Dockerfile # Specify the Dockerfile to use
|
||||
image: netalertx:latest
|
||||
container_name: netalertx # The name shown when you run docker container ls
|
||||
read_only: true # Make the container filesystem read-only
|
||||
cap_drop: # Drop all capabilities for enhanced security
|
||||
- ALL
|
||||
cap_add: # Add only the necessary capabilities
|
||||
- NET_ADMIN # Required for ARP scanning
|
||||
- NET_RAW # Required for raw socket operations
|
||||
- NET_BIND_SERVICE # Required to bind to privileged ports (nbtscan)
|
||||
|
||||
volumes:
|
||||
# - ${APP_DATA_LOCATION}/netalertx_dev/config:/app/config
|
||||
- ${APP_DATA_LOCATION}/netalertx/config:/app/config
|
||||
# - ${APP_DATA_LOCATION}/netalertx_dev/db:/app/db
|
||||
- ${APP_DATA_LOCATION}/netalertx/db:/app/db
|
||||
# (optional) useful for debugging if you have issues setting up the container
|
||||
- ${APP_DATA_LOCATION}/netalertx/log:/app/log
|
||||
# (API: OPTION 1) use for performance
|
||||
- type: tmpfs
|
||||
target: /app/api
|
||||
# (API: OPTION 2) use when debugging issues
|
||||
# - ${DEV_LOCATION}/api:/app/api
|
||||
# ---------------------------------------------------------------------------
|
||||
# DELETE START anyone trying to use this file: comment out / delete BELOW lines, they are only for development purposes
|
||||
- ${APP_DATA_LOCATION}/netalertx/dhcp_samples/dhcp1.leases:/mnt/dhcp1.leases # test data for DCPLSS plugin
|
||||
- ${APP_DATA_LOCATION}/netalertx/dhcp_samples/dhcp2.leases:/mnt/dhcp2.leases # test data for DCPLSS plugin
|
||||
- ${APP_DATA_LOCATION}/netalertx/dhcp_samples/pihole_dhcp_full.leases:/etc/pihole/dhcp.leases # test data for DCPLSS plugin
|
||||
- ${APP_DATA_LOCATION}/netalertx/dhcp_samples/pihole_dhcp_2.leases:/etc/pihole/dhcp2.leases # test data for DCPLSS plugin
|
||||
- ${APP_DATA_LOCATION}/pihole/etc-pihole/pihole-FTL.db:/etc/pihole/pihole-FTL.db # test data for PIHOLE plugin
|
||||
- ${DEV_LOCATION}/mkdocs.yml:/app/mkdocs.yml
|
||||
- ${DEV_LOCATION}/docs:/app/docs
|
||||
- ${DEV_LOCATION}/server:/app/server
|
||||
- ${DEV_LOCATION}/test:/app/test
|
||||
- ${DEV_LOCATION}/dockerfiles:/app/dockerfiles
|
||||
# - ${APP_DATA_LOCATION}/netalertx/php.ini:/etc/php/8.2/fpm/php.ini
|
||||
- ${DEV_LOCATION}/install:/app/install
|
||||
- ${DEV_LOCATION}/front/css:/app/front/css
|
||||
- ${DEV_LOCATION}/front/img:/app/front/img
|
||||
- ${DEV_LOCATION}/back/update_vendors.sh:/app/back/update_vendors.sh
|
||||
- ${DEV_LOCATION}/front/lib:/app/front/lib
|
||||
- ${DEV_LOCATION}/front/js:/app/front/js
|
||||
- ${DEV_LOCATION}/front/php:/app/front/php
|
||||
- ${DEV_LOCATION}/front/deviceDetails.php:/app/front/deviceDetails.php
|
||||
- ${DEV_LOCATION}/front/deviceDetailsEdit.php:/app/front/deviceDetailsEdit.php
|
||||
- ${DEV_LOCATION}/front/userNotifications.php:/app/front/userNotifications.php
|
||||
- ${DEV_LOCATION}/front/deviceDetailsTools.php:/app/front/deviceDetailsTools.php
|
||||
- ${DEV_LOCATION}/front/deviceDetailsPresence.php:/app/front/deviceDetailsPresence.php
|
||||
- ${DEV_LOCATION}/front/deviceDetailsSessions.php:/app/front/deviceDetailsSessions.php
|
||||
- ${DEV_LOCATION}/front/deviceDetailsEvents.php:/app/front/deviceDetailsEvents.php
|
||||
- ${DEV_LOCATION}/front/devices.php:/app/front/devices.php
|
||||
- ${DEV_LOCATION}/front/events.php:/app/front/events.php
|
||||
- ${DEV_LOCATION}/front/plugins.php:/app/front/plugins.php
|
||||
- ${DEV_LOCATION}/front/pluginsCore.php:/app/front/pluginsCore.php
|
||||
- ${DEV_LOCATION}/front/index.php:/app/front/index.php
|
||||
- ${DEV_LOCATION}/front/initCheck.php:/app/front/initCheck.php
|
||||
- ${DEV_LOCATION}/front/maintenance.php:/app/front/maintenance.php
|
||||
- ${DEV_LOCATION}/front/network.php:/app/front/network.php
|
||||
- ${DEV_LOCATION}/front/presence.php:/app/front/presence.php
|
||||
- ${DEV_LOCATION}/front/settings.php:/app/front/settings.php
|
||||
- ${DEV_LOCATION}/front/systeminfo.php:/app/front/systeminfo.php
|
||||
- ${DEV_LOCATION}/front/cloud_services.php:/app/front/cloud_services.php
|
||||
- ${DEV_LOCATION}/front/report.php:/app/front/report.php
|
||||
- ${DEV_LOCATION}/front/workflows.php:/app/front/workflows.php
|
||||
- ${DEV_LOCATION}/front/workflowsCore.php:/app/front/workflowsCore.php
|
||||
- ${DEV_LOCATION}/front/appEvents.php:/app/front/appEvents.php
|
||||
- ${DEV_LOCATION}/front/appEventsCore.php:/app/front/appEventsCore.php
|
||||
- ${DEV_LOCATION}/front/multiEditCore.php:/app/front/multiEditCore.php
|
||||
- ${DEV_LOCATION}/front/plugins:/app/front/plugins
|
||||
# DELETE END anyone trying to use this file: comment out / delete ABOVE lines, they are only for development purposes
|
||||
# ---------------------------------------------------------------------------
|
||||
environment:
|
||||
# - APP_CONF_OVERRIDE={"SCAN_SUBNETS":"['192.168.1.0/24 --interface=eth1']","GRAPHQL_PORT":"20223","UI_theme":"Light"}
|
||||
- TZ=${TZ}
|
||||
- PORT=${PORT}
|
||||
# ❗ DANGER ZONE BELOW - Setting ALWAYS_FRESH_INSTALL=true will delete the content of the /db & /config folders
|
||||
- ALWAYS_FRESH_INSTALL=${ALWAYS_FRESH_INSTALL}
|
||||
# - LOADED_PLUGINS=["DHCPLSS","PIHOLE","ASUSWRT","FREEBOX"]
|
||||
|
||||
|
||||
- type: volume # Persistent Docker-managed Named Volume for storage
|
||||
source: netalertx_data # the default name of the volume is netalertx_data
|
||||
target: /data # consolidated configuration and database storage
|
||||
read_only: false # writable volume
|
||||
|
||||
# Example custom local folder called /home/user/netalertx_data
|
||||
# - type: bind
|
||||
# source: /home/user/netalertx_data
|
||||
# target: /data
|
||||
# read_only: false
|
||||
# ... or use the alternative format
|
||||
# - /home/user/netalertx_data:/data:rw
|
||||
|
||||
- type: bind # Bind mount for timezone consistency
|
||||
source: /etc/localtime
|
||||
target: /etc/localtime
|
||||
read_only: true
|
||||
|
||||
# Use a custom Enterprise-configured nginx config for ldap or other settings
|
||||
# - /custom-enterprise.conf:/tmp/nginx/active-config/netalertx.conf:ro
|
||||
|
||||
# Test your plugin on the production container
|
||||
# - /path/on/host:/app/front/plugins/custom
|
||||
|
||||
# Retain logs - comment out tmpfs /tmp/log if you want to retain logs between container restarts
|
||||
# - /path/on/host/log:/tmp/log
|
||||
|
||||
# tmpfs mounts provide writable directories in a read-only container and improve system performance
|
||||
# All writes now live under /tmp/* subdirectories which are created dynamically by entrypoint.d scripts
|
||||
# uid=20211 and gid=20211 is the netalertx user inside the container
|
||||
# mode=1700 gives rwx------ permissions to the netalertx user only
|
||||
tmpfs:
|
||||
- "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
|
||||
environment:
|
||||
LISTEN_ADDR: ${LISTEN_ADDR:-0.0.0.0} # Listen for connections on all interfaces
|
||||
PORT: ${PORT:-20211} # Application port
|
||||
GRAPHQL_PORT: ${GRAPHQL_PORT:-20212} # GraphQL API port
|
||||
ALWAYS_FRESH_INSTALL: ${ALWAYS_FRESH_INSTALL:-false} # Set to true to reset your config and database on each container start
|
||||
NETALERTX_DEBUG: ${NETALERTX_DEBUG:-0} # 0 = kill all services and restart if any of them dies; 1 = keep the container running even when a service dies
|
||||
|
||||
# Resource limits to prevent resource exhaustion
|
||||
mem_limit: 2048m # Maximum memory usage
|
||||
mem_reservation: 1024m # Soft memory limit
|
||||
cpu_shares: 512 # Relative CPU weight for CPU contention scenarios
|
||||
pids_limit: 512 # Limit the number of processes/threads to prevent fork bombs
|
||||
logging:
|
||||
driver: "json-file" # Use JSON file logging driver
|
||||
options:
|
||||
max-size: "10m" # Rotate log files after they reach 10MB
|
||||
max-file: "3" # Keep a maximum of 3 log files
|
||||
|
||||
# Always restart the container unless explicitly stopped
|
||||
restart: unless-stopped
|
||||
|
||||
volumes: # Persistent volume for configuration and database storage
|
||||
netalertx_data:
|
||||
|
||||
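
The environment block above declares defaults for the listening ports, so they can be overridden per launch without editing the file; a small sketch:

```bash
# Override the UI and GraphQL API ports declared with defaults in the compose file above
PORT=20212 GRAPHQL_PORT=20213 docker compose up -d
```
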

docker_build.log (new executable file, 534 lines)
@@ -0,0 +1,534 @@
|
||||
#0 building with "default" instance using docker driver
|
||||
|
||||
#1 [internal] load build definition from Dockerfile
|
||||
#1 transferring dockerfile: 5.29kB done
|
||||
#1 DONE 0.1s
|
||||
|
||||
#2 [auth] library/alpine:pull token for registry-1.docker.io
|
||||
#2 DONE 0.0s
|
||||
|
||||
#3 [internal] load metadata for docker.io/library/alpine:3.22
|
||||
#3 DONE 0.4s
|
||||
|
||||
#4 [internal] load .dockerignore
|
||||
#4 transferring context: 216B done
|
||||
#4 DONE 0.1s
|
||||
|
||||
#5 [builder 1/15] FROM docker.io/library/alpine:3.22@sha256:4bcff63911fcb4448bd4fdacec207030997caf25e9bea4045fa6c8c44de311d1
|
||||
#5 CACHED
|
||||
|
||||
#6 [internal] load build context
|
||||
#6 transferring context: 36.76kB 0.0s done
|
||||
#6 DONE 0.1s
|
||||
|
||||
#7 [builder 2/15] RUN apk add --no-cache bash shadow python3 python3-dev gcc musl-dev libffi-dev openssl-dev git && python -m venv /opt/venv
|
||||
#7 0.443 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/main/x86_64/APKINDEX.tar.gz
|
||||
#7 0.688 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/community/x86_64/APKINDEX.tar.gz
|
||||
#7 1.107 (1/52) Upgrading libcrypto3 (3.5.1-r0 -> 3.5.3-r0)
|
||||
#7 1.358 (2/52) Upgrading libssl3 (3.5.1-r0 -> 3.5.3-r0)
|
||||
#7 1.400 (3/52) Installing ncurses-terminfo-base (6.5_p20250503-r0)
|
||||
#7 1.413 (4/52) Installing libncursesw (6.5_p20250503-r0)
|
||||
#7 1.444 (5/52) Installing readline (8.2.13-r1)
|
||||
#7 1.471 (6/52) Installing bash (5.2.37-r0)
|
||||
#7 1.570 Executing bash-5.2.37-r0.post-install
|
||||
#7 1.593 (7/52) Installing libgcc (14.2.0-r6)
|
||||
#7 1.605 (8/52) Installing jansson (2.14.1-r0)
|
||||
#7 1.613 (9/52) Installing libstdc++ (14.2.0-r6)
|
||||
#7 1.705 (10/52) Installing zstd-libs (1.5.7-r0)
|
||||
#7 1.751 (11/52) Installing binutils (2.44-r3)
|
||||
#7 2.041 (12/52) Installing libgomp (14.2.0-r6)
|
||||
#7 2.064 (13/52) Installing libatomic (14.2.0-r6)
|
||||
#7 2.071 (14/52) Installing gmp (6.3.0-r3)
|
||||
#7 2.097 (15/52) Installing isl26 (0.26-r1)
|
||||
#7 2.183 (16/52) Installing mpfr4 (4.2.1_p1-r0)
|
||||
#7 2.219 (17/52) Installing mpc1 (1.3.1-r1)
|
||||
#7 2.231 (18/52) Installing gcc (14.2.0-r6)
|
||||
#7 6.782 (19/52) Installing brotli-libs (1.1.0-r2)
|
||||
#7 6.828 (20/52) Installing c-ares (1.34.5-r0)
|
||||
#7 6.846 (21/52) Installing libunistring (1.3-r0)
|
||||
#7 6.919 (22/52) Installing libidn2 (2.3.7-r0)
|
||||
#7 6.937 (23/52) Installing nghttp2-libs (1.65.0-r0)
|
||||
#7 6.950 (24/52) Installing libpsl (0.21.5-r3)
|
||||
#7 6.960 (25/52) Installing libcurl (8.14.1-r1)
|
||||
#7 7.015 (26/52) Installing libexpat (2.7.2-r0)
|
||||
#7 7.029 (27/52) Installing pcre2 (10.43-r1)
|
||||
#7 7.069 (28/52) Installing git (2.49.1-r0)
|
||||
#7 7.397 (29/52) Installing git-init-template (2.49.1-r0)
|
||||
#7 7.404 (30/52) Installing linux-headers (6.14.2-r0)
|
||||
#7 7.572 (31/52) Installing libffi (3.4.8-r0)
|
||||
#7 7.578 (32/52) Installing pkgconf (2.4.3-r0)
|
||||
#7 7.593 (33/52) Installing libffi-dev (3.4.8-r0)
|
||||
#7 7.607 (34/52) Installing musl-dev (1.2.5-r10)
|
||||
#7 7.961 (35/52) Installing openssl-dev (3.5.3-r0)
|
||||
#7 8.021 (36/52) Installing libbz2 (1.0.8-r6)
|
||||
#7 8.045 (37/52) Installing gdbm (1.24-r0)
|
||||
#7 8.055 (38/52) Installing xz-libs (5.8.1-r0)
|
||||
#7 8.071 (39/52) Installing mpdecimal (4.0.1-r0)
|
||||
#7 8.090 (40/52) Installing libpanelw (6.5_p20250503-r0)
|
||||
#7 8.098 (41/52) Installing sqlite-libs (3.49.2-r1)
|
||||
#7 8.185 (42/52) Installing python3 (3.12.11-r0)
|
||||
#7 8.904 (43/52) Installing python3-pycache-pyc0 (3.12.11-r0)
|
||||
#7 9.292 (44/52) Installing pyc (3.12.11-r0)
|
||||
#7 9.292 (45/52) Installing python3-pyc (3.12.11-r0)
|
||||
#7 9.292 (46/52) Installing python3-dev (3.12.11-r0)
|
||||
#7 10.71 (47/52) Installing libmd (1.1.0-r0)
|
||||
#7 10.72 (48/52) Installing libbsd (0.12.2-r0)
|
||||
#7 10.73 (49/52) Installing skalibs-libs (2.14.4.0-r0)
|
||||
#7 10.75 (50/52) Installing utmps-libs (0.1.3.1-r0)
|
||||
#7 10.76 (51/52) Installing linux-pam (1.7.0-r4)
|
||||
#7 10.82 (52/52) Installing shadow (4.17.3-r0)
|
||||
#7 10.88 Executing busybox-1.37.0-r18.trigger
|
||||
#7 10.90 OK: 274 MiB in 66 packages
|
||||
#7 DONE 14.4s
|
||||
|
||||
#8 [builder 3/15] RUN mkdir -p /app
|
||||
#8 DONE 0.5s
|
||||
|
||||
#9 [builder 4/15] COPY api /app/api
|
||||
#9 DONE 0.3s
|
||||
|
||||
#10 [builder 5/15] COPY back /app/back
|
||||
#10 DONE 0.3s
|
||||
|
||||
#11 [builder 6/15] COPY config /app/config
|
||||
#11 DONE 0.3s
|
||||
|
||||
#12 [builder 7/15] COPY db /app/db
|
||||
#12 DONE 0.3s
|
||||
|
||||
#13 [builder 8/15] COPY dockerfiles /app/dockerfiles
|
||||
#13 DONE 0.3s
|
||||
|
||||
#14 [builder 9/15] COPY front /app/front
|
||||
#14 DONE 0.4s
|
||||
|
||||
#15 [builder 10/15] COPY server /app/server
|
||||
#15 DONE 0.3s
|
||||
|
||||
#16 [builder 11/15] COPY install/crontab /etc/crontabs/root
|
||||
#16 DONE 0.3s
|
||||
|
||||
#17 [builder 12/15] COPY dockerfiles/start* /start*.sh
|
||||
#17 DONE 0.3s
|
||||
|
||||
#18 [builder 13/15] RUN pip install openwrt-luci-rpc asusrouter asyncio aiohttp graphene flask flask-cors unifi-sm-api tplink-omada-client wakeonlan pycryptodome requests paho-mqtt scapy cron-converter pytz json2table dhcp-leases pyunifi speedtest-cli chardet python-nmap dnspython librouteros yattag git+https://github.com/foreign-sub/aiofreepybox.git
|
||||
#18 0.737 Collecting git+https://github.com/foreign-sub/aiofreepybox.git
|
||||
#18 0.737 Cloning https://github.com/foreign-sub/aiofreepybox.git to /tmp/pip-req-build-waf5_npl
|
||||
#18 0.738 Running command git clone --filter=blob:none --quiet https://github.com/foreign-sub/aiofreepybox.git /tmp/pip-req-build-waf5_npl
|
||||
#18 1.617 Resolved https://github.com/foreign-sub/aiofreepybox.git to commit 4ee18ea0f3e76edc839c48eb8df1da59c1baee3d
|
||||
#18 1.620 Installing build dependencies: started
|
||||
#18 3.337 Installing build dependencies: finished with status 'done'
|
||||
#18 3.337 Getting requirements to build wheel: started
|
||||
#18 3.491 Getting requirements to build wheel: finished with status 'done'
|
||||
#18 3.492 Preparing metadata (pyproject.toml): started
|
||||
#18 3.650 Preparing metadata (pyproject.toml): finished with status 'done'
|
||||
#18 3.724 Collecting openwrt-luci-rpc
|
||||
#18 3.753 Downloading openwrt_luci_rpc-1.1.17-py2.py3-none-any.whl.metadata (4.9 kB)
|
||||
#18 3.892 Collecting asusrouter
|
||||
#18 3.900 Downloading asusrouter-1.21.0-py3-none-any.whl.metadata (33 kB)
|
||||
#18 3.999 Collecting asyncio
|
||||
#18 4.007 Downloading asyncio-4.0.0-py3-none-any.whl.metadata (994 bytes)
|
||||
#18 4.576 Collecting aiohttp
|
||||
#18 4.582 Downloading aiohttp-3.12.15-cp312-cp312-musllinux_1_2_x86_64.whl.metadata (7.7 kB)
|
||||
#18 4.729 Collecting graphene
|
||||
#18 4.735 Downloading graphene-3.4.3-py2.py3-none-any.whl.metadata (6.9 kB)
|
||||
#18 4.858 Collecting flask
|
||||
#18 4.866 Downloading flask-3.1.2-py3-none-any.whl.metadata (3.2 kB)
|
||||
#18 4.963 Collecting flask-cors
|
||||
#18 4.972 Downloading flask_cors-6.0.1-py3-none-any.whl.metadata (5.3 kB)
|
||||
#18 5.055 Collecting unifi-sm-api
|
||||
#18 5.065 Downloading unifi_sm_api-0.2.1-py3-none-any.whl.metadata (2.3 kB)
|
||||
#18 5.155 Collecting tplink-omada-client
|
||||
#18 5.166 Downloading tplink_omada_client-1.4.4-py3-none-any.whl.metadata (3.5 kB)
|
||||
#18 5.262 Collecting wakeonlan
|
||||
#18 5.274 Downloading wakeonlan-3.1.0-py3-none-any.whl.metadata (4.3 kB)
|
||||
#18 5.500 Collecting pycryptodome
|
||||
#18 5.505 Downloading pycryptodome-3.23.0-cp37-abi3-musllinux_1_2_x86_64.whl.metadata (3.4 kB)
#18 5.653 Collecting requests
#18 5.660 Downloading requests-2.32.5-py3-none-any.whl.metadata (4.9 kB)
#18 5.764 Collecting paho-mqtt
#18 5.775 Downloading paho_mqtt-2.1.0-py3-none-any.whl.metadata (23 kB)
#18 5.890 Collecting scapy
#18 5.902 Downloading scapy-2.6.1-py3-none-any.whl.metadata (5.6 kB)
#18 6.002 Collecting cron-converter
#18 6.013 Downloading cron_converter-1.2.2-py3-none-any.whl.metadata (8.1 kB)
#18 6.187 Collecting pytz
#18 6.193 Downloading pytz-2025.2-py2.py3-none-any.whl.metadata (22 kB)
#18 6.285 Collecting json2table
#18 6.294 Downloading json2table-1.1.5-py2.py3-none-any.whl.metadata (6.0 kB)
#18 6.381 Collecting dhcp-leases
#18 6.387 Downloading dhcp_leases-0.1.6-py3-none-any.whl.metadata (5.9 kB)
#18 6.461 Collecting pyunifi
#18 6.471 Downloading pyunifi-2.21-py3-none-any.whl.metadata (274 bytes)
#18 6.582 Collecting speedtest-cli
#18 6.596 Downloading speedtest_cli-2.1.3-py2.py3-none-any.whl.metadata (6.8 kB)
#18 6.767 Collecting chardet
#18 6.780 Downloading chardet-5.2.0-py3-none-any.whl.metadata (3.4 kB)
#18 6.878 Collecting python-nmap
#18 6.886 Downloading python-nmap-0.7.1.tar.gz (44 kB)
#18 6.937 Installing build dependencies: started
#18 8.245 Installing build dependencies: finished with status 'done'
#18 8.246 Getting requirements to build wheel: started
#18 8.411 Getting requirements to build wheel: finished with status 'done'
#18 8.412 Preparing metadata (pyproject.toml): started
#18 8.575 Preparing metadata (pyproject.toml): finished with status 'done'
#18 8.648 Collecting dnspython
#18 8.654 Downloading dnspython-2.8.0-py3-none-any.whl.metadata (5.7 kB)
#18 8.741 Collecting librouteros
#18 8.752 Downloading librouteros-3.4.1-py3-none-any.whl.metadata (1.6 kB)
#18 8.869 Collecting yattag
#18 8.881 Downloading yattag-1.16.1.tar.gz (29 kB)
#18 8.925 Installing build dependencies: started
#18 10.23 Installing build dependencies: finished with status 'done'
#18 10.23 Getting requirements to build wheel: started
#18 10.38 Getting requirements to build wheel: finished with status 'done'
#18 10.39 Preparing metadata (pyproject.toml): started
#18 10.55 Preparing metadata (pyproject.toml): finished with status 'done'
#18 10.60 Collecting Click>=6.0 (from openwrt-luci-rpc)
#18 10.60 Downloading click-8.3.0-py3-none-any.whl.metadata (2.6 kB)
#18 10.70 Collecting packaging>=19.1 (from openwrt-luci-rpc)
#18 10.71 Downloading packaging-25.0-py3-none-any.whl.metadata (3.3 kB)
#18 10.87 Collecting urllib3>=1.26.14 (from asusrouter)
#18 10.88 Downloading urllib3-2.5.0-py3-none-any.whl.metadata (6.5 kB)
#18 10.98 Collecting xmltodict>=0.12.0 (from asusrouter)
#18 10.98 Downloading xmltodict-1.0.2-py3-none-any.whl.metadata (15 kB)
#18 11.09 Collecting aiohappyeyeballs>=2.5.0 (from aiohttp)
#18 11.10 Downloading aiohappyeyeballs-2.6.1-py3-none-any.whl.metadata (5.9 kB)
#18 11.19 Collecting aiosignal>=1.4.0 (from aiohttp)
#18 11.20 Downloading aiosignal-1.4.0-py3-none-any.whl.metadata (3.7 kB)
#18 11.32 Collecting attrs>=17.3.0 (from aiohttp)
#18 11.33 Downloading attrs-25.3.0-py3-none-any.whl.metadata (10 kB)
#18 11.47 Collecting frozenlist>=1.1.1 (from aiohttp)
#18 11.47 Downloading frozenlist-1.7.0-cp312-cp312-musllinux_1_2_x86_64.whl.metadata (18 kB)
#18 11.76 Collecting multidict<7.0,>=4.5 (from aiohttp)
#18 11.77 Downloading multidict-6.6.4-cp312-cp312-musllinux_1_2_x86_64.whl.metadata (5.3 kB)
#18 11.87 Collecting propcache>=0.2.0 (from aiohttp)
#18 11.88 Downloading propcache-0.3.2-cp312-cp312-musllinux_1_2_x86_64.whl.metadata (12 kB)
#18 12.19 Collecting yarl<2.0,>=1.17.0 (from aiohttp)
#18 12.20 Downloading yarl-1.20.1-cp312-cp312-musllinux_1_2_x86_64.whl.metadata (73 kB)
#18 12.31 Collecting graphql-core<3.3,>=3.1 (from graphene)
#18 12.32 Downloading graphql_core-3.2.6-py3-none-any.whl.metadata (11 kB)
#18 12.41 Collecting graphql-relay<3.3,>=3.1 (from graphene)
#18 12.42 Downloading graphql_relay-3.2.0-py3-none-any.whl.metadata (12 kB)
#18 12.50 Collecting python-dateutil<3,>=2.7.0 (from graphene)
#18 12.51 Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
#18 12.61 Collecting typing-extensions<5,>=4.7.1 (from graphene)
#18 12.61 Downloading typing_extensions-4.15.0-py3-none-any.whl.metadata (3.3 kB)
#18 12.71 Collecting blinker>=1.9.0 (from flask)
#18 12.72 Downloading blinker-1.9.0-py3-none-any.whl.metadata (1.6 kB)
#18 12.84 Collecting itsdangerous>=2.2.0 (from flask)
#18 12.85 Downloading itsdangerous-2.2.0-py3-none-any.whl.metadata (1.9 kB)
#18 12.97 Collecting jinja2>=3.1.2 (from flask)
#18 12.98 Downloading jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)
#18 13.15 Collecting markupsafe>=2.1.1 (from flask)
#18 13.15 Downloading MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_x86_64.whl.metadata (4.0 kB)
#18 13.28 Collecting werkzeug>=3.1.0 (from flask)
#18 13.29 Downloading werkzeug-3.1.3-py3-none-any.whl.metadata (3.7 kB)
#18 13.42 Collecting awesomeversion>=22.9.0 (from tplink-omada-client)
#18 13.42 Downloading awesomeversion-25.8.0-py3-none-any.whl.metadata (9.8 kB)
#18 13.59 Collecting charset_normalizer<4,>=2 (from requests)
#18 13.59 Downloading charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_x86_64.whl.metadata (36 kB)
#18 13.77 Collecting idna<4,>=2.5 (from requests)
#18 13.78 Downloading idna-3.10-py3-none-any.whl.metadata (10 kB)
#18 13.94 Collecting certifi>=2017.4.17 (from requests)
#18 13.94 Downloading certifi-2025.8.3-py3-none-any.whl.metadata (2.4 kB)
#18 14.06 Collecting toml<0.11.0,>=0.10.2 (from librouteros)
#18 14.07 Downloading toml-0.10.2-py2.py3-none-any.whl.metadata (7.1 kB)
#18 14.25 Collecting six>=1.5 (from python-dateutil<3,>=2.7.0->graphene)
#18 14.26 Downloading six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
#18 14.33 Downloading openwrt_luci_rpc-1.1.17-py2.py3-none-any.whl (9.5 kB)
#18 14.37 Downloading asusrouter-1.21.0-py3-none-any.whl (131 kB)
#18 14.43 Downloading asyncio-4.0.0-py3-none-any.whl (5.6 kB)
#18 14.47 Downloading aiohttp-3.12.15-cp312-cp312-musllinux_1_2_x86_64.whl (1.7 MB)
#18 14.67 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 8.3 MB/s eta 0:00:00
#18 14.68 Downloading graphene-3.4.3-py2.py3-none-any.whl (114 kB)
#18 14.73 Downloading flask-3.1.2-py3-none-any.whl (103 kB)
#18 14.78 Downloading flask_cors-6.0.1-py3-none-any.whl (13 kB)
#18 14.84 Downloading unifi_sm_api-0.2.1-py3-none-any.whl (16 kB)
#18 14.88 Downloading tplink_omada_client-1.4.4-py3-none-any.whl (46 kB)
#18 14.93 Downloading wakeonlan-3.1.0-py3-none-any.whl (5.0 kB)
#18 14.99 Downloading pycryptodome-3.23.0-cp37-abi3-musllinux_1_2_x86_64.whl (2.3 MB)
#18 15.23 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 MB 8.9 MB/s eta 0:00:00
#18 15.24 Downloading requests-2.32.5-py3-none-any.whl (64 kB)
#18 15.30 Downloading paho_mqtt-2.1.0-py3-none-any.whl (67 kB)
#18 15.34 Downloading scapy-2.6.1-py3-none-any.whl (2.4 MB)
#18 15.62 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.4/2.4 MB 8.5 MB/s eta 0:00:00
#18 15.63 Downloading cron_converter-1.2.2-py3-none-any.whl (13 kB)
#18 15.67 Downloading pytz-2025.2-py2.py3-none-any.whl (509 kB)
#18 15.76 Downloading json2table-1.1.5-py2.py3-none-any.whl (8.7 kB)
#18 15.81 Downloading dhcp_leases-0.1.6-py3-none-any.whl (11 kB)
#18 15.86 Downloading pyunifi-2.21-py3-none-any.whl (11 kB)
#18 15.90 Downloading speedtest_cli-2.1.3-py2.py3-none-any.whl (23 kB)
#18 15.95 Downloading chardet-5.2.0-py3-none-any.whl (199 kB)
#18 16.01 Downloading dnspython-2.8.0-py3-none-any.whl (331 kB)
#18 16.10 Downloading librouteros-3.4.1-py3-none-any.whl (16 kB)
#18 16.14 Downloading aiohappyeyeballs-2.6.1-py3-none-any.whl (15 kB)
#18 16.20 Downloading aiosignal-1.4.0-py3-none-any.whl (7.5 kB)
#18 16.24 Downloading attrs-25.3.0-py3-none-any.whl (63 kB)
#18 16.30 Downloading awesomeversion-25.8.0-py3-none-any.whl (15 kB)
#18 16.34 Downloading blinker-1.9.0-py3-none-any.whl (8.5 kB)
#18 16.39 Downloading certifi-2025.8.3-py3-none-any.whl (161 kB)
#18 16.45 Downloading charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_x86_64.whl (153 kB)
#18 16.50 Downloading click-8.3.0-py3-none-any.whl (107 kB)
#18 16.55 Downloading frozenlist-1.7.0-cp312-cp312-musllinux_1_2_x86_64.whl (237 kB)
#18 16.62 Downloading graphql_core-3.2.6-py3-none-any.whl (203 kB)
#18 16.69 Downloading graphql_relay-3.2.0-py3-none-any.whl (16 kB)
#18 16.73 Downloading idna-3.10-py3-none-any.whl (70 kB)
#18 16.79 Downloading itsdangerous-2.2.0-py3-none-any.whl (16 kB)
#18 16.84 Downloading jinja2-3.1.6-py3-none-any.whl (134 kB)
#18 16.96 Downloading MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_x86_64.whl (23 kB)
#18 17.02 Downloading multidict-6.6.4-cp312-cp312-musllinux_1_2_x86_64.whl (251 kB)
#18 17.09 Downloading packaging-25.0-py3-none-any.whl (66 kB)
#18 17.14 Downloading propcache-0.3.2-cp312-cp312-musllinux_1_2_x86_64.whl (222 kB)
#18 17.21 Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
#18 17.28 Downloading toml-0.10.2-py2.py3-none-any.whl (16 kB)
#18 17.33 Downloading typing_extensions-4.15.0-py3-none-any.whl (44 kB)
#18 17.39 Downloading urllib3-2.5.0-py3-none-any.whl (129 kB)
#18 17.44 Downloading werkzeug-3.1.3-py3-none-any.whl (224 kB)
#18 17.51 Downloading xmltodict-1.0.2-py3-none-any.whl (13 kB)
#18 17.56 Downloading yarl-1.20.1-cp312-cp312-musllinux_1_2_x86_64.whl (374 kB)
#18 17.65 Downloading six-1.17.0-py2.py3-none-any.whl (11 kB)
#18 17.77 Building wheels for collected packages: python-nmap, yattag, aiofreepybox
#18 17.77 Building wheel for python-nmap (pyproject.toml): started
#18 17.95 Building wheel for python-nmap (pyproject.toml): finished with status 'done'
#18 17.96 Created wheel for python-nmap: filename=python_nmap-0.7.1-py2.py3-none-any.whl size=20679 sha256=ecd9b14109651cfaa5bf035f90076b9442985cc254fa5f8a49868fc896e86edb
#18 17.96 Stored in directory: /root/.cache/pip/wheels/06/fc/d4/0957e1d9942e696188208772ea0abf909fe6eb3d9dff6e5a9e
#18 17.96 Building wheel for yattag (pyproject.toml): started
#18 18.14 Building wheel for yattag (pyproject.toml): finished with status 'done'
#18 18.14 Created wheel for yattag: filename=yattag-1.16.1-py3-none-any.whl size=15930 sha256=2135fc2034a3847c81eb6a0d7b85608e8272339fa5c1961f87b02dfe6d74d0ad
#18 18.14 Stored in directory: /root/.cache/pip/wheels/d2/2f/52/049ff4f7c8c9c932b2ece7ec800d7facf2a141ac5ab0ce7e51
#18 18.15 Building wheel for aiofreepybox (pyproject.toml): started
#18 18.36 Building wheel for aiofreepybox (pyproject.toml): finished with status 'done'
#18 18.36 Created wheel for aiofreepybox: filename=aiofreepybox-6.0.0-py3-none-any.whl size=60051 sha256=dbdee5350b10b6550ede50bc779381b7f39f1e5d5da889f2ee98cb5a869d3425
#18 18.36 Stored in directory: /tmp/pip-ephem-wheel-cache-93bgc4e2/wheels/3c/d3/ae/fb97a84a29a5fbe8517de58d67e66586505440af35981e0dd3
#18 18.36 Successfully built python-nmap yattag aiofreepybox
#18 18.45 Installing collected packages: yattag, speedtest-cli, pytz, python-nmap, json2table, dhcp-leases, xmltodict, wakeonlan, urllib3, typing-extensions, toml, six, scapy, pycryptodome, propcache, paho-mqtt, packaging, multidict, markupsafe, itsdangerous, idna, graphql-core, frozenlist, dnspython, Click, charset_normalizer, chardet, certifi, blinker, awesomeversion, attrs, asyncio, aiohappyeyeballs, yarl, werkzeug, requests, python-dateutil, librouteros, jinja2, graphql-relay, aiosignal, unifi-sm-api, pyunifi, openwrt-luci-rpc, graphene, flask, cron-converter, aiohttp, tplink-omada-client, flask-cors, asusrouter, aiofreepybox
#18 24.35 Successfully installed Click-8.3.0 aiofreepybox-6.0.0 aiohappyeyeballs-2.6.1 aiohttp-3.12.15 aiosignal-1.4.0 asusrouter-1.21.0 asyncio-4.0.0 attrs-25.3.0 awesomeversion-25.8.0 blinker-1.9.0 certifi-2025.8.3 chardet-5.2.0 charset_normalizer-3.4.3 cron-converter-1.2.2 dhcp-leases-0.1.6 dnspython-2.8.0 flask-3.1.2 flask-cors-6.0.1 frozenlist-1.7.0 graphene-3.4.3 graphql-core-3.2.6 graphql-relay-3.2.0 idna-3.10 itsdangerous-2.2.0 jinja2-3.1.6 json2table-1.1.5 librouteros-3.4.1 markupsafe-3.0.2 multidict-6.6.4 openwrt-luci-rpc-1.1.17 packaging-25.0 paho-mqtt-2.1.0 propcache-0.3.2 pycryptodome-3.23.0 python-dateutil-2.9.0.post0 python-nmap-0.7.1 pytz-2025.2 pyunifi-2.21 requests-2.32.5 scapy-2.6.1 six-1.17.0 speedtest-cli-2.1.3 toml-0.10.2 tplink-omada-client-1.4.4 typing-extensions-4.15.0 unifi-sm-api-0.2.1 urllib3-2.5.0 wakeonlan-3.1.0 werkzeug-3.1.3 xmltodict-1.0.2 yarl-1.20.1 yattag-1.16.1
#18 24.47
#18 24.47 [notice] A new release of pip is available: 25.0.1 -> 25.2
#18 24.47 [notice] To update, run: pip install --upgrade pip
#18 DONE 25.1s
#19 [builder 14/15] RUN bash -c "find /app -type d -exec chmod 750 {} \;" && bash -c "find /app -type f -exec chmod 640 {} \;" && bash -c "find /app -type f \( -name '*.sh' -o -name '*.py' -o -name 'speedtest-cli' \) -exec chmod 750 {} \;"
#19 DONE 11.9s

#20 [builder 15/15] COPY install/freebox_certificate.pem /opt/venv/lib/python3.12/site-packages/aiofreepybox/freebox_certificates.pem
#20 DONE 0.4s

#21 [runner 2/14] COPY --from=builder /opt/venv /opt/venv
#21 DONE 0.8s

#22 [runner 3/14] COPY --from=builder /usr/sbin/usermod /usr/sbin/groupmod /usr/sbin/
#22 DONE 0.4s

#23 [runner 4/14] RUN apk update --no-cache && apk add --no-cache bash libbsd zip lsblk gettext-envsubst sudo mtr tzdata s6-overlay && apk add --no-cache curl arp-scan iproute2 iproute2-ss nmap nmap-scripts traceroute nbtscan avahi avahi-tools openrc dbus net-tools net-snmp-tools bind-tools awake ca-certificates && apk add --no-cache sqlite php83 php83-fpm php83-cgi php83-curl php83-sqlite3 php83-session && apk add --no-cache python3 nginx && ln -s /usr/bin/awake /usr/bin/wakeonlan && bash -c "install -d -m 750 -o nginx -g www-data /app /app" && rm -f /etc/nginx/http.d/default.conf
#23 0.487 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/main/x86_64/APKINDEX.tar.gz
#23 0.696 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/community/x86_64/APKINDEX.tar.gz
#23 1.156 v3.22.1-472-ga67443520d6 [https://dl-cdn.alpinelinux.org/alpine/v3.22/main]
#23 1.156 v3.22.1-473-gcd551a4e006 [https://dl-cdn.alpinelinux.org/alpine/v3.22/community]
#23 1.156 OK: 26326 distinct packages available
#23 1.195 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/main/x86_64/APKINDEX.tar.gz
#23 1.276 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/community/x86_64/APKINDEX.tar.gz
#23 1.568 (1/38) Installing ncurses-terminfo-base (6.5_p20250503-r0)
#23 1.580 (2/38) Installing libncursesw (6.5_p20250503-r0)
#23 1.629 (3/38) Installing readline (8.2.13-r1)
#23 1.659 (4/38) Installing bash (5.2.37-r0)
#23 1.723 Executing bash-5.2.37-r0.post-install
#23 1.740 (5/38) Installing libintl (0.24.1-r0)
#23 1.749 (6/38) Installing gettext-envsubst (0.24.1-r0)
#23 1.775 (7/38) Installing libmd (1.1.0-r0)
#23 1.782 (8/38) Installing libbsd (0.12.2-r0)
#23 1.807 (9/38) Installing libeconf (0.6.3-r0)
#23 1.812 (10/38) Installing libblkid (2.41-r9)
#23 1.831 (11/38) Installing libmount (2.41-r9)
#23 1.857 (12/38) Installing libsmartcols (2.41-r9)
#23 1.872 (13/38) Installing lsblk (2.41-r9)
#23 1.886 (14/38) Installing libcap2 (2.76-r0)
#23 1.897 (15/38) Installing jansson (2.14.1-r0)
#23 1.910 (16/38) Installing mtr (0.96-r0)
#23 1.948 (17/38) Installing skalibs-libs (2.14.4.0-r0)
#23 1.966 (18/38) Installing execline-libs (2.9.7.0-r0)
#23 1.974 (19/38) Installing execline (2.9.7.0-r0)
#23 1.996 Executing execline-2.9.7.0-r0.post-install
#23 2.004 (20/38) Installing s6-ipcserver (2.13.2.0-r0)
#23 2.010 (21/38) Installing s6-libs (2.13.2.0-r0)
#23 2.016 (22/38) Installing s6 (2.13.2.0-r0)
#23 2.033 Executing s6-2.13.2.0-r0.pre-install
#23 2.159 (23/38) Installing s6-rc-libs (0.5.6.0-r0)
#23 2.164 (24/38) Installing s6-rc (0.5.6.0-r0)
#23 2.175 (25/38) Installing s6-linux-init (1.1.3.0-r0)
#23 2.185 (26/38) Installing s6-portable-utils (2.3.1.0-r0)
#23 2.193 (27/38) Installing s6-linux-utils (2.6.3.0-r0)
#23 2.200 (28/38) Installing s6-dns-libs (2.4.1.0-r0)
#23 2.208 (29/38) Installing s6-dns (2.4.1.0-r0)
#23 2.222 (30/38) Installing bearssl-libs (0.6_git20241009-r0)
#23 2.254 (31/38) Installing s6-networking-libs (2.7.1.0-r0)
#23 2.264 (32/38) Installing s6-networking (2.7.1.0-r0)
#23 2.286 (33/38) Installing s6-overlay-helpers (0.1.2.0-r0)
#23 2.355 (34/38) Installing s6-overlay (3.2.0.3-r0)
#23 2.380 (35/38) Installing sudo (1.9.17_p2-r0)
#23 2.511 (36/38) Installing tzdata (2025b-r0)
#23 2.641 (37/38) Installing unzip (6.0-r15)
#23 2.659 (38/38) Installing zip (3.0-r13)
#23 2.694 Executing busybox-1.37.0-r18.trigger
#23 2.725 OK: 16 MiB in 54 packages
#23 2.778 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/main/x86_64/APKINDEX.tar.gz
#23 2.918 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/community/x86_64/APKINDEX.tar.gz
#23 3.218 (1/77) Installing libpcap (1.10.5-r1)
#23 3.234 (2/77) Installing arp-scan (1.10.0-r2)
#23 3.289 (3/77) Installing dbus-libs (1.16.2-r1)
#23 3.307 (4/77) Installing avahi-libs (0.8-r21)
#23 3.315 (5/77) Installing libdaemon (0.14-r6)
#23 3.322 (6/77) Installing libevent (2.1.12-r8)
#23 3.355 (7/77) Installing libexpat (2.7.2-r0)
#23 3.368 (8/77) Installing avahi (0.8-r21)
#23 3.387 Executing avahi-0.8-r21.pre-install
#23 3.465 (9/77) Installing gdbm (1.24-r0)
#23 3.477 (10/77) Installing avahi-tools (0.8-r21)
#23 3.483 (11/77) Installing libbz2 (1.0.8-r6)
#23 3.490 (12/77) Installing libffi (3.4.8-r0)
#23 3.496 (13/77) Installing xz-libs (5.8.1-r0)
#23 3.517 (14/77) Installing libgcc (14.2.0-r6)
#23 3.529 (15/77) Installing libstdc++ (14.2.0-r6)
#23 3.613 (16/77) Installing mpdecimal (4.0.1-r0)
#23 3.628 (17/77) Installing libpanelw (6.5_p20250503-r0)
#23 3.634 (18/77) Installing sqlite-libs (3.49.2-r1)
#23 3.783 (19/77) Installing python3 (3.12.11-r0)
#23 4.494 (20/77) Installing python3-pycache-pyc0 (3.12.11-r0)
#23 4.915 (21/77) Installing pyc (3.12.11-r0)
#23 4.915 (22/77) Installing py3-awake-pyc (1.0-r12)
#23 4.922 (23/77) Installing python3-pyc (3.12.11-r0)
#23 4.922 (24/77) Installing py3-awake (1.0-r12)
#23 4.928 (25/77) Installing awake (1.0-r12)
#23 4.932 (26/77) Installing fstrm (0.6.1-r4)
#23 4.940 (27/77) Installing krb5-conf (1.0-r2)
#23 5.017 (28/77) Installing libcom_err (1.47.2-r2)
#23 5.026 (29/77) Installing keyutils-libs (1.6.3-r4)
#23 5.033 (30/77) Installing libverto (0.3.2-r2)
#23 5.039 (31/77) Installing krb5-libs (1.21.3-r0)
#23 5.115 (32/77) Installing json-c (0.18-r1)
#23 5.123 (33/77) Installing nghttp2-libs (1.65.0-r0)
#23 5.136 (34/77) Installing protobuf-c (1.5.2-r0)
#23 5.142 (35/77) Installing userspace-rcu (0.15.2-r0)
#23 5.161 (36/77) Installing libuv (1.51.0-r0)
#23 5.178 (37/77) Installing libxml2 (2.13.8-r0)
#23 5.232 (38/77) Installing bind-libs (9.20.13-r0)
#23 5.355 (39/77) Installing bind-tools (9.20.13-r0)
#23 5.395 (40/77) Installing ca-certificates (20250619-r0)
#23 5.518 (41/77) Installing brotli-libs (1.1.0-r2)
#23 5.559 (42/77) Installing c-ares (1.34.5-r0)
#23 5.573 (43/77) Installing libunistring (1.3-r0)
#23 5.645 (44/77) Installing libidn2 (2.3.7-r0)
#23 5.664 (45/77) Installing libpsl (0.21.5-r3)
#23 5.676 (46/77) Installing zstd-libs (1.5.7-r0)
#23 5.720 (47/77) Installing libcurl (8.14.1-r1)
#23 5.753 (48/77) Installing curl (8.14.1-r1)
#23 5.778 (49/77) Installing dbus (1.16.2-r1)
#23 5.796 Executing dbus-1.16.2-r1.pre-install
#23 5.869 Executing dbus-1.16.2-r1.post-install
#23 5.887 (50/77) Installing dbus-daemon-launch-helper (1.16.2-r1)
#23 5.896 (51/77) Installing libelf (0.193-r0)
#23 5.908 (52/77) Installing libmnl (1.0.5-r2)
#23 5.915 (53/77) Installing iproute2-minimal (6.15.0-r0)
#23 5.954 (54/77) Installing libxtables (1.8.11-r1)
#23 5.963 (55/77) Installing iproute2-tc (6.15.0-r0)
#23 6.001 (56/77) Installing iproute2-ss (6.15.0-r0)
#23 6.014 (57/77) Installing iproute2 (6.15.0-r0)
#23 6.042 Executing iproute2-6.15.0-r0.post-install
#23 6.047 (58/77) Installing nbtscan (1.7.2-r0)
#23 6.053 (59/77) Installing net-snmp-libs (5.9.4-r1)
#23 6.112 (60/77) Installing net-snmp-agent-libs (5.9.4-r1)
#23 6.179 (61/77) Installing net-snmp-tools (5.9.4-r1)
#23 6.205 (62/77) Installing mii-tool (2.10-r3)
#23 6.211 (63/77) Installing net-tools (2.10-r3)
#23 6.235 (64/77) Installing lua5.4-libs (5.4.7-r0)
#23 6.258 (65/77) Installing libssh2 (1.11.1-r0)
#23 6.279 (66/77) Installing nmap (7.97-r0)
#23 6.524 (67/77) Installing nmap-nselibs (7.97-r0)
#23 6.729 (68/77) Installing nmap-scripts (7.97-r0)
#23 6.842 (69/77) Installing bridge (1.5-r5)
#23 6.904 (70/77) Installing ifupdown-ng (0.12.1-r7)
#23 6.915 (71/77) Installing ifupdown-ng-iproute2 (0.12.1-r7)
#23 6.920 (72/77) Installing openrc-user (0.62.6-r0)
#23 6.924 (73/77) Installing openrc (0.62.6-r0)
#23 7.013 Executing openrc-0.62.6-r0.post-install
#23 7.016 (74/77) Installing avahi-openrc (0.8-r21)
#23 7.021 (75/77) Installing dbus-openrc (1.16.2-r1)
#23 7.026 (76/77) Installing s6-openrc (2.13.2.0-r0)
#23 7.032 (77/77) Installing traceroute (2.1.6-r0)
#23 7.040 Executing busybox-1.37.0-r18.trigger
#23 7.042 Executing ca-certificates-20250619-r0.trigger
#23 7.101 Executing dbus-1.16.2-r1.trigger
#23 7.104 OK: 102 MiB in 131 packages
#23 7.156 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/main/x86_64/APKINDEX.tar.gz
#23 7.243 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/community/x86_64/APKINDEX.tar.gz
#23 7.543 (1/12) Installing php83-common (8.3.24-r0)
#23 7.551 (2/12) Installing argon2-libs (20190702-r5)
#23 7.557 (3/12) Installing libedit (20250104.3.1-r1)
#23 7.568 (4/12) Installing pcre2 (10.43-r1)
#23 7.600 (5/12) Installing php83 (8.3.24-r0)
#23 7.777 (6/12) Installing php83-cgi (8.3.24-r0)
#23 7.953 (7/12) Installing php83-curl (8.3.24-r0)
#23 7.968 (8/12) Installing acl-libs (2.3.2-r1)
#23 7.975 (9/12) Installing php83-fpm (8.3.24-r0)
#23 8.193 (10/12) Installing php83-session (8.3.24-r0)
#23 8.204 (11/12) Installing php83-sqlite3 (8.3.24-r0)
#23 8.213 (12/12) Installing sqlite (3.49.2-r1)
#23 8.309 Executing busybox-1.37.0-r18.trigger
#23 8.317 OK: 129 MiB in 143 packages
#23 8.369 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/main/x86_64/APKINDEX.tar.gz
#23 8.449 fetch https://dl-cdn.alpinelinux.org/alpine/v3.22/community/x86_64/APKINDEX.tar.gz
#23 8.747 (1/2) Installing nginx (1.28.0-r3)
#23 8.766 Executing nginx-1.28.0-r3.pre-install
#23 8.863 Executing nginx-1.28.0-r3.post-install
#23 8.865 (2/2) Installing nginx-openrc (1.28.0-r3)
#23 8.870 Executing busybox-1.37.0-r18.trigger
#23 8.873 OK: 130 MiB in 145 packages
#23 DONE 9.5s

#24 [runner 5/14] COPY --from=builder --chown=nginx:www-data /app/ /app/
#24 DONE 0.5s

#25 [runner 6/14] RUN mkdir -p /app/config /app/db /app/log/plugins
#25 DONE 0.5s

#26 [runner 7/14] COPY --chmod=600 --chown=root:root install/crontab /etc/crontabs/root
#26 DONE 0.3s

#27 [runner 8/14] COPY --chmod=755 dockerfiles/healthcheck.sh /usr/local/bin/healthcheck.sh
#27 DONE 0.3s

#28 [runner 9/14] RUN touch /app/log/app.log && touch /app/log/execution_queue.log && touch /app/log/app_front.log && touch /app/log/app.php_errors.log && touch /app/log/stderr.log && touch /app/log/stdout.log && touch /app/log/db_is_locked.log && touch /app/log/IP_changes.log && touch /app/log/report_output.txt && touch /app/log/report_output.html && touch /app/log/report_output.json && touch /app/api/user_notifications.json
#28 DONE 0.6s

#29 [runner 10/14] COPY dockerfiles /app/dockerfiles
#29 DONE 0.3s

#30 [runner 11/14] RUN chmod +x /app/dockerfiles/*.sh
#30 DONE 0.8s

#31 [runner 12/14] RUN /app/dockerfiles/init-nginx.sh && /app/dockerfiles/init-php-fpm.sh && /app/dockerfiles/init-crond.sh && /app/dockerfiles/init-backend.sh
#31 0.417 Initializing nginx...
#31 0.417 Setting webserver to address (0.0.0.0) and port (20211)
#31 0.418 /app/dockerfiles/init-nginx.sh: line 5: /app/install/netalertx.template.conf: No such file or directory
#31 0.611 nginx initialized.
#31 0.612 Initializing php-fpm...
#31 0.654 php-fpm initialized.
#31 0.655 Initializing crond...
#31 0.689 crond initialized.
#31 0.690 Initializing backend...
#31 12.19 Backend initialized.
#31 DONE 12.3s

#32 [runner 13/14] RUN rm -rf /app/dockerfiles
#32 DONE 0.6s

#33 [runner 14/14] RUN date +%s > /app/front/buildtimestamp.txt
#33 DONE 0.6s

#34 exporting to image
#34 exporting layers
#34 exporting layers 2.4s done
#34 writing image sha256:0afcbc41473de559eff0dd93250595494fe4d8ea620861e9e90d50a248fcefda 0.0s done
#34 naming to docker.io/library/netalertx 0.0s done
#34 DONE 2.5s

@@ -1,674 +0,0 @@
[Deleted file body omitted: the 674 removed lines are the full, unmodified text of the GNU General Public License, Version 3, 29 June 2007.]
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU General Public License
|
||||
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
Also add information on how to contact you by electronic and paper mail.
|
||||
|
||||
If the program does terminal interaction, make it output a short
|
||||
notice like this when it starts in an interactive mode:
|
||||
|
||||
<program> Copyright (C) <year> <name of author>
|
||||
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
|
||||
This is free software, and you are welcome to redistribute it
|
||||
under certain conditions; type `show c' for details.
|
||||
|
||||
The hypothetical commands `show w' and `show c' should show the appropriate
|
||||
parts of the General Public License. Of course, your program's commands
|
||||
might be different; for a GUI interface, you would use an "about box".
|
||||
|
||||
You should also get your employer (if you work as a programmer) or school,
|
||||
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
||||
For more information on this, and how to apply and follow the GNU GPL, see
|
||||
<https://www.gnu.org/licenses/>.
|
||||
|
||||
The GNU General Public License does not permit incorporating your program
|
||||
into proprietary programs. If your program is a subroutine library, you
|
||||
may consider it more useful to permit linking proprietary applications with
|
||||
the library. If this is what you want to do, use the GNU Lesser General
|
||||
Public License instead of this License. But first, please read
|
||||
<https://www.gnu.org/licenses/why-not-lgpl.html>.
|
||||
@@ -1,169 +0,0 @@
|
||||
#!/usr/bin/with-contenv bash
|
||||
|
||||
echo "---------------------------------------------------------
|
||||
[INSTALL] Run init.sh
|
||||
---------------------------------------------------------"
|
||||
|
||||
DEFAULT_PUID=102
|
||||
DEFAULT_GID=82
|
||||
|
||||
PUID=${PUID:-${DEFAULT_PUID}}
|
||||
PGID=${PGID:-${DEFAULT_GID}}
|
||||
|
||||
echo "[INSTALL] Setting up user UID and GID"
|
||||
|
||||
if ! groupmod -o -g "$PGID" www-data && [ "$PGID" != "$DEFAULT_GID" ] ; then
|
||||
echo "Failed to set user GID to ${PGID}, trying with default GID ${DEFAULT_GID}"
|
||||
groupmod -o -g "$DEFAULT_GID" www-data
|
||||
fi
|
||||
if ! usermod -o -u "$PUID" nginx && [ "$PUID" != "$DEFAULT_PUID" ] ; then
|
||||
echo "Failed to set user UID to ${PUID}, trying with default PUID ${DEFAULT_PUID}"
|
||||
usermod -o -u "$DEFAULT_PUID" nginx
|
||||
fi
|
||||
|
||||
echo "
|
||||
---------------------------------------------------------
|
||||
GID/UID
|
||||
---------------------------------------------------------
|
||||
User UID: $(id -u nginx)
|
||||
User GID: $(getent group www-data | cut -d: -f3)
|
||||
---------------------------------------------------------"
|
||||
|
||||
chown nginx:nginx /run/nginx/ /var/log/nginx/ /var/lib/nginx/ /var/lib/nginx/tmp/
|
||||
chgrp www-data /var/www/localhost/htdocs/
|
||||
|
||||
export INSTALL_DIR=/app # Specify the installation directory here
|
||||
|
||||
# DO NOT CHANGE ANYTHING BELOW THIS LINE!
|
||||
|
||||
CONF_FILE="app.conf"
|
||||
NGINX_CONF_FILE=netalertx.conf
|
||||
DB_FILE="app.db"
|
||||
FULL_FILEDB_PATH="${INSTALL_DIR}/db/${DB_FILE}"
|
||||
NGINX_CONFIG_FILE="/etc/nginx/http.d/${NGINX_CONF_FILE}"
|
||||
OUI_FILE="/usr/share/arp-scan/ieee-oui.txt" # Define the path to ieee-oui.txt and ieee-iab.txt
|
||||
|
||||
INSTALL_DIR_OLD=/home/pi/pialert
|
||||
OLD_APP_NAME=pialert
|
||||
|
||||
# DO NOT CHANGE ANYTHING ABOVE THIS LINE!
|
||||
|
||||
# Check if script is run as root
|
||||
if [[ $EUID -ne 0 ]]; then
|
||||
echo "This script must be run as root. Please use 'sudo'."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# DANGER ZONE: ALWAYS_FRESH_INSTALL
|
||||
if [ "$ALWAYS_FRESH_INSTALL" = true ]; then
|
||||
echo "[INSTALL] ❗ ALERT /db and /config folders are cleared because the ALWAYS_FRESH_INSTALL is set to: $ALWAYS_FRESH_INSTALL❗"
|
||||
|
||||
# Delete content of "$INSTALL_DIR/config/"
|
||||
rm -rf "$INSTALL_DIR/config/"*
|
||||
rm -rf "$INSTALL_DIR_OLD/config/"*
|
||||
|
||||
# Delete content of "$INSTALL_DIR/db/"
|
||||
rm -rf "$INSTALL_DIR/db/"*
|
||||
rm -rf "$INSTALL_DIR_OLD/db/"*
|
||||
fi
|
||||
|
||||
# OVERRIDE settings: Handling APP_CONF_OVERRIDE
|
||||
# Check if APP_CONF_OVERRIDE is set
|
||||
|
||||
# remove old
|
||||
rm "${INSTALL_DIR}/config/app_conf_override.json"
|
||||
|
||||
if [ -z "$APP_CONF_OVERRIDE" ]; then
|
||||
echo "APP_CONF_OVERRIDE is not set. Skipping config file creation."
|
||||
else
|
||||
# Save the APP_CONF_OVERRIDE env variable as a JSON file
|
||||
echo "$APP_CONF_OVERRIDE" > "${INSTALL_DIR}/config/app_conf_override.json"
|
||||
echo "Config file saved to ${INSTALL_DIR}/config/app_conf_override.json"
|
||||
fi
|
||||
|
||||
# 🔻 FOR BACKWARD COMPATIBILITY - REMOVE AFTER 12/12/2025
|
||||
|
||||
# Check if pialert.db exists, then create a symbolic link to app.db
|
||||
if [ -f "${INSTALL_DIR_OLD}/db/${OLD_APP_NAME}.db" ]; then
|
||||
ln -s "${INSTALL_DIR_OLD}/db/${OLD_APP_NAME}.db" "${FULL_FILEDB_PATH}"
|
||||
fi
|
||||
|
||||
# Check if ${OLD_APP_NAME}.conf exists, then create a symbolic link to app.conf
|
||||
if [ -f "${INSTALL_DIR_OLD}/config/${OLD_APP_NAME}.conf" ]; then
|
||||
ln -s "${INSTALL_DIR_OLD}/config/${OLD_APP_NAME}.conf" "${INSTALL_DIR}/config/${CONF_FILE}"
|
||||
fi
|
||||
# 🔺 FOR BACKWARD COMPATIBILITY - REMOVE AFTER 12/12/2025
|
||||
|
||||
echo "[INSTALL] Copy starter ${DB_FILE} and ${CONF_FILE} if they don't exist"
|
||||
|
||||
# Copy starter app.db, app.conf if they don't exist
|
||||
cp -na "${INSTALL_DIR}/back/${CONF_FILE}" "${INSTALL_DIR}/config/${CONF_FILE}"
|
||||
cp -na "${INSTALL_DIR}/back/${DB_FILE}" "${FULL_FILEDB_PATH}"
|
||||
|
||||
# if custom variables not set we do not need to do anything
|
||||
if [ -n "${TZ}" ]; then
|
||||
FILECONF="${INSTALL_DIR}/config/${CONF_FILE}"
|
||||
echo "[INSTALL] Setup timezone"
|
||||
sed -i "\#^TIMEZONE=#c\TIMEZONE='${TZ}'" "${FILECONF}"
|
||||
|
||||
# set TimeZone in container
|
||||
cp /usr/share/zoneinfo/$TZ /etc/localtime
|
||||
echo $TZ > /etc/timezone
|
||||
fi
|
||||
|
||||
# if custom variables not set we do not need to do anything
|
||||
if [ -n "${LOADED_PLUGINS}" ]; then
|
||||
FILECONF="${INSTALL_DIR}/config/${CONF_FILE}"
|
||||
echo "[INSTALL] Setup custom LOADED_PLUGINS variable"
|
||||
sed -i "\#^LOADED_PLUGINS=#c\LOADED_PLUGINS=${LOADED_PLUGINS}" "${FILECONF}"
|
||||
fi
|
||||
|
||||
echo "[INSTALL] Setup NGINX"
|
||||
echo "Setting webserver to address ($LISTEN_ADDR) and port ($PORT)"
|
||||
envsubst '$INSTALL_DIR $LISTEN_ADDR $PORT' < "${INSTALL_DIR}/install/netalertx.template.conf" > "${NGINX_CONFIG_FILE}"
|
||||
|
||||
# Run the hardware vendors update at least once
|
||||
echo "[INSTALL] Run the hardware vendors update"
|
||||
|
||||
# Check if ieee-oui.txt or ieee-iab.txt exist
|
||||
if [ -f "${OUI_FILE}" ]; then
|
||||
echo "The file ieee-oui.txt exists. Skipping update_vendors.sh..."
|
||||
else
|
||||
echo "The file ieee-oui.txt does not exist. Running update_vendors..."
|
||||
|
||||
# Run the update_vendors.sh script
|
||||
if [ -f "${INSTALL_DIR}/back/update_vendors.sh" ]; then
|
||||
"${INSTALL_DIR}/back/update_vendors.sh"
|
||||
else
|
||||
echo "update_vendors.sh script not found in ${INSTALL_DIR}."
|
||||
fi
|
||||
fi
|
||||
|
||||
# Create empty log files
|
||||
# Create the execution_queue.log and app_front.log files if they don't exist
|
||||
touch "${INSTALL_DIR}"/log/{app.log,execution_queue.log,app_front.log,app.php_errors.log,stderr.log,stdout.log,db_is_locked.log}
|
||||
touch "${INSTALL_DIR}"/api/user_notifications.json
|
||||
|
||||
# Create plugins sub-directory if it doesn't exist in case a custom log folder is used
|
||||
mkdir -p "${INSTALL_DIR}"/log/plugins
|
||||
|
||||
echo "[INSTALL] Fixing permissions after copied starter config & DB"
|
||||
chown -R nginx:www-data "${INSTALL_DIR}"
|
||||
|
||||
chmod 750 "${INSTALL_DIR}"/{config,log,db}
|
||||
find "${INSTALL_DIR}"/{config,log,db} -type f -exec chmod 640 {} \;
|
||||
|
||||
# Check if buildtimestamp.txt doesn't exist
|
||||
if [ ! -f "${INSTALL_DIR}/front/buildtimestamp.txt" ]; then
|
||||
# Create buildtimestamp.txt
|
||||
date +%s > "${INSTALL_DIR}/front/buildtimestamp.txt"
|
||||
chown nginx:www-data "${INSTALL_DIR}/front/buildtimestamp.txt"
|
||||
fi
|
||||
|
||||
echo -e "
|
||||
[ENV] PATH is ${PATH}
|
||||
[ENV] PORT is ${PORT}
|
||||
[ENV] TZ is ${TZ}
|
||||
[ENV] LISTEN_ADDR is ${LISTEN_ADDR}
|
||||
[ENV] ALWAYS_FRESH_INSTALL is ${ALWAYS_FRESH_INSTALL}
|
||||
"
|
||||
@@ -1,49 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
export INSTALL_DIR=/app
|
||||
export APP_NAME=netalertx
|
||||
|
||||
# php-fpm setup
|
||||
install -d -o nginx -g www-data /run/php/
|
||||
sed -i "/^;pid/c\pid = /run/php/php8.3-fpm.pid" /etc/php83/php-fpm.conf
|
||||
sed -i "/^listen/c\listen = /run/php/php8.3-fpm.sock" /etc/php83/php-fpm.d/www.conf
|
||||
sed -i "/^;listen.owner/c\listen.owner = nginx" /etc/php83/php-fpm.d/www.conf
|
||||
sed -i "/^;listen.group/c\listen.group = www-data" /etc/php83/php-fpm.d/www.conf
|
||||
sed -i "/^user/c\user = nginx" /etc/php83/php-fpm.d/www.conf
|
||||
sed -i "/^group/c\group = www-data" /etc/php83/php-fpm.d/www.conf
|
||||
|
||||
# s6 overlay setup
|
||||
mkdir -p /etc/s6-overlay/s6-rc.d/{SetupOneshot,crond/dependencies.d,php-fpm/dependencies.d,nginx/dependencies.d,$APP_NAME/dependencies.d}
|
||||
echo "oneshot" > /etc/s6-overlay/s6-rc.d/SetupOneshot/type
|
||||
echo "longrun" > /etc/s6-overlay/s6-rc.d/crond/type
|
||||
echo "longrun" > /etc/s6-overlay/s6-rc.d/php-fpm/type
|
||||
echo "longrun" > /etc/s6-overlay/s6-rc.d/nginx/type
|
||||
echo "longrun" > /etc/s6-overlay/s6-rc.d/$APP_NAME/type
|
||||
echo -e "${INSTALL_DIR}/dockerfiles/init.sh" > /etc/s6-overlay/s6-rc.d/SetupOneshot/up
|
||||
echo -e '#!/bin/execlineb -P
|
||||
|
||||
if { echo
|
||||
"
|
||||
[INSTALL] Starting crond service...
|
||||
|
||||
" }' > /etc/s6-overlay/s6-rc.d/crond/run
|
||||
echo -e "/usr/sbin/crond -f" >> /etc/s6-overlay/s6-rc.d/crond/run
|
||||
echo -e "#!/bin/execlineb -P\n/usr/sbin/php-fpm83 -F" > /etc/s6-overlay/s6-rc.d/php-fpm/run
|
||||
echo -e '#!/bin/execlineb -P\nnginx -g "daemon off;"' > /etc/s6-overlay/s6-rc.d/nginx/run
|
||||
echo -e '#!/bin/execlineb -P
|
||||
with-contenv
|
||||
|
||||
importas -u PORT PORT
|
||||
|
||||
if { echo
|
||||
"
|
||||
[INSTALL] 🚀 Starting app (:${PORT})
|
||||
|
||||
" }' > /etc/s6-overlay/s6-rc.d/$APP_NAME/run
|
||||
echo -e "python ${INSTALL_DIR}/server" >> /etc/s6-overlay/s6-rc.d/$APP_NAME/run
|
||||
touch /etc/s6-overlay/s6-rc.d/user/contents.d/{SetupOneshot,crond,php-fpm,nginx,$APP_NAME} /etc/s6-overlay/s6-rc.d/{crond,php-fpm,nginx,$APP_NAME}/dependencies.d/SetupOneshot
|
||||
touch /etc/s6-overlay/s6-rc.d/nginx/dependencies.d/php-fpm
|
||||
touch /etc/s6-overlay/s6-rc.d/$APP_NAME/dependencies.d/nginx
|
||||
|
||||
# this removes the current file
|
||||
rm -f $0
|
||||
257 docs/API.md
@@ -1,6 +1,6 @@
|
||||
# API endpoints
|
||||
# API Documentation
|
||||
|
||||
NetAlertX comes with a couple of API endpoints. All requests need to be authorized (executed in a logged-in browser session), or you have to pass the value of the `API_TOKEN` setting as an authorization bearer, for example:
|
||||
This API provides programmatic access to **devices, events, sessions, metrics, network tools, and sync** in NetAlertX. It is implemented as a **REST and GraphQL server**. All requests require authentication via **API Token** (`API_TOKEN` setting) unless explicitly noted. For example, to authorize a GraphQL request, use an `Authorization: Bearer API_TOKEN` header as in the example below:
|
||||
|
||||
```sh
|
||||
curl 'http://host:GRAPHQL_PORT/graphql' \
|
||||
@@ -21,241 +21,62 @@ curl 'http://host:GRAPHQL_PORT/graphql' \
|
||||
}'
|
||||
```
|
||||
|
||||
## API Endpoint: GraphQL
|
||||
The API server runs on `0.0.0.0:<graphql_port>` with **CORS enabled** for all main endpoints.
|
||||
|
||||
- Endpoint URL: `php/server/query_graphql.php`
|
||||
- Host: `same as front end (web ui)`
|
||||
- Port: `20212` or as defined by the `GRAPHQL_PORT` setting
|
||||
---
|
||||
|
||||
### Example Query to Fetch Devices
|
||||
## Authentication
|
||||
|
||||
First, let's define the GraphQL query to fetch devices with pagination and sorting options.
|
||||
All endpoints require an API token provided in the HTTP headers:
|
||||
|
||||
```graphql
|
||||
query GetDevices($options: PageQueryOptionsInput) {
|
||||
devices(options: $options) {
|
||||
devices {
|
||||
rowid
|
||||
devMac
|
||||
devName
|
||||
devOwner
|
||||
devType
|
||||
devVendor
|
||||
devLastConnection
|
||||
devStatus
|
||||
}
|
||||
count
|
||||
}
|
||||
}
|
||||
```http
|
||||
Authorization: Bearer <API_TOKEN>
|
||||
```
|
||||
|
||||
See also: [Debugging GraphQL issues](./DEBUG_GRAPHQL.md)
|
||||
|
||||
### `curl` Command
|
||||
|
||||
You can use the following `curl` command to execute the query.
|
||||
|
||||
```sh
|
||||
curl 'http://host:GRAPHQL_PORT/graphql' -X POST -H 'Authorization: Bearer API_TOKEN' -H 'Content-Type: application/json' --data '{
|
||||
"query": "query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }",
|
||||
"variables": {
|
||||
"options": {
|
||||
"page": 1,
|
||||
"limit": 10,
|
||||
"sort": [{ "field": "devName", "order": "asc" }],
|
||||
"search": "",
|
||||
"status": "connected"
|
||||
}
|
||||
}
|
||||
}'
|
||||
```
|
||||
|
||||
### Explanation:
|
||||
|
||||
1. **GraphQL Query**:
|
||||
- The `query` parameter contains the GraphQL query as a string.
|
||||
- The `variables` parameter contains the input variables for the query.
|
||||
|
||||
2. **Query Variables**:
|
||||
- `page`: Specifies the page number of results to fetch.
|
||||
- `limit`: Specifies the number of results per page.
|
||||
- `sort`: Specifies the sorting options, with `field` being the field to sort by and `order` being the sort order (`asc` for ascending or `desc` for descending).
|
||||
- `search`: A search term to filter the devices.
|
||||
- `status`: The status filter to apply (valid values are `my_devices` (determined by the `UI_MY_DEVICES` setting), `connected`, `favorites`, `new`, `down`, `archived`, `offline`).
|
||||
|
||||
3. **`curl` Command**:
|
||||
- The `-X POST` option specifies that we are making a POST request.
|
||||
- The `-H "Content-Type: application/json"` option sets the content type of the request to JSON.
|
||||
- The `-d` option provides the request payload, which includes the GraphQL query and variables.
|
||||
|
||||
### Sample Response
|
||||
|
||||
The response will be in JSON format, similar to the following:
|
||||
If the token is missing or invalid, the server will return:
|
||||
|
||||
```json
|
||||
{
|
||||
"data": {
|
||||
"devices": {
|
||||
"devices": [
|
||||
{
|
||||
"rowid": 1,
|
||||
"devMac": "00:11:22:33:44:55",
|
||||
"devName": "Device 1",
|
||||
"devOwner": "Owner 1",
|
||||
"devType": "Type 1",
|
||||
"devVendor": "Vendor 1",
|
||||
"devLastConnection": "2025-01-01T00:00:00Z",
|
||||
"devStatus": "connected"
|
||||
},
|
||||
{
|
||||
"rowid": 2,
|
||||
"devMac": "66:77:88:99:AA:BB",
|
||||
"devName": "Device 2",
|
||||
"devOwner": "Owner 2",
|
||||
"devType": "Type 2",
|
||||
"devVendor": "Vendor 2",
|
||||
"devLastConnection": "2025-01-02T00:00:00Z",
|
||||
"devStatus": "connected"
|
||||
}
|
||||
],
|
||||
"count": 2
|
||||
}
|
||||
}
|
||||
}
|
||||
{ "error": "Forbidden" }
|
||||
```
|
||||
|
||||
## API Endpoint: JSON files
|
||||
---
|
||||
|
||||
This API endpoint retrieves static files that are periodically updated.
|
||||
|
||||
- Endpoint URL: `php/server/query_json.php?file=<file name>`
|
||||
- Host: `same as front end (web ui)`
|
||||
- Port: `20211` or as defined by the $PORT docker environment variable (same as the port for the web ui)
|
||||
|
||||
### When are the endpoints updated
|
||||
|
||||
The files are regenerated whenever the corresponding objects in the application change.
|
||||
|
||||
### Location of the endpoints
|
||||
|
||||
In the container, these files are located under the `/app/api/` folder. You can access them via the `/php/server/query_json.php?file=user_notifications.json` endpoint.
|
||||
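For example, one of these files can be fetched with a simple authorized request (a sketch; it assumes the Bearer-token authorization described in the introduction and the default web UI port):

```sh
# Fetch the devices table endpoint (the file name must be one of the files listed below)
curl 'http://<server_ip>:20211/php/server/query_json.php?file=table_devices.json' \
  -H 'Authorization: Bearer <API_TOKEN>'
```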
|
||||
### Available endpoints
|
||||
|
||||
You can access the following files:
|
||||
|
||||
| File name | Description |
|
||||
|----------------------|----------------------|
|
||||
| `notification_json_final.json` | The json version of the last notification (e.g. used for webhooks - [sample JSON](https://github.com/jokob-sk/NetAlertX/blob/main/front/report_templates/webhook_json_sample.json)). |
|
||||
| `table_devices.json` | All of the available Devices detected by the app. |
|
||||
| `table_plugins_events.json` | The list of the unprocessed (pending) notification events (plugins_events DB table). |
|
||||
| `table_plugins_history.json` | The list of notification events history. |
|
||||
| `table_plugins_objects.json` | The content of the plugins_objects table. Find more info on the [Plugin system here](https://github.com/jokob-sk/NetAlertX/tree/main/docs/PLUGINS.md)|
|
||||
| `language_strings.json` | The content of the language_strings table, which in turn is loaded from the plugins `config.json` definitions. |
|
||||
| `table_custom_endpoint.json` | A custom endpoint generated by the SQL query specified by the `API_CUSTOM_SQL` setting. |
|
||||
| `table_settings.json` | The content of the settings table. |
|
||||
| `app_state.json` | Contains the current application state. |
|
||||
|
||||
|
||||
### JSON Data format
|
||||
|
||||
The endpoints starting with the `table_` prefix contain most, if not all, data contained in the corresponding database table. The common format for those is:
|
||||
|
||||
```JSON
|
||||
{
|
||||
"data": [
|
||||
{
|
||||
"db_column_name": "data",
|
||||
"db_column_name2": "data2"
|
||||
},
|
||||
{
|
||||
"db_column_name": "data3",
|
||||
"db_column_name2": "data4"
|
||||
}
|
||||
]
|
||||
}
|
||||
## Base URL
|
||||
|
||||
```
|
||||
|
||||
Example JSON of the `table_devices.json` endpoint with two Devices (database rows):
|
||||
|
||||
```JSON
|
||||
{
|
||||
"data": [
|
||||
{
|
||||
"devMac": "Internet",
|
||||
"devName": "Net - Huawei",
|
||||
"devType": "Router",
|
||||
"devVendor": null,
|
||||
"devGroup": "Always on",
|
||||
"devFirstConnection": "2021-01-01 00:00:00",
|
||||
"devLastConnection": "2021-01-28 22:22:11",
|
||||
"devLastIP": "192.168.1.24",
|
||||
"devStaticIP": 0,
|
||||
"devPresentLastScan": 1,
|
||||
"devLastNotification": "2023-01-28 22:22:28.998715",
|
||||
"devIsNew": 0,
|
||||
"devParentMAC": "",
|
||||
"devParentPort": "",
|
||||
"devIcon": "globe"
|
||||
},
|
||||
{
|
||||
"devMac": "a4:8f:ff:aa:ba:1f",
|
||||
"devName": "Net - USG",
|
||||
"devType": "Firewall",
|
||||
"devVendor": "Ubiquiti Inc",
|
||||
"devGroup": "",
|
||||
"devFirstConnection": "2021-02-12 22:05:00",
|
||||
"devLastConnection": "2021-07-17 15:40:00",
|
||||
"devLastIP": "192.168.1.1",
|
||||
"devStaticIP": 1,
|
||||
"devPresentLastScan": 1,
|
||||
"devLastNotification": "2021-07-17 15:40:10.667717",
|
||||
"devIsNew": 0,
|
||||
"devParentMAC": "Internet",
|
||||
"devParentPort": 1,
|
||||
"devIcon": "shield-halved"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
http://<server>:<GRAPHQL_PORT>/
|
||||
```
|
||||
|
||||
## API Endpoint: /log files
|
||||
---
|
||||
|
||||
This API endpoint retrieves files from the `/app/log` folder.
|
||||
## Endpoints
|
||||
|
||||
- Endpoint URL: `php/server/query_logs.php?file=<file name>`
|
||||
- Host: `same as front end (web ui)`
|
||||
- Port: `20211` or as defined by the $PORT docker environment variable (same as the port for the web ui)
|
||||
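For example, the main application log can be retrieved like this (a sketch; it assumes the same Bearer-token authorization as the other endpoints):

```sh
curl 'http://<server_ip>:20211/php/server/query_logs.php?file=app.log' \
  -H 'Authorization: Bearer <API_TOKEN>'
```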
> [!TIP]
|
||||
> When retrieving devices or settings, try the GraphQL API endpoint first, as it is read-optimized.
|
||||
|
||||
| File | Description |
|
||||
|--------------------------|---------------------------------------------------------------|
|
||||
| `IP_changes.log` | Logs of IP address changes |
|
||||
| `app.log` | Main application log |
|
||||
| `app.php_errors.log` | PHP error log |
|
||||
| `app_front.log` | Frontend application log |
|
||||
| `app_nmap.log` | Logs of Nmap scan results |
|
||||
| `db_is_locked.log` | Logs when the database is locked |
|
||||
| `execution_queue.log` | Logs of execution queue activities |
|
||||
| `plugins/` | Directory for temporary plugin-related files (not accessible) |
|
||||
| `report_output.html` | HTML report output |
|
||||
| `report_output.json` | JSON format report output |
|
||||
| `report_output.txt` | Text format report output |
|
||||
| `stderr.log` | Logs of standard error output |
|
||||
| `stdout.log` | Logs of standard output |
|
||||
* [Device API Endpoints](API_DEVICE.md) – Manage individual devices
|
||||
* [Devices Collection](API_DEVICES.md) – Bulk operations on multiple devices
|
||||
* [Events](API_EVENTS.md) – Device event logging and management
|
||||
* [Sessions](API_SESSIONS.md) – Connection sessions and history
|
||||
* [Settings](API_SETTINGS.md) – Settings
|
||||
* Messaging:
|
||||
* [In app messaging](API_MESSAGING_IN_APP.md) - In-app messaging
|
||||
* [Metrics](API_METRICS.md) – Prometheus metrics and per-device status
|
||||
* [Network Tools](API_NETTOOLS.md) – Utilities like Wake-on-LAN, traceroute, nslookup, nmap, and internet info
|
||||
* [Online History](API_ONLINEHISTORY.md) – Online/offline device records
|
||||
* [GraphQL](API_GRAPHQL.md) – Advanced queries and filtering for Devices, Settings and Language Strings
|
||||
* [Sync](API_SYNC.md) – Synchronization between multiple NetAlertX instances
|
||||
* [Logs](API_LOGS.md) – Purging of logs and adding to the event execution queue for user triggered events
|
||||
* [DB query](API_DBQUERY.md) (⚠ Internal) - Low level database access - use other endpoints if possible
|
||||
|
||||
See [Testing](API_TESTS.md) for example requests and usage.
|
||||
|
||||
## API Endpoint: /config files
|
||||
---
|
||||
|
||||
To retrieve files from the `/app/config` folder.
|
||||
|
||||
- Endpoint URL: `php/server/query_config.php?file=<file name>`
|
||||
- Host: `same as front end (web ui)`
|
||||
- Port: `20211` or as defined by the $PORT docker environment variable (same as the port for the web ui)
|
||||
|
||||
| File | Description |
|
||||
|--------------------------|--------------------------------------------------|
|
||||
| `devices.csv` | Devices csv file |
|
||||
| `app.conf` | Application config file |
|
||||
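For example, the main configuration file can be downloaded like this (a sketch; it assumes the same Bearer-token authorization as the other endpoints):

```sh
curl 'http://<server_ip>:20211/php/server/query_config.php?file=app.conf' \
  -H 'Authorization: Bearer <API_TOKEN>'
```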
## Notes
|
||||
|
||||
* All endpoints enforce **Bearer token authentication**.
|
||||
* Errors return JSON with `success: False` and an error message.
|
||||
* GraphQL is available for advanced queries, while REST endpoints cover structured use cases.
|
||||
* Endpoints run on `0.0.0.0:<GRAPHQL_PORT>` with **CORS enabled**.
|
||||
* Use consistent API tokens and node/plugin names when interacting with `/sync` to ensure data integrity.
|
||||
|
||||
183 docs/API_DBQUERY.md (Executable file)
@@ -0,0 +1,183 @@
|
||||
# Database Query API
|
||||
|
||||
The **Database Query API** provides direct, low-level access to the NetAlertX database. It allows **read, write, update, and delete** operations against tables, using **base64-encoded** SQL or structured parameters.
|
||||
|
||||
> [!Warning]
|
||||
> This API is primarily used internally to generate and render the application UI. These endpoints are low-level and powerful, and should be used with caution. Wherever possible, prefer the [standard API endpoints](API.md). Invalid or unsafe queries can corrupt data.
|
||||
> If you need data in a specific format that is not already provided, please open an issue or pull request with a clear, broadly useful use case. This helps ensure new endpoints benefit the wider community rather than relying on raw database queries.
|
||||
|
||||
---
|
||||
|
||||
## Authentication
|
||||
|
||||
All `/dbquery/*` endpoints require an API token in the HTTP headers:
|
||||
|
||||
```http
|
||||
Authorization: Bearer <API_TOKEN>
|
||||
```
|
||||
|
||||
If the token is missing or invalid:
|
||||
|
||||
```json
|
||||
{ "error": "Forbidden" }
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Endpoints
|
||||
|
||||
### 1. `POST /dbquery/read`
|
||||
|
||||
Execute a **read-only** SQL query (e.g., `SELECT`).
|
||||
|
||||
#### Request Body
|
||||
|
||||
```json
|
||||
{
|
||||
"rawSql": "U0VMRUNUICogRlJPTSBERVZJQ0VT" // base64 encoded SQL
|
||||
}
|
||||
```
|
||||
|
||||
Decoded SQL:
|
||||
|
||||
```sql
|
||||
SELECT * FROM Devices;
|
||||
```
|
||||
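For reference, the `rawSql` value can be produced from plain SQL with any base64 encoder. A minimal sketch, assuming GNU coreutils' `base64` (the `-w0` wrap flag may not exist on other systems):

```bash
# Encode the SQL and call /dbquery/read in one go
RAW_SQL=$(printf 'SELECT * FROM Devices' | base64 -w0)
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/dbquery/read" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d "{\"rawSql\": \"$RAW_SQL\"}"
```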
|
||||
#### Response
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"results": [
|
||||
{ "devMac": "AA:BB:CC:DD:EE:FF", "devName": "Phone" }
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/dbquery/read" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"rawSql": "U0VMRUNUICogRlJPTSBERVZJQ0VT"
|
||||
}'
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 2. `POST /dbquery/update` (safer than `/dbquery/write`)
|
||||
|
||||
Update rows in a table by `columnName` + `id`. `/dbquery/update` is parameterized to reduce the risk of SQL injection, while `/dbquery/write` executes raw SQL directly.
|
||||
|
||||
#### Request Body
|
||||
|
||||
```json
|
||||
{
|
||||
"columnName": "devMac",
|
||||
"id": ["AA:BB:CC:DD:EE:FF"],
|
||||
"dbtable": "Devices",
|
||||
"columns": ["devName", "devOwner"],
|
||||
"values": ["Laptop", "Alice"]
|
||||
}
|
||||
```
|
||||
|
||||
#### Response
|
||||
|
||||
```json
|
||||
{ "success": true, "updated_count": 1 }
|
||||
```
|
||||
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/dbquery/update" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"columnName": "devMac",
|
||||
"id": ["AA:BB:CC:DD:EE:FF"],
|
||||
"dbtable": "Devices",
|
||||
"columns": ["devName", "devOwner"],
|
||||
"values": ["Laptop", "Alice"]
|
||||
}'
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 3. `POST /dbquery/write`
|
||||
|
||||
Execute a **write query** (`INSERT`, `UPDATE`, `DELETE`).
|
||||
|
||||
#### Request Body
|
||||
|
||||
```json
|
||||
{
|
||||
"rawSql": "SU5TRVJUIElOVE8gRGV2aWNlcyAoZGV2TWFjLCBkZXYgTmFtZSwgZGV2Rmlyc3RDb25uZWN0aW9uLCBkZXZMYXN0Q29ubmVjdGlvbiwgZGV2TGFzdElQKSBWQUxVRVMgKCc2QTpCQjo0Qzo1RDo2RTonLCAnVGVzdERldmljZScsICcyMDI1LTA4LTMwIDEyOjAwOjAwJywgJzIwMjUtMDgtMzAgMTI6MDA6MDAnLCAnMTAuMC4wLjEwJyk="
|
||||
}
|
||||
```
|
||||
|
||||
Decoded SQL:
|
||||
|
||||
```sql
|
||||
INSERT INTO Devices (devMac, devName, devFirstConnection, devLastConnection, devLastIP)
|
||||
VALUES ('6A:BB:4C:5D:6E', 'TestDevice', '2025-08-30 12:00:00', '2025-08-30 12:00:00', '10.0.0.10');
|
||||
```
|
||||
|
||||
#### Response
|
||||
|
||||
```json
|
||||
{ "success": true, "affected_rows": 1 }
|
||||
```
|
||||
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/dbquery/write" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"rawSql": "SU5TRVJUIElOVE8gRGV2aWNlcyAoZGV2TWFjLCBkZXYgTmFtZSwgZGV2Rmlyc3RDb25uZWN0aW9uLCBkZXZMYXN0Q29ubmVjdGlvbiwgZGV2TGFzdElQKSBWQUxVRVMgKCc2QTpCQjo0Qzo1RDo2RTonLCAnVGVzdERldmljZScsICcyMDI1LTA4LTMwIDEyOjAwOjAwJywgJzIwMjUtMDgtMzAgMTI6MDA6MDAnLCAnMTAuMC4wLjEwJyk="
|
||||
}'
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 4. `POST /dbquery/delete`
|
||||
|
||||
Delete rows in a table by `columnName` + `id`.
|
||||
|
||||
#### Request Body
|
||||
|
||||
```json
|
||||
{
|
||||
"columnName": "devMac",
|
||||
"id": ["AA:BB:CC:DD:EE:FF"],
|
||||
"dbtable": "Devices"
|
||||
}
|
||||
```
|
||||
|
||||
#### Response
|
||||
|
||||
```json
|
||||
{ "success": true, "deleted_count": 1 }
|
||||
```
|
||||
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/dbquery/delete" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"columnName": "devMac",
|
||||
"id": ["AA:BB:CC:DD:EE:FF"],
|
||||
"dbtable": "Devices"
|
||||
}'
|
||||
```
|
||||
233 docs/API_DEVICE.md (Executable file)
@@ -0,0 +1,233 @@
|
||||
# Device API Endpoints
|
||||
|
||||
Manage a **single device** by its MAC address. Operations include retrieval, updates, deletion, resetting properties, and copying data between devices. All endpoints require **authorization** via Bearer token.
|
||||
|
||||
---
|
||||
|
||||
## 1. Retrieve Device Details
|
||||
|
||||
* **GET** `/device/<mac>`
|
||||
Fetch all details for a single device, including:
|
||||
|
||||
* Computed status (`devStatus`) → `On-line`, `Off-line`, or `Down`
|
||||
* Session and event counts (`devSessions`, `devEvents`, `devDownAlerts`)
|
||||
* Presence hours (`devPresenceHours`)
|
||||
* Children devices (`devChildrenDynamic`) and NIC children (`devChildrenNicsDynamic`)
|
||||
|
||||
**Special case**: `mac=new` returns a template for a new device with default values.
|
||||
|
||||
**Response** (success):
|
||||
|
||||
```json
|
||||
{
|
||||
"devMac": "AA:BB:CC:DD:EE:FF",
|
||||
"devName": "Net - Huawei",
|
||||
"devOwner": "Admin",
|
||||
"devType": "Router",
|
||||
"devVendor": "Huawei",
|
||||
"devStatus": "On-line",
|
||||
"devSessions": 12,
|
||||
"devEvents": 5,
|
||||
"devDownAlerts": 1,
|
||||
"devPresenceHours": 32,
|
||||
"devChildrenDynamic": [...],
|
||||
"devChildrenNicsDynamic": [...],
|
||||
...
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Device not found → HTTP 404
|
||||
* Unauthorized → HTTP 403
|
||||
|
||||
---
|
||||
|
||||
## 2. Update Device Fields
|
||||
|
||||
* **POST** `/device/<mac>`
|
||||
Create or update a device record.
|
||||
|
||||
**Request Body**:
|
||||
|
||||
```json
|
||||
{
|
||||
"devName": "New Device",
|
||||
"devOwner": "Admin",
|
||||
"createNew": true
|
||||
}
|
||||
```
|
||||
|
||||
**Behavior**:
|
||||
|
||||
* If `createNew=true` → creates a new device
|
||||
* Otherwise → updates existing device fields
|
||||
|
||||
**Response**:
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Unauthorized → HTTP 403
|
||||
|
||||
---
|
||||
|
||||
## 3. Delete a Device
|
||||
|
||||
* **DELETE** `/device/<mac>/delete`
|
||||
Deletes the device with the given MAC.
|
||||
|
||||
**Response**:
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Unauthorized → HTTP 403
|
||||
|
||||
---
|
||||
|
||||
## 4. Delete All Events for a Device
|
||||
|
||||
* **DELETE** `/device/<mac>/events/delete`
|
||||
Removes all events associated with a device.
|
||||
|
||||
**Response**:
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 5. Reset Device Properties
|
||||
|
||||
* **POST** `/device/<mac>/reset-props`
|
||||
Resets the device's custom properties to default values.
|
||||
|
||||
**Request Body**: Optional JSON for additional parameters.
|
||||
|
||||
**Response**:
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 6. Copy Device Data
|
||||
|
||||
* **POST** `/device/copy`
|
||||
Copy all data from one device to another. If a device exists with `macTo`, it is replaced.
|
||||
|
||||
**Request Body**:
|
||||
|
||||
```json
|
||||
{
|
||||
"macFrom": "AA:BB:CC:DD:EE:FF",
|
||||
"macTo": "11:22:33:44:55:66"
|
||||
}
|
||||
```
|
||||
|
||||
**Response**:
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"message": "Device copied from AA:BB:CC:DD:EE:FF to 11:22:33:44:55:66"
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Missing `macFrom` or `macTo` → HTTP 400
|
||||
* Unauthorized → HTTP 403
|
||||
|
||||
---
|
||||
|
||||
## 7. Update a Single Column
|
||||
|
||||
* **POST** `/device/<mac>/update-column`
|
||||
Update one specific column for a device.
|
||||
|
||||
**Request Body**:
|
||||
|
||||
```json
|
||||
{
|
||||
"columnName": "devName",
|
||||
"columnValue": "Updated Device Name"
|
||||
}
|
||||
```
|
||||
|
||||
**Response** (success):
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Device not found → HTTP 404
|
||||
* Missing `columnName` or `columnValue` → HTTP 400
|
||||
* Unauthorized → HTTP 403
|
||||
|
||||
---
|
||||
|
||||
## Example `curl` Requests
|
||||
|
||||
**Get Device Details**:
|
||||
|
||||
```bash
|
||||
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF" \
|
||||
-H "Authorization: Bearer <API_TOKEN>"
|
||||
```
|
||||
|
||||
**Update Device Fields**:
|
||||
|
||||
```bash
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Content-Type: application/json" \
|
||||
--data '{"devName": "New Device Name"}'
|
||||
```
|
||||
|
||||
**Delete Device**:
|
||||
|
||||
```bash
|
||||
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF/delete" \
|
||||
-H "Authorization: Bearer <API_TOKEN>"
|
||||
```
|
||||
|
||||
**Copy Device Data**:
|
||||
|
||||
```bash
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/device/copy" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Content-Type: application/json" \
|
||||
--data '{"macFrom":"AA:BB:CC:DD:EE:FF","macTo":"11:22:33:44:55:66"}'
|
||||
```
|
||||
|
||||
**Update Single Column**:
|
||||
|
||||
```bash
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF/update-column" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Content-Type: application/json" \
|
||||
--data '{"columnName":"devName","columnValue":"Updated Device"}'
|
||||
```
|
||||
|
||||
249 docs/API_DEVICES.md (Executable file)
@@ -0,0 +1,249 @@
|
||||
# Devices Collection API Endpoints
|
||||
|
||||
The Devices Collection API provides operations to **retrieve, manage, import/export, and filter devices** in bulk. All endpoints require **authorization** via Bearer token.
|
||||
|
||||
---
|
||||
|
||||
## Endpoints
|
||||
|
||||
### 1. Get All Devices
|
||||
|
||||
* **GET** `/devices`
|
||||
Retrieves all devices from the database.
|
||||
|
||||
**Response** (success):
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"devices": [
|
||||
{
|
||||
"devName": "Net - Huawei",
|
||||
"devMAC": "AA:BB:CC:DD:EE:FF",
|
||||
"devIP": "192.168.1.1",
|
||||
"devType": "Router",
|
||||
"devFavorite": 0,
|
||||
"devStatus": "online"
|
||||
},
|
||||
...
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Unauthorized → HTTP 403
|
||||
|
||||
---
|
||||
|
||||
### 2. Delete Devices by MAC
|
||||
|
||||
* **DELETE** `/devices`
|
||||
Deletes devices by MAC address. Supports exact matches or wildcard `*`.
|
||||
|
||||
**Request Body**:
|
||||
|
||||
```json
|
||||
{
|
||||
"macs": ["AA:BB:CC:DD:EE:FF", "11:22:33:*"]
|
||||
}
|
||||
```
|
||||
|
||||
**Behavior**:
|
||||
|
||||
* If `macs` is omitted or `null` → deletes **all devices**.
|
||||
* Wildcards `*` match multiple devices.
|
||||
|
||||
**Response**:
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"deleted_count": 5
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Unauthorized → HTTP 403
|
||||
|
||||
---
|
||||
|
||||
### 3. Delete Devices with Empty MACs
|
||||
|
||||
* **DELETE** `/devices/empty-macs`
|
||||
Removes all devices where MAC address is null or empty.
|
||||
|
||||
**Response**:
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"deleted": 3
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 4. Delete Unknown Devices
|
||||
|
||||
* **DELETE** `/devices/unknown`
|
||||
Deletes devices with names marked as `(unknown)` or `(name not found)`.
|
||||
|
||||
**Response**:
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"deleted": 2
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 5. Export Devices
|
||||
|
||||
* **GET** `/devices/export` or `/devices/export/<format>`
|
||||
Exports all devices in **CSV** (default) or **JSON** format.
|
||||
|
||||
**Query Parameter / URL Parameter**:
|
||||
|
||||
* `format` (optional) → `csv` (default) or `json`
|
||||
|
||||
**CSV Response**:
|
||||
|
||||
* Returned as a downloadable CSV file: `Content-Disposition: attachment; filename=devices.csv`
|
||||
|
||||
**JSON Response**:
|
||||
|
||||
```json
|
||||
{
|
||||
"data": [
|
||||
{ "devName": "Net - Huawei", "devMAC": "AA:BB:CC:DD:EE:FF", ... },
|
||||
...
|
||||
],
|
||||
"columns": ["devName", "devMAC", "devIP", "devType", "devFavorite", "devStatus"]
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Unsupported format → HTTP 400
|
||||
|
||||
---
|
||||
|
||||
### 6. Import Devices from CSV
|
||||
|
||||
* **POST** `/devices/import`
|
||||
Imports devices from an uploaded CSV or base64-encoded CSV content.
|
||||
|
||||
**Request Body** (multipart file or JSON with `content` field):
|
||||
|
||||
```json
|
||||
{
|
||||
"content": "<base64-encoded CSV content>"
|
||||
}
|
||||
```
|
||||
|
||||
**Response**:
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"inserted": 25,
|
||||
"skipped_lines": [3, 7]
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Missing file or content → HTTP 400 / 404
|
||||
* CSV malformed → HTTP 400
|
||||
|
||||
---
|
||||
|
||||
### 7. Get Device Totals
|
||||
|
||||
* **GET** `/devices/totals`
|
||||
Returns counts of devices by various categories.
|
||||
|
||||
**Response**:
|
||||
|
||||
```json
|
||||
[
|
||||
120, // Total devices
|
||||
85, // Connected
|
||||
5, // Favorites
|
||||
10, // New
|
||||
8, // Down
|
||||
12 // Archived
|
||||
]
|
||||
```
|
||||
|
||||
*Order: `[all, connected, favorites, new, down, archived]`*
|
||||
|
||||
---
|
||||
|
||||
### 8. Get Devices by Status
|
||||
|
||||
* **GET** `/devices/by-status?status=<status>`
|
||||
Returns devices filtered by status.
|
||||
|
||||
**Query Parameter**:
|
||||
|
||||
* `status` → Supported values: `online`, `offline`, `down`, `archived`, `favorites`, `new`, `my`
|
||||
* If omitted, returns **all devices**.
|
||||
|
||||
**Response** (success):
|
||||
|
||||
```json
|
||||
[
|
||||
{ "id": "AA:BB:CC:DD:EE:FF", "title": "Net - Huawei", "favorite": 0 },
|
||||
{ "id": "11:22:33:44:55:66", "title": "★ USG Firewall", "favorite": 1 }
|
||||
]
|
||||
```
|
||||
|
||||
*If `devFavorite=1`, the title is prepended with a star `★`.*
|
||||
|
||||
---
|
||||
|
||||
## Example `curl` Requests
|
||||
|
||||
**Get All Devices**:
|
||||
|
||||
```sh
|
||||
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/devices" \
|
||||
-H "Authorization: Bearer <API_TOKEN>"
|
||||
```
|
||||
|
||||
**Delete Devices by MAC**:
|
||||
|
||||
```sh
|
||||
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/devices" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Content-Type: application/json" \
|
||||
--data '{"macs":["AA:BB:CC:DD:EE:FF","11:22:33:*"]}'
|
||||
```
|
||||
|
||||
**Export Devices CSV**:
|
||||
|
||||
```sh
|
||||
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/devices/export?format=csv" \
|
||||
-H "Authorization: Bearer <API_TOKEN>"
|
||||
```
|
||||
|
||||
**Import Devices from CSV**:
|
||||
|
||||
```sh
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/devices/import" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-F "file=@devices.csv"
|
||||
```
|
||||
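The same import can also be done by sending the CSV as a base64-encoded `content` field (a sketch; assumes GNU coreutils' `base64`):

```sh
CONTENT=$(base64 -w0 devices.csv)
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/devices/import" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  --data "{\"content\": \"$CONTENT\"}"
```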
|
||||
**Get Devices by Status**:
|
||||
|
||||
```sh
|
||||
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/devices/by-status?status=online" \
|
||||
-H "Authorization: Bearer <API_TOKEN>"
|
||||
```
|
||||
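Two further examples for endpoints documented above:

**Get Device Totals**:

```sh
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/devices/totals" \
  -H "Authorization: Bearer <API_TOKEN>"
```

**Delete Unknown Devices**:

```sh
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/devices/unknown" \
  -H "Authorization: Bearer <API_TOKEN>"
```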
|
||||
169 docs/API_EVENTS.md (Executable file)
@@ -0,0 +1,169 @@
|
||||
# Events API Endpoints
|
||||
|
||||
The Events API provides access to **device event logs**, allowing creation, retrieval, deletion, and summary of events over time.
|
||||
|
||||
---
|
||||
|
||||
## Endpoints
|
||||
|
||||
### 1. Create Event
|
||||
|
||||
* **POST** `/events/create/<mac>`
|
||||
Create an event for a device identified by its MAC address.
|
||||
|
||||
**Request Body** (JSON):
|
||||
|
||||
```json
|
||||
{
|
||||
"ip": "192.168.1.10",
|
||||
"event_type": "Device Down",
|
||||
"additional_info": "Optional info about the event",
|
||||
"pending_alert": 1,
|
||||
"event_time": "2025-08-24T12:00:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
* **Parameters**:
|
||||
|
||||
* `ip` (string, optional): IP address of the device
|
||||
* `event_type` (string, optional): Type of event (default `"Device Down"`)
|
||||
* `additional_info` (string, optional): Extra information
|
||||
* `pending_alert` (int, optional): 1 if alert email is pending (default 1)
|
||||
* `event_time` (ISO datetime, optional): Event timestamp; defaults to current time
|
||||
|
||||
**Response** (JSON):
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"message": "Event created for 00:11:22:33:44:55"
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 2. Get Events
|
||||
|
||||
* **GET** `/events`
|
||||
Retrieve all events, optionally filtered by MAC address:
|
||||
|
||||
```
|
||||
/events?mac=<mac>
|
||||
```
|
||||
|
||||
**Response**:
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"events": [
|
||||
{
|
||||
"eve_MAC": "00:11:22:33:44:55",
|
||||
"eve_IP": "192.168.1.10",
|
||||
"eve_DateTime": "2025-08-24T12:00:00Z",
|
||||
"eve_EventType": "Device Down",
|
||||
"eve_AdditionalInfo": "",
|
||||
"eve_PendingAlertEmail": 1
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 3. Delete Events
|
||||
|
||||
* **DELETE** `/events/<mac>` → Delete events for a specific MAC
|
||||
* **DELETE** `/events` → Delete **all** events
|
||||
* **DELETE** `/events/<days>` → Delete events older than N days
|
||||
|
||||
**Response**:
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"message": "Deleted events older than <days> days"
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 4. Event Totals Over a Period
|
||||
|
||||
* **GET** `/sessions/totals?period=<period>`
|
||||
Return event and session totals over a given period.
|
||||
|
||||
**Query Parameters**:
|
||||
|
||||
| Parameter | Description |
|
||||
| --------- | -------------------------------------------------------------------------------- |
|
||||
| `period` | Time period for totals, e.g., `"7 days"`, `"1 month"`, `"1 year"`, `"100 years"` |
|
||||
|
||||
**Sample Response** (JSON Array):
|
||||
|
||||
```json
|
||||
[120, 85, 5, 10, 3, 7]
|
||||
```
|
||||
|
||||
**Meaning of Values**:
|
||||
|
||||
1. Total events in the period
|
||||
2. Total sessions
|
||||
3. Missing sessions
|
||||
4. Voided events (`eve_EventType LIKE 'VOIDED%'`)
|
||||
5. New device events (`eve_EventType LIKE 'New Device'`)
|
||||
6. Device down events (`eve_EventType LIKE 'Device Down'`)
|
||||
|
||||
---
|
||||
|
||||
## Notes
|
||||
|
||||
* All endpoints require **authorization** (Bearer token). Unauthorized requests return:
|
||||
|
||||
```json
|
||||
{ "error": "Forbidden" }
|
||||
```
|
||||
|
||||
* Events are stored in the **Events table** with the following fields:
|
||||
`eve_MAC`, `eve_IP`, `eve_DateTime`, `eve_EventType`, `eve_AdditionalInfo`, `eve_PendingAlertEmail`.
|
||||
|
||||
* Event creation automatically logs activity for debugging.
|
||||
|
||||
---
|
||||
|
||||
## Example `curl` Requests
|
||||
|
||||
**Create Event**:
|
||||
|
||||
```sh
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/events/create/00:11:22:33:44:55" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Content-Type: application/json" \
|
||||
--data '{
|
||||
"ip": "192.168.1.10",
|
||||
"event_type": "Device Down",
|
||||
"additional_info": "Power outage",
|
||||
"pending_alert": 1
|
||||
}'
|
||||
```
|
||||
|
||||
**Get Events for a Device**:
|
||||
|
||||
```sh
|
||||
curl "http://<server_ip>:<GRAPHQL_PORT>/events?mac=00:11:22:33:44:55" \
|
||||
-H "Authorization: Bearer <API_TOKEN>"
|
||||
```
|
||||
|
||||
**Delete Events Older Than 30 Days**:
|
||||
|
||||
```sh
|
||||
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/events/30" \
|
||||
-H "Authorization: Bearer <API_TOKEN>"
|
||||
```
|
||||
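**Delete Events for a Specific Device** (per the endpoint described in section 3):

```sh
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/events/00:11:22:33:44:55" \
  -H "Authorization: Bearer <API_TOKEN>"
```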
|
||||
**Get Event Totals for 7 Days**:
|
||||
|
||||
```sh
|
||||
curl "http://<server_ip>:<GRAPHQL_PORT>/sessions/totals?period=7 days" \
|
||||
-H "Authorization: Bearer <API_TOKEN>"
|
||||
```
|
||||
264 docs/API_GRAPHQL.md (Executable file)
@@ -0,0 +1,264 @@
|
||||
# GraphQL API Endpoint
|
||||
|
||||
GraphQL queries are **read-optimized for speed**. Data may be slightly out of date until the file system cache refreshes. The GraphQL endpoints allow you to access the following objects:
|
||||
|
||||
* Devices
|
||||
* Settings
|
||||
* Language Strings (LangStrings)
|
||||
|
||||
## Endpoints
|
||||
|
||||
* **GET** `/graphql`
|
||||
Returns a simple status message (useful for browser or debugging).
|
||||
|
||||
* **POST** `/graphql`
|
||||
Execute GraphQL queries against the `devicesSchema`.
|
||||
|
||||
---
|
||||
|
||||
## Devices Query
|
||||
|
||||
### Sample Query
|
||||
|
||||
```graphql
|
||||
query GetDevices($options: PageQueryOptionsInput) {
|
||||
devices(options: $options) {
|
||||
devices {
|
||||
rowid
|
||||
devMac
|
||||
devName
|
||||
devOwner
|
||||
devType
|
||||
devVendor
|
||||
devLastConnection
|
||||
devStatus
|
||||
}
|
||||
count
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Query Parameters
|
||||
|
||||
| Parameter | Description |
|
||||
| --------- | ------------------------------------------------------------------------------------------------------- |
|
||||
| `page` | Page number of results to fetch. |
|
||||
| `limit` | Number of results per page. |
|
||||
| `sort` | Sorting options (`field` = field name, `order` = `asc` or `desc`). |
|
||||
| `search` | Term to filter devices. |
|
||||
| `status` | Filter devices by status: `my_devices`, `connected`, `favorites`, `new`, `down`, `archived`, `offline`. |
|
||||
| `filters` | Additional filters (array of `{ filterColumn, filterValue }`). |
|
||||
|
||||
---
|
||||
|
||||
### `curl` Example
|
||||
|
||||
```sh
|
||||
curl 'http://host:GRAPHQL_PORT/graphql' \
|
||||
-X POST \
|
||||
-H 'Authorization: Bearer API_TOKEN' \
|
||||
-H 'Content-Type: application/json' \
|
||||
--data '{
|
||||
"query": "query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }",
|
||||
"variables": {
|
||||
"options": {
|
||||
"page": 1,
|
||||
"limit": 10,
|
||||
"sort": [{ "field": "devName", "order": "asc" }],
|
||||
"search": "",
|
||||
"status": "connected"
|
||||
}
|
||||
}
|
||||
}'
|
||||
```
|
||||
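The `filters` option from the parameter table above can be supplied in the same variables object. A minimal sketch (the column `devVendor` is only an illustration):

```sh
curl 'http://host:GRAPHQL_PORT/graphql' \
  -X POST \
  -H 'Authorization: Bearer API_TOKEN' \
  -H 'Content-Type: application/json' \
  --data '{
    "query": "query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { devMac devName devVendor } count } }",
    "variables": {
      "options": {
        "page": 1,
        "limit": 25,
        "filters": [{ "filterColumn": "devVendor", "filterValue": "Ubiquiti Inc" }]
      }
    }
  }'
```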
|
||||
---
|
||||
|
||||
### Sample Response
|
||||
|
||||
```json
|
||||
{
|
||||
"data": {
|
||||
"devices": {
|
||||
"devices": [
|
||||
{
|
||||
"rowid": 1,
|
||||
"devMac": "00:11:22:33:44:55",
|
||||
"devName": "Device 1",
|
||||
"devOwner": "Owner 1",
|
||||
"devType": "Type 1",
|
||||
"devVendor": "Vendor 1",
|
||||
"devLastConnection": "2025-01-01T00:00:00Z",
|
||||
"devStatus": "connected"
|
||||
}
|
||||
],
|
||||
"count": 1
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Settings Query
|
||||
|
||||
The **settings query** provides access to NetAlertX configuration stored in the settings table.
|
||||
|
||||
### Sample Query
|
||||
|
||||
```graphql
|
||||
query GetSettings {
|
||||
settings {
|
||||
settings {
|
||||
setKey
|
||||
setName
|
||||
setDescription
|
||||
setType
|
||||
setOptions
|
||||
setGroup
|
||||
setValue
|
||||
setEvents
|
||||
setOverriddenByEnv
|
||||
}
|
||||
count
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Schema Fields
|
||||
|
||||
| Field | Type | Description |
|
||||
| -------------------- | ------- | ------------------------------------------------------------------------ |
|
||||
| `setKey` | String | Unique key identifier for the setting. |
|
||||
| `setName` | String | Human-readable name. |
|
||||
| `setDescription` | String | Description or documentation of the setting. |
|
||||
| `setType` | String | Data type (`string`, `int`, `bool`, `json`, etc.). |
|
||||
| `setOptions` | String | Available options (for dropdown/select-type settings). |
|
||||
| `setGroup` | String | Group/category the setting belongs to. |
|
||||
| `setValue` | String | Current value of the setting. |
|
||||
| `setEvents` | String | Events or triggers related to this setting. |
|
||||
| `setOverriddenByEnv` | Boolean | Whether the setting is overridden by an environment variable at runtime. |
|
||||
|
||||
---
|
||||
|
||||
### `curl` Example
|
||||
|
||||
```sh
|
||||
curl 'http://host:GRAPHQL_PORT/graphql' \
|
||||
-X POST \
|
||||
-H 'Authorization: Bearer API_TOKEN' \
|
||||
-H 'Content-Type: application/json' \
|
||||
--data '{
|
||||
"query": "query GetSettings { settings { settings { setKey setName setDescription setType setOptions setGroup setValue setEvents setOverriddenByEnv } count } }"
|
||||
}'
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Sample Response
|
||||
|
||||
```json
|
||||
{
|
||||
"data": {
|
||||
"settings": {
|
||||
"settings": [
|
||||
{
|
||||
"setKey": "UI_MY_DEVICES",
|
||||
"setName": "My Devices Filter",
|
||||
"setDescription": "Defines which statuses to include in the 'My Devices' view.",
|
||||
"setType": "list",
|
||||
"setOptions": "[\"online\",\"new\",\"down\",\"offline\",\"archived\"]",
|
||||
"setGroup": "UI",
|
||||
"setValue": "[\"online\",\"new\"]",
|
||||
"setEvents": null,
|
||||
"setOverriddenByEnv": false
|
||||
},
|
||||
{
|
||||
"setKey": "NETWORK_DEVICE_TYPES",
|
||||
"setName": "Network Device Types",
|
||||
"setDescription": "Types of devices considered as network infrastructure.",
|
||||
"setType": "list",
|
||||
"setOptions": "[\"Router\",\"Switch\",\"AP\"]",
|
||||
"setGroup": "Network",
|
||||
"setValue": "[\"Router\",\"Switch\"]",
|
||||
"setEvents": null,
|
||||
"setOverriddenByEnv": true
|
||||
}
|
||||
],
|
||||
"count": 2
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
---
|
||||
|
||||
## LangStrings Query
|
||||
|
||||
The **LangStrings query** provides access to localized strings. Supports filtering by `langCode` and `langStringKey`. If the requested string is missing or empty, you can optionally fallback to `en_us`.
|
||||
|
||||
### Sample Query
|
||||
|
||||
```graphql
|
||||
query GetLangStrings {
|
||||
langStrings(langCode: "de_de", langStringKey: "settings_other_scanners") {
|
||||
langStrings {
|
||||
langCode
|
||||
langStringKey
|
||||
langStringText
|
||||
}
|
||||
count
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Query Parameters
|
||||
|
||||
| Parameter | Type | Description |
|
||||
| ---------------- | ------- | ---------------------------------------------------------------------------------------- |
|
||||
| `langCode` | String | Optional language code (e.g., `en_us`, `de_de`). If omitted, all languages are returned. |
|
||||
| `langStringKey` | String | Optional string key to retrieve a specific entry. |
|
||||
| `fallback_to_en` | Boolean | Optional (default `true`). If `true`, empty or missing strings fallback to `en_us`. |
|
||||
|
||||
### `curl` Example
|
||||
|
||||
```sh
|
||||
curl 'http://host:GRAPHQL_PORT/graphql' \
|
||||
-X POST \
|
||||
-H 'Authorization: Bearer API_TOKEN' \
|
||||
-H 'Content-Type: application/json' \
|
||||
--data '{
|
||||
"query": "query GetLangStrings { langStrings(langCode: \"de_de\", langStringKey: \"settings_other_scanners\") { langStrings { langCode langStringKey langStringText } count } }"
|
||||
}'
|
||||
```
|
||||
|
||||
### Sample Response
|
||||
|
||||
```json
|
||||
{
|
||||
"data": {
|
||||
"langStrings": {
|
||||
"count": 1,
|
||||
"langStrings": [
|
||||
{
|
||||
"langCode": "de_de",
|
||||
"langStringKey": "settings_other_scanners",
|
||||
"langStringText": "Other, non-device scanner plugins that are currently enabled." // falls back to en_us if empty
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Notes
|
||||
|
||||
* Device, settings, and LangStrings queries can be combined in **one request** since GraphQL supports batching (see the sketch after this list).
|
||||
* The `fallback_to_en` feature ensures UI always has a value even if a translation is missing.
|
||||
* Data is **cached in memory** per JSON file; changes to language or plugin files will only refresh after the cache detects a file modification.
|
||||
* The `setOverriddenByEnv` flag helps identify setting values that are locked at container runtime.
|
||||
* The schema is **read-only** — updates must be performed through other APIs or configuration management. See the other [API](API.md) endpoints for details.
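Because GraphQL supports batching, the sketch below combines a settings query and a LangStrings query in a single request. It is a minimal example; adjust `host`, `GRAPHQL_PORT`, and `API_TOKEN` to your environment and extend the selection sets as needed.

```sh
# One request, two root queries: settings count and English language-string count (sketch).
curl 'http://host:GRAPHQL_PORT/graphql' \
  -X POST \
  -H 'Authorization: Bearer API_TOKEN' \
  -H 'Content-Type: application/json' \
  --data '{
    "query": "query Combined { settings { count } langStrings(langCode: \"en_us\") { count } }"
  }'
```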
|
||||
|
||||
docs/API_LOGS.md (new file, 179 lines)
@@ -0,0 +1,179 @@
|
||||
# Logs API Endpoints
|
||||
|
||||
Manage or purge application log files stored under `/app/log` and manage the execution queue. These endpoints are primarily used for maintenance tasks such as clearing accumulated logs or adding system actions without restarting the container.
|
||||
|
||||
Only specific, pre-approved log files can be purged for security and stability reasons.
|
||||
|
||||
---
|
||||
|
||||
## Delete (Purge) a Log File
|
||||
|
||||
* **DELETE** `/logs?file=<log_file>` → Purge the contents of an allowed log file.
|
||||
|
||||
**Query Parameter:**
|
||||
|
||||
* `file` → The name of the log file to purge (e.g., `app.log`, `stdout.log`)
|
||||
|
||||
**Allowed Files:**
|
||||
|
||||
```
|
||||
app.log
|
||||
app_front.log
|
||||
IP_changes.log
|
||||
stdout.log
|
||||
stderr.log
|
||||
app.php_errors.log
|
||||
execution_queue.log
|
||||
db_is_locked.log
|
||||
```
|
||||
|
||||
**Authorization:**
|
||||
Requires a valid API token in the `Authorization` header.
|
||||
|
||||
---
|
||||
|
||||
### `curl` Example (Success)
|
||||
|
||||
```sh
|
||||
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=app.log' \
|
||||
-H 'Authorization: Bearer <API_TOKEN>' \
|
||||
-H 'Accept: application/json'
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"message": "[clean_log] File app.log purged successfully"
|
||||
}
|
||||
```
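To purge several of the allowed files in one pass, the same call can be wrapped in a small shell loop. This is a sketch that reuses the placeholders above; only file names from the allowed list will succeed.

```sh
# Purge a set of allowed log files one by one (sketch; adjust placeholders).
for f in app.log app_front.log stdout.log stderr.log; do
  curl -s -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/logs?file=${f}" \
    -H 'Authorization: Bearer <API_TOKEN>' \
    -H 'Accept: application/json'
  echo   # separate the JSON responses
done
```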
|
||||
|
||||
---
|
||||
|
||||
### `curl` Example (Not Allowed)
|
||||
|
||||
```sh
|
||||
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=not_allowed.log' \
|
||||
-H 'Authorization: Bearer <API_TOKEN>' \
|
||||
-H 'Accept: application/json'
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": false,
|
||||
"message": "[clean_log] File not_allowed.log is not allowed to be purged"
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### `curl` Example (Unauthorized)
|
||||
|
||||
```sh
|
||||
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=app.log' \
|
||||
-H 'Accept: application/json'
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"error": "Forbidden"
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Add an Action to the Execution Queue
|
||||
|
||||
* **POST** `/logs/add-to-execution-queue` → Add a system action to the execution queue.
|
||||
|
||||
**Request Body (JSON):**
|
||||
|
||||
```json
|
||||
{
|
||||
"action": "update_api|devices"
|
||||
}
|
||||
```
|
||||
|
||||
**Authorization:**
|
||||
Requires a valid API token in the `Authorization` header.
|
||||
|
||||
---
|
||||
|
||||
### `curl` Example (Success)
|
||||
|
||||
The request below updates the API cache for Devices:
|
||||
|
||||
```sh
|
||||
curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \
|
||||
-H 'Authorization: Bearer <API_TOKEN>' \
|
||||
-H 'Content-Type: application/json' \
|
||||
--data '{"action": "update_api|devices"}'
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"message": "[UserEventsQueueInstance] Action \"update_api|devices\" added to the execution queue."
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### `curl` Example (Missing Parameter)
|
||||
|
||||
```sh
|
||||
curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \
|
||||
-H 'Authorization: Bearer <API_TOKEN>' \
|
||||
-H 'Content-Type: application/json' \
|
||||
--data '{}'
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": false,
|
||||
"message": "Missing parameters",
|
||||
"error": "Missing required 'action' field in JSON body"
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### `curl` Example (Unauthorized)
|
||||
|
||||
```sh
|
||||
curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \
|
||||
-H 'Content-Type: application/json' \
|
||||
--data '{"action": "update_api|devices"}'
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"error": "Forbidden"
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Notes
|
||||
|
||||
* Only predefined files in `/app/log` can be purged — arbitrary paths are **not permitted**.
|
||||
* When a log file is purged:
|
||||
|
||||
* Its content is replaced with a short marker text: `"File manually purged"`.
|
||||
* A backend log entry is created via `mylog()`.
|
||||
* A frontend notification is generated via `write_notification()`.
|
||||
* Execution queue actions are appended to `execution_queue.log` and can be processed asynchronously by background tasks or workflows.
|
||||
* Unauthorized or invalid attempts are safely logged and rejected.
|
||||
* For advanced log retrieval, analysis, or structured querying, use the frontend log viewer.
|
||||
* Always ensure that sensitive or production logs are handled carefully — purging cannot be undone.
|
||||
docs/API_MESSAGING_IN_APP.md (new executable file, 173 lines)
@@ -0,0 +1,173 @@
|
||||
# In-app Notifications API
|
||||
|
||||
Manage in-app notifications for users. Notifications can be written, retrieved, marked as read, or deleted.
|
||||
|
||||
---
|
||||
|
||||
### Write Notification
|
||||
|
||||
* **POST** `/messaging/in-app/write` → Create a new in-app notification.
|
||||
|
||||
**Request Body:**
|
||||
|
||||
```json
|
||||
{
|
||||
"content": "This is a test notification",
|
||||
"level": "alert" // optional, ["interrupt","info","alert"] default: "alert"
|
||||
}
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true
|
||||
}
|
||||
```
|
||||
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/write" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"content": "This is a test notification",
|
||||
"level": "alert"
|
||||
}'
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Get Unread Notifications
|
||||
|
||||
* **GET** `/messaging/in-app/unread` → Retrieve all unread notifications.
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
[
|
||||
{
|
||||
"timestamp": "2025-10-10T12:34:56",
|
||||
"guid": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
|
||||
"read": 0,
|
||||
"level": "alert",
|
||||
"content": "This is a test notification"
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/unread" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json"
|
||||
```
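For scripting (for example, a simple unread-count badge), the returned array can be piped through `jq`. A sketch, assuming `jq` is installed:

```bash
# Count unread in-app notifications (sketch; assumes jq).
curl -s "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/unread" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Accept: application/json" | jq 'length'
```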
|
||||
|
||||
---
|
||||
|
||||
### Mark All Notifications as Read
|
||||
|
||||
* **POST** `/messaging/in-app/read/all` → Mark all notifications as read.
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true
|
||||
}
|
||||
```
|
||||
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/read/all" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Mark Single Notification as Read
|
||||
|
||||
* **POST** `/messaging/in-app/read/<guid>` → Mark a single notification as read using its GUID.
|
||||
|
||||
**Response (success):**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true
|
||||
}
|
||||
```
|
||||
|
||||
**Response (failure):**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": false,
|
||||
"error": "Notification not found"
|
||||
}
|
||||
```
|
||||
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/read/f47ac10b-58cc-4372-a567-0e02b2c3d479" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Delete All Notifications
|
||||
|
||||
* **DELETE** `/messaging/in-app/delete` → Remove all notifications from the system.
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true
|
||||
}
|
||||
```
|
||||
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/delete" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Delete Single Notification
|
||||
|
||||
* **DELETE** `/messaging/in-app/delete/<guid>` → Remove a single notification by its GUID.
|
||||
|
||||
**Response (success):**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true
|
||||
}
|
||||
```
|
||||
|
||||
**Response (failure):**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": false,
|
||||
"error": "Notification not found"
|
||||
}
|
||||
```
|
||||
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/delete/f47ac10b-58cc-4372-a567-0e02b2c3d479" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json"
|
||||
```
|
||||
docs/API_METRICS.md (new executable file, 103 lines)
@@ -0,0 +1,103 @@
|
||||
# Metrics API Endpoint
|
||||
|
||||
The `/metrics` endpoint exposes **Prometheus-compatible metrics** for NetAlertX, including aggregate device counts and per-device status.
|
||||
|
||||
---
|
||||
|
||||
## Endpoint Details
|
||||
|
||||
* **GET** `/metrics` → Returns metrics in plain text.
|
||||
* **Host**: NetAlertX server
|
||||
* **Port**: As configured in `GRAPHQL_PORT` (default: `20212`)
|
||||
|
||||
---
|
||||
|
||||
## Example Output
|
||||
|
||||
```text
|
||||
netalertx_connected_devices 31
|
||||
netalertx_offline_devices 54
|
||||
netalertx_down_devices 0
|
||||
netalertx_new_devices 0
|
||||
netalertx_archived_devices 31
|
||||
netalertx_favorite_devices 2
|
||||
netalertx_my_devices 54
|
||||
|
||||
netalertx_device_status{device="Net - Huawei", mac="Internet", ip="1111.111.111.111", vendor="None", first_connection="2021-01-01 00:00:00", last_connection="2025-08-04 17:57:00", dev_type="Router", device_status="Online"} 1
|
||||
netalertx_device_status{device="Net - USG", mac="74:ac:74:ac:74:ac", ip="192.168.1.1", vendor="Ubiquiti Networks Inc.", first_connection="2022-02-12 22:05:00", last_connection="2025-06-07 08:16:49", dev_type="Firewall", device_status="Archived"} 1
|
||||
netalertx_device_status{device="Raspberry Pi 4 LAN", mac="74:ac:74:ac:74:74", ip="192.168.1.9", vendor="Raspberry Pi Trading Ltd", first_connection="2022-02-12 22:05:00", last_connection="2025-08-04 17:57:00", dev_type="Singleboard Computer (SBC)", device_status="Online"} 1
|
||||
...
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Metrics Overview
|
||||
|
||||
### 1. Aggregate Device Counts
|
||||
|
||||
| Metric | Description |
|
||||
| ----------------------------- | ---------------------------------------- |
|
||||
| `netalertx_connected_devices` | Devices currently connected |
|
||||
| `netalertx_offline_devices` | Devices currently offline |
|
||||
| `netalertx_down_devices` | Down/unreachable devices |
|
||||
| `netalertx_new_devices` | Recently detected devices |
|
||||
| `netalertx_archived_devices` | Archived devices |
|
||||
| `netalertx_favorite_devices` | User-marked favorites |
|
||||
| `netalertx_my_devices` | Devices associated with the current user |
|
||||
|
||||
---
|
||||
|
||||
### 2. Per-Device Status
|
||||
|
||||
Metric: `netalertx_device_status`
|
||||
Each device has labels:
|
||||
|
||||
* `device`: friendly name
|
||||
* `mac`: MAC address (or placeholder)
|
||||
* `ip`: last recorded IP
|
||||
* `vendor`: manufacturer or "None"
|
||||
* `first_connection`: timestamp of first detection
|
||||
* `last_connection`: most recent contact
|
||||
* `dev_type`: device type/category
|
||||
* `device_status`: current status (`Online`, `Offline`, `Archived`, `Down`, …)
|
||||
|
||||
Metric value is always `1` (presence indicator).
|
||||
|
||||
---
|
||||
|
||||
## Querying with `curl`
|
||||
|
||||
```sh
|
||||
curl 'http://<server_ip>:<GRAPHQL_PORT>/metrics' \
|
||||
-H 'Authorization: Bearer <API_TOKEN>' \
|
||||
-H 'Accept: text/plain'
|
||||
```
|
||||
|
||||
Replace placeholders:
|
||||
|
||||
* `<server_ip>` – NetAlertX host IP/hostname
|
||||
* `<GRAPHQL_PORT>` – configured port (default `20212`)
|
||||
* `<API_TOKEN>` – your API token
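For a quick shell-based health check you can pull a single counter out of the plain-text exposition. A sketch using `grep`:

```sh
# Print just the connected-devices counter (sketch).
curl -s 'http://<server_ip>:<GRAPHQL_PORT>/metrics' \
  -H 'Authorization: Bearer <API_TOKEN>' \
  -H 'Accept: text/plain' | grep '^netalertx_connected_devices'
```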
|
||||
|
||||
---
|
||||
|
||||
## Prometheus Scraping Configuration
|
||||
|
||||
```yaml
|
||||
scrape_configs:
|
||||
- job_name: 'netalertx'
|
||||
metrics_path: /metrics
|
||||
scheme: http
|
||||
scrape_interval: 60s
|
||||
static_configs:
|
||||
- targets: ['<server_ip>:<GRAPHQL_PORT>']
|
||||
authorization:
|
||||
type: Bearer
|
||||
credentials: <API_TOKEN>
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Grafana Dashboard Template
|
||||
|
||||
Sample template JSON: [Download](./samples/API/Grafana_Dashboard.json)
|
||||
docs/API_NETTOOLS.md (new executable file, 243 lines)
@@ -0,0 +1,243 @@
|
||||
# Net Tools API Endpoints
|
||||
|
||||
The Net Tools API provides **network diagnostic utilities**, including Wake-on-LAN, traceroute, speed testing, DNS resolution, nmap scanning, and internet connection information.
|
||||
|
||||
All endpoints require **authorization** via Bearer token.
|
||||
|
||||
---
|
||||
|
||||
## Endpoints
|
||||
|
||||
### 1. Wake-on-LAN
|
||||
|
||||
* **POST** `/nettools/wakeonlan`
|
||||
Sends a Wake-on-LAN packet to wake a device.
|
||||
|
||||
**Request Body** (JSON):
|
||||
|
||||
```json
|
||||
{
|
||||
"devMac": "AA:BB:CC:DD:EE:FF"
|
||||
}
|
||||
```
|
||||
|
||||
**Response** (success):
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"message": "WOL packet sent",
|
||||
"output": "Sent magic packet to AA:BB:CC:DD:EE:FF"
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Invalid MAC address → HTTP 400
|
||||
* Command failure → HTTP 500
|
||||
|
||||
---
|
||||
|
||||
### 2. Traceroute
|
||||
|
||||
* **POST** `/nettools/traceroute`
|
||||
Performs a traceroute to a specified IP address.
|
||||
|
||||
**Request Body**:
|
||||
|
||||
```json
|
||||
{
|
||||
"devLastIP": "192.168.1.1"
|
||||
}
|
||||
```
|
||||
|
||||
**Response** (success):
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"output": "traceroute output as string"
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Invalid IP → HTTP 400
|
||||
* Traceroute command failure → HTTP 500
|
||||
|
||||
---
|
||||
|
||||
### 3. Speedtest
|
||||
|
||||
* **GET** `/nettools/speedtest`
|
||||
Runs an internet speed test using `speedtest-cli`.
|
||||
|
||||
**Response** (success):
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"output": [
|
||||
"Ping: 15 ms",
|
||||
"Download: 120.5 Mbit/s",
|
||||
"Upload: 22.4 Mbit/s"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Command failure → HTTP 500
|
||||
|
||||
---
|
||||
|
||||
### 4. DNS Lookup (nslookup)
|
||||
|
||||
* **POST** `/nettools/nslookup`
|
||||
Resolves an IP address or hostname using `nslookup`.
|
||||
|
||||
**Request Body**:
|
||||
|
||||
```json
|
||||
{
|
||||
"devLastIP": "8.8.8.8"
|
||||
}
|
||||
```
|
||||
|
||||
**Response** (success):
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"output": [
|
||||
"Server: 8.8.8.8",
|
||||
"Address: 8.8.8.8#53",
|
||||
"Name: google-public-dns-a.google.com"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Missing or invalid `devLastIP` → HTTP 400
|
||||
* Command failure → HTTP 500
|
||||
|
||||
---
|
||||
|
||||
### 5. Nmap Scan
|
||||
|
||||
* **POST** `/nettools/nmap`
|
||||
Runs an nmap scan on a target IP address or range.
|
||||
|
||||
**Request Body**:
|
||||
|
||||
```json
|
||||
{
|
||||
"scan": "192.168.1.0/24",
|
||||
"mode": "fast"
|
||||
}
|
||||
```
|
||||
|
||||
**Supported Modes**:
|
||||
|
||||
| Mode | nmap Arguments |
|
||||
| --------------- | -------------- |
|
||||
| `fast` | `-F` |
|
||||
| `normal` | default |
|
||||
| `detail` | `-A` |
|
||||
| `skipdiscovery` | `-Pn` |
|
||||
|
||||
**Response** (success):
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"mode": "fast",
|
||||
"ip": "192.168.1.0/24",
|
||||
"output": [
|
||||
"Starting Nmap 7.91",
|
||||
"Host 192.168.1.1 is up",
|
||||
"... scan results ..."
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Invalid IP → HTTP 400
|
||||
* Invalid mode → HTTP 400
|
||||
* Command failure → HTTP 500
|
||||
|
||||
---
|
||||
|
||||
### 6. Internet Connection Info
|
||||
|
||||
* **GET** `/nettools/internetinfo`
|
||||
Fetches public internet connection information using `ipinfo.io`.
|
||||
|
||||
**Response** (success):
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"output": "IP: 203.0.113.5 City: Sydney Country: AU Org: Example ISP"
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Failed request or empty response → HTTP 500
|
||||
|
||||
---
|
||||
|
||||
## Example `curl` Requests
|
||||
|
||||
**Wake-on-LAN**:
|
||||
|
||||
```sh
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/nettools/wakeonlan" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Content-Type: application/json" \
|
||||
--data '{"devMac":"AA:BB:CC:DD:EE:FF"}'
|
||||
```
|
||||
|
||||
**Traceroute**:
|
||||
|
||||
```sh
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/nettools/traceroute" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Content-Type: application/json" \
|
||||
--data '{"devLastIP":"192.168.1.1"}'
|
||||
```
|
||||
|
||||
**Speedtest**:
|
||||
|
||||
```sh
|
||||
curl "http://<server_ip>:<GRAPHQL_PORT>/nettools/speedtest" \
|
||||
-H "Authorization: Bearer <API_TOKEN>"
|
||||
```
|
||||
|
||||
**Nslookup**:
|
||||
|
||||
```sh
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/nettools/nslookup" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Content-Type: application/json" \
|
||||
--data '{"devLastIP":"8.8.8.8"}'
|
||||
```
|
||||
|
||||
**Nmap Scan**:
|
||||
|
||||
```sh
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/nettools/nmap" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Content-Type: application/json" \
|
||||
--data '{"scan":"192.168.1.0/24","mode":"fast"}'
|
||||
```
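Because `output` is returned as an array of strings, the raw scan lines can be printed directly with `jq`. A sketch, assuming `jq` is installed:

```sh
# Run a fast nmap scan and print only the output lines (sketch; assumes jq).
curl -s -X POST "http://<server_ip>:<GRAPHQL_PORT>/nettools/nmap" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  --data '{"scan":"192.168.1.0/24","mode":"fast"}' | jq -r '.output[]'
```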
|
||||
|
||||
**Internet Info**:
|
||||
|
||||
```sh
|
||||
curl "http://<server_ip>:<GRAPHQL_PORT>/nettools/internetinfo" \
|
||||
-H "Authorization: Bearer <API_TOKEN>"
|
||||
```
|
||||
docs/API_OLD.md (new executable file, 370 lines)
@@ -0,0 +1,370 @@
|
||||
# [Deprecated] API endpoints
|
||||
|
||||
> [!WARNING]
|
||||
> Some of these endpoints will be deprecated soon. Please refer to the new [API](API.md) endpoint docs for details on the new API layer.
|
||||
|
||||
NetAlertX comes with a couple of API endpoints. All requests need to be authorized: either execute them in a logged-in browser session or pass the value of the `API_TOKEN` setting as a bearer token in the `Authorization` header, for example:
|
||||
|
||||
```sh
|
||||
curl 'http://host:GRAPHQL_PORT/graphql' \
|
||||
-X POST \
|
||||
-H 'Authorization: Bearer API_TOKEN' \
|
||||
-H 'Content-Type: application/json' \
|
||||
--data '{
|
||||
"query": "query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }",
|
||||
"variables": {
|
||||
"options": {
|
||||
"page": 1,
|
||||
"limit": 10,
|
||||
"sort": [{ "field": "devName", "order": "asc" }],
|
||||
"search": "",
|
||||
"status": "connected"
|
||||
}
|
||||
}
|
||||
}'
|
||||
```
|
||||
|
||||
## API Endpoint: GraphQL
|
||||
|
||||
- Endpoint URL: `php/server/query_graphql.php`
|
||||
- Host: `same as front end (web ui)`
|
||||
- Port: `20212` or as defined by the `GRAPHQL_PORT` setting
|
||||
|
||||
### Example Query to Fetch Devices
|
||||
|
||||
First, let's define the GraphQL query to fetch devices with pagination and sorting options.
|
||||
|
||||
```graphql
|
||||
query GetDevices($options: PageQueryOptionsInput) {
|
||||
devices(options: $options) {
|
||||
devices {
|
||||
rowid
|
||||
devMac
|
||||
devName
|
||||
devOwner
|
||||
devType
|
||||
devVendor
|
||||
devLastConnection
|
||||
devStatus
|
||||
}
|
||||
count
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
See also: [Debugging GraphQL issues](./DEBUG_API_SERVER.md)
|
||||
|
||||
### `curl` Command
|
||||
|
||||
You can use the following `curl` command to execute the query.
|
||||
|
||||
```sh
|
||||
curl 'http://host:GRAPHQL_PORT/graphql' -X POST -H 'Authorization: Bearer API_TOKEN' -H 'Content-Type: application/json' --data '{
|
||||
"query": "query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }",
|
||||
"variables": {
|
||||
"options": {
|
||||
"page": 1,
|
||||
"limit": 10,
|
||||
"sort": [{ "field": "devName", "order": "asc" }],
|
||||
"search": "",
|
||||
"status": "connected"
|
||||
}
|
||||
}
|
||||
}'
|
||||
```
|
||||
|
||||
### Explanation:
|
||||
|
||||
1. **GraphQL Query**:
|
||||
- The `query` parameter contains the GraphQL query as a string.
|
||||
- The `variables` parameter contains the input variables for the query.
|
||||
|
||||
2. **Query Variables**:
|
||||
- `page`: Specifies the page number of results to fetch.
|
||||
- `limit`: Specifies the number of results per page.
|
||||
- `sort`: Specifies the sorting options, with `field` being the field to sort by and `order` being the sort order (`asc` for ascending or `desc` for descending).
|
||||
- `search`: A search term to filter the devices.
|
||||
- `status`: The status filter to apply (valid values are `my_devices` (determined by the `UI_MY_DEVICES` setting), `connected`, `favorites`, `new`, `down`, `archived`, `offline`).
|
||||
|
||||
3. **`curl` Command**:
|
||||
- The `-X POST` option specifies that we are making a POST request.
|
||||
- The `-H "Content-Type: application/json"` option sets the content type of the request to JSON.
|
||||
- The `-d` option provides the request payload, which includes the GraphQL query and variables.
|
||||
|
||||
### Sample Response
|
||||
|
||||
The response will be in JSON format, similar to the following:
|
||||
|
||||
```json
|
||||
{
|
||||
"data": {
|
||||
"devices": {
|
||||
"devices": [
|
||||
{
|
||||
"rowid": 1,
|
||||
"devMac": "00:11:22:33:44:55",
|
||||
"devName": "Device 1",
|
||||
"devOwner": "Owner 1",
|
||||
"devType": "Type 1",
|
||||
"devVendor": "Vendor 1",
|
||||
"devLastConnection": "2025-01-01T00:00:00Z",
|
||||
"devStatus": "connected"
|
||||
},
|
||||
{
|
||||
"rowid": 2,
|
||||
"devMac": "66:77:88:99:AA:BB",
|
||||
"devName": "Device 2",
|
||||
"devOwner": "Owner 2",
|
||||
"devType": "Type 2",
|
||||
"devVendor": "Vendor 2",
|
||||
"devLastConnection": "2025-01-02T00:00:00Z",
|
||||
"devStatus": "connected"
|
||||
}
|
||||
],
|
||||
"count": 2
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## API Endpoint: JSON files
|
||||
|
||||
This API endpoint retrieves static files that are periodically updated.
|
||||
|
||||
- Endpoint URL: `php/server/query_json.php?file=<file name>`
|
||||
- Host: `same as front end (web ui)`
|
||||
- Port: `20211` or as defined by the $PORT docker environment variable (same as the port for the web ui)
|
||||
|
||||
### When are the endpoints updated?

The JSON files are regenerated whenever the objects they expose change in the application.
|
||||
|
||||
### Location of the endpoints
|
||||
|
||||
In the container, these files are located under the API directory (default: `/tmp/api/`, configurable via `NETALERTX_API` environment variable). You can access them via the `/php/server/query_json.php?file=user_notifications.json` endpoint.
|
||||
|
||||
### Available endpoints
|
||||
|
||||
You can access the following files:
|
||||
|
||||
| File name | Description |
|
||||
|----------------------|----------------------|
|
||||
| `notification_json_final.json` | The json version of the last notification (e.g. used for webhooks - [sample JSON](https://github.com/jokob-sk/NetAlertX/blob/main/front/report_templates/webhook_json_sample.json)). |
|
||||
| `table_devices.json` | All of the available Devices detected by the app. |
|
||||
| `table_plugins_events.json` | The list of the unprocessed (pending) notification events (plugins_events DB table). |
|
||||
| `table_plugins_history.json` | The list of notification events history. |
|
||||
| `table_plugins_objects.json` | The content of the plugins_objects table. Find more info on the [Plugin system here](https://github.com/jokob-sk/NetAlertX/tree/main/docs/PLUGINS.md)|
|
||||
| `language_strings.json` | The content of the language_strings table, which in turn is loaded from the plugins `config.json` definitions. |
|
||||
| `table_custom_endpoint.json` | A custom endpoint generated by the SQL query specified by the `API_CUSTOM_SQL` setting. |
|
||||
| `table_settings.json` | The content of the settings table. |
|
||||
| `app_state.json` | Contains the current application state. |
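For example, fetching one of these files outside a browser session can look like this. A sketch; it assumes the web UI is reachable on port `20211` and that the bearer token is accepted, as described at the top of this page.

```sh
# Fetch the devices JSON endpoint (sketch; adjust host and token).
curl 'http://<server_ip>:20211/php/server/query_json.php?file=table_devices.json' \
  -H 'Authorization: Bearer <API_TOKEN>'
```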
|
||||
|
||||
|
||||
### JSON Data format
|
||||
|
||||
The endpoints starting with the `table_` prefix contain most, if not all, data contained in the corresponding database table. The common format for those is:
|
||||
|
||||
```JSON
|
||||
{
|
||||
"data": [
|
||||
{
|
||||
"db_column_name": "data",
|
||||
"db_column_name2": "data2"
|
||||
},
|
||||
{
|
||||
"db_column_name": "data3",
|
||||
"db_column_name2": "data4"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Example JSON of the `table_devices.json` endpoint with two Devices (database rows):
|
||||
|
||||
```JSON
|
||||
{
|
||||
"data": [
|
||||
{
|
||||
"devMac": "Internet",
|
||||
"devName": "Net - Huawei",
|
||||
"devType": "Router",
|
||||
"devVendor": null,
|
||||
"devGroup": "Always on",
|
||||
"devFirstConnection": "2021-01-01 00:00:00",
|
||||
"devLastConnection": "2021-01-28 22:22:11",
|
||||
"devLastIP": "192.168.1.24",
|
||||
"devStaticIP": 0,
|
||||
"devPresentLastScan": 1,
|
||||
"devLastNotification": "2023-01-28 22:22:28.998715",
|
||||
"devIsNew": 0,
|
||||
"devParentMAC": "",
|
||||
"devParentPort": "",
|
||||
"devIcon": "globe"
|
||||
},
|
||||
{
|
||||
"devMac": "a4:8f:ff:aa:ba:1f",
|
||||
"devName": "Net - USG",
|
||||
"devType": "Firewall",
|
||||
"devVendor": "Ubiquiti Inc",
|
||||
"devGroup": "",
|
||||
"devFirstConnection": "2021-02-12 22:05:00",
|
||||
"devLastConnection": "2021-07-17 15:40:00",
|
||||
"devLastIP": "192.168.1.1",
|
||||
"devStaticIP": 1,
|
||||
"devPresentLastScan": 1,
|
||||
"devLastNotification": "2021-07-17 15:40:10.667717",
|
||||
"devIsNew": 0,
|
||||
"devParentMAC": "Internet",
|
||||
"devParentPort": 1,
|
||||
"devIcon": "shield-halved"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
## API Endpoint: Prometheus Exporter
|
||||
|
||||
* **Endpoint URL**: `/metrics`
|
||||
* **Host**: (where NetAlertX exporter is running)
|
||||
* **Port**: as configured in the `GRAPHQL_PORT` setting (`20212` by default)
|
||||
|
||||
---
|
||||
|
||||
### Example Output of the `/metrics` Endpoint
|
||||
|
||||
Below is a representative snippet of the metrics you may find when querying the `/metrics` endpoint for `netalertx`. It includes both aggregate counters and `device_status` labels per device.
|
||||
|
||||
```
|
||||
netalertx_connected_devices 31
|
||||
netalertx_offline_devices 54
|
||||
netalertx_down_devices 0
|
||||
netalertx_new_devices 0
|
||||
netalertx_archived_devices 31
|
||||
netalertx_favorite_devices 2
|
||||
netalertx_my_devices 54
|
||||
|
||||
netalertx_device_status{device="Net - Huawei", mac="Internet", ip="1111.111.111.111", vendor="None", first_connection="2021-01-01 00:00:00", last_connection="2025-08-04 17:57:00", dev_type="Router", device_status="Online"} 1
|
||||
netalertx_device_status{device="Net - USG", mac="74:ac:74:ac:74:ac", ip="192.168.1.1", vendor="Ubiquiti Networks Inc.", first_connection="2022-02-12 22:05:00", last_connection="2025-06-07 08:16:49", dev_type="Firewall", device_status="Archived"} 1
|
||||
netalertx_device_status{device="Raspberry Pi 4 LAN", mac="74:ac:74:ac:74:74", ip="192.168.1.9", vendor="Raspberry Pi Trading Ltd", first_connection="2022-02-12 22:05:00", last_connection="2025-08-04 17:57:00", dev_type="Singleboard Computer (SBC)", device_status="Online"} 1
|
||||
...
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Metrics Explanation
|
||||
|
||||
#### 1. Aggregate Device Counts
|
||||
|
||||
Metric names prefixed with `netalertx_` provide aggregated counts by device status:
|
||||
|
||||
* `netalertx_connected_devices`: number of devices currently connected
|
||||
* `netalertx_offline_devices`: devices currently offline
|
||||
* `netalertx_down_devices`: down/unreachable devices
|
||||
* `netalertx_new_devices`: devices recently detected
|
||||
* `netalertx_archived_devices`: archived devices
|
||||
* `netalertx_favorite_devices`: user-marked favorite devices
|
||||
* `netalertx_my_devices`: devices associated with the current user context
|
||||
|
||||
These numeric values give a high-level overview of device distribution.
|
||||
|
||||
#### 2. Per‑Device Status with Labels
|
||||
|
||||
Each individual device is represented by a `netalertx_device_status` metric, with descriptive labels:
|
||||
|
||||
* `device`: friendly name of the device
|
||||
* `mac`: MAC address (or placeholder)
|
||||
* `ip`: last recorded IP address
|
||||
* `vendor`: manufacturer or "None" if unknown
|
||||
* `first_connection`: timestamp when the device was first observed
|
||||
* `last_connection`: most recent contact timestamp
|
||||
* `dev_type`: device category or type
|
||||
* `device_status`: current status (Online / Offline / Archived / Down / ...)
|
||||
|
||||
The metric value is always `1` (indicating presence or active state) and the combination of labels identifies the device.
|
||||
|
||||
---
|
||||
|
||||
### How to Query with `curl`
|
||||
|
||||
To fetch the metrics from the NetAlertX exporter:
|
||||
|
||||
```sh
|
||||
curl 'http://<server_ip>:<GRAPHQL_PORT>/metrics' \
|
||||
-H 'Authorization: Bearer <API_TOKEN>' \
|
||||
-H 'Accept: text/plain'
|
||||
```
|
||||
|
||||
Replace:
|
||||
|
||||
* `<server_ip>`: IP or hostname of the NetAlertX server
|
||||
* `<GRAPHQL_PORT>`: port specified in your `GRAPHQL_PORT` setting (default: `20212`)
|
||||
* `<API_TOKEN>` your Bearer token from the `API_TOKEN` setting
|
||||
|
||||
---
|
||||
|
||||
### Summary
|
||||
|
||||
* **Endpoint**: `/metrics` provides both summary counters and per-device status entries.
|
||||
* **Aggregate metrics** help monitor overall device states.
|
||||
* **Detailed metrics** expose each device’s metadata via labels.
|
||||
* **Use case**: feed into Prometheus for scraping, monitoring, alerting, or charting dashboard views.
|
||||
|
||||
### Prometheus Scraping Configuration
|
||||
|
||||
```yaml
|
||||
scrape_configs:
|
||||
- job_name: 'netalertx'
|
||||
metrics_path: /metrics
|
||||
scheme: http
|
||||
scrape_interval: 60s
|
||||
static_configs:
|
||||
- targets: ['<server_ip>:<GRAPHQL_PORT>']
|
||||
authorization:
|
||||
type: Bearer
|
||||
credentials: <API_TOKEN>
|
||||
```
|
||||
|
||||
### Grafana template
|
||||
|
||||
Grafana template sample: [Download json](./samples/API/Grafana_Dashboard.json)
|
||||
|
||||
## API Endpoint: /log files
|
||||
|
||||
This API endpoint retrieves files from the `/tmp/log` folder.
|
||||
|
||||
- Endpoint URL: `php/server/query_logs.php?file=<file name>`
|
||||
- Host: `same as front end (web ui)`
|
||||
- Port: `20211` or as defined by the $PORT docker environment variable (same as the port for the web ui)
|
||||
|
||||
| File | Description |
|
||||
|--------------------------|---------------------------------------------------------------|
|
||||
| `IP_changes.log` | Logs of IP address changes |
|
||||
| `app.log` | Main application log |
|
||||
| `app.php_errors.log` | PHP error log |
|
||||
| `app_front.log` | Frontend application log |
|
||||
| `app_nmap.log` | Logs of Nmap scan results |
|
||||
| `db_is_locked.log` | Logs when the database is locked |
|
||||
| `execution_queue.log` | Logs of execution queue activities |
|
||||
| `plugins/` | Directory for temporary plugin-related files (not accessible) |
|
||||
| `report_output.html` | HTML report output |
|
||||
| `report_output.json` | JSON format report output |
|
||||
| `report_output.txt` | Text format report output |
|
||||
| `stderr.log` | Logs of standard error output |
|
||||
| `stdout.log` | Logs of standard output |
|
||||
|
||||
|
||||
## API Endpoint: /config files
|
||||
|
||||
To retrieve files from the `/data/config` folder.
|
||||
|
||||
- Endpoint URL: `php/server/query_config.php?file=<file name>`
|
||||
- Host: `same as front end (web ui)`
|
||||
- Port: `20211` or as defined by the $PORT docker environment variable (same as the port for the web ui)
|
||||
|
||||
| File | Description |
|
||||
|--------------------------|--------------------------------------------------|
|
||||
| `devices.csv` | Devices csv file |
|
||||
| `app.conf` | Application config file |
|
||||
|
||||
docs/API_ONLINEHISTORY.md (new executable file, 32 lines)
@@ -0,0 +1,32 @@
|
||||
# Online History API Endpoints
|
||||
|
||||
Manage the **online history records** of devices. Currently, the API supports deletion of all history entries. All endpoints require **authorization**.
|
||||
|
||||
---
|
||||
|
||||
## 1. Delete Online History
|
||||
|
||||
* **DELETE** `/history`
|
||||
Remove **all records** from the online history table (`Online_History`). This operation **cannot be undone**.
|
||||
|
||||
**Response** (success):
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"message": "Deleted online history"
|
||||
}
|
||||
```
|
||||
|
||||
**Error Responses**:
|
||||
|
||||
* Unauthorized → HTTP 403
|
||||
|
||||
---
|
||||
|
||||
### Example `curl` Request
|
||||
|
||||
```bash
|
||||
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/history" \
|
||||
-H "Authorization: Bearer <API_TOKEN>"
|
||||
```
|
||||
docs/API_SESSIONS.md (new executable file, 240 lines)
@@ -0,0 +1,240 @@
|
||||
# Sessions API Endpoints
|
||||
|
||||
Track and manage device connection sessions. Sessions record when a device connects or disconnects on the network.
|
||||
|
||||
### Create a Session
|
||||
|
||||
* **POST** `/sessions/create` → Create a new session for a device
|
||||
|
||||
**Request Body:**
|
||||
|
||||
```json
|
||||
{
|
||||
"mac": "AA:BB:CC:DD:EE:FF",
|
||||
"ip": "192.168.1.10",
|
||||
"start_time": "2025-08-01T10:00:00",
|
||||
"end_time": "2025-08-01T12:00:00", // optional
|
||||
"event_type_conn": "Connected", // optional, default "Connected"
|
||||
"event_type_disc": "Disconnected" // optional, default "Disconnected"
|
||||
}
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"message": "Session created for MAC AA:BB:CC:DD:EE:FF"
|
||||
}
|
||||
```
|
||||
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/sessions/create" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"mac": "AA:BB:CC:DD:EE:FF",
|
||||
"ip": "192.168.1.10",
|
||||
"start_time": "2025-08-01T10:00:00",
|
||||
"end_time": "2025-08-01T12:00:00",
|
||||
"event_type_conn": "Connected",
|
||||
"event_type_disc": "Disconnected"
|
||||
}'
|
||||
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Delete Sessions
|
||||
|
||||
* **DELETE** `/sessions/delete` → Delete all sessions for a given MAC
|
||||
|
||||
**Request Body:**
|
||||
|
||||
```json
|
||||
{
|
||||
"mac": "AA:BB:CC:DD:EE:FF"
|
||||
}
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"message": "Deleted sessions for MAC AA:BB:CC:DD:EE:FF"
|
||||
}
|
||||
```
|
||||
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/sessions/delete" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"mac": "AA:BB:CC:DD:EE:FF"
|
||||
}'
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### List Sessions
|
||||
|
||||
* **GET** `/sessions/list` → Retrieve sessions optionally filtered by device and date range
|
||||
|
||||
**Query Parameters:**
|
||||
|
||||
* `mac` (optional) → Filter by device MAC address
|
||||
* `start_date` (optional) → Filter sessions starting from this date (`YYYY-MM-DD`)
|
||||
* `end_date` (optional) → Filter sessions ending by this date (`YYYY-MM-DD`)
|
||||
|
||||
**Example:**
|
||||
|
||||
```
|
||||
/sessions/list?mac=AA:BB:CC:DD:EE:FF&start_date=2025-08-01&end_date=2025-08-21
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"sessions": [
|
||||
{
|
||||
"ses_MAC": "AA:BB:CC:DD:EE:FF",
|
||||
"ses_Connection": "2025-08-01 10:00",
|
||||
"ses_Disconnection": "2025-08-01 12:00",
|
||||
"ses_Duration": "2h 0m",
|
||||
"ses_IP": "192.168.1.10",
|
||||
"ses_Info": ""
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/sessions/list?mac=AA:BB:CC:DD:EE:FF&start_date=2025-08-01&end_date=2025-08-21" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json"
|
||||
```
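To post-process the list in scripts, pipe the `sessions` array through `jq`, for example to count the returned sessions. A sketch, assuming `jq` is installed:

```bash
# Count sessions returned for a device in the given date range (sketch; assumes jq).
curl -s "http://<server_ip>:<GRAPHQL_PORT>/sessions/list?mac=AA:BB:CC:DD:EE:FF&start_date=2025-08-01&end_date=2025-08-21" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Accept: application/json" | jq '.sessions | length'
```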
|
||||
---
|
||||
|
||||
### Calendar View of Sessions
|
||||
|
||||
* **GET** `/sessions/calendar` → View sessions in calendar format
|
||||
|
||||
**Query Parameters:**
|
||||
|
||||
* `start` → Start date (`YYYY-MM-DD`)
|
||||
* `end` → End date (`YYYY-MM-DD`)
|
||||
|
||||
**Example:**
|
||||
|
||||
```
|
||||
/sessions/calendar?start=2025-08-01&end=2025-08-21
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"sessions": [
|
||||
{
|
||||
"resourceId": "AA:BB:CC:DD:EE:FF",
|
||||
"title": "",
|
||||
"start": "2025-08-01T10:00:00",
|
||||
"end": "2025-08-01T12:00:00",
|
||||
"color": "#00a659",
|
||||
"tooltip": "Connection: 2025-08-01 10:00\nDisconnection: 2025-08-01 12:00\nIP: 192.168.1.10",
|
||||
"className": "no-border"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/sessions/calendar?start=2025-08-01&end=2025-08-21" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Device Sessions
|
||||
|
||||
* **GET** `/sessions/<mac>` → Retrieve sessions for a specific device
|
||||
|
||||
**Query Parameters:**
|
||||
|
||||
* `period` → Period to retrieve sessions (`1 day`, `7 days`, `1 month`, etc.)
|
||||
Default: `1 day`
|
||||
|
||||
**Example:**
|
||||
|
||||
```
|
||||
/sessions/AA:BB:CC:DD:EE:FF?period=7 days
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"sessions": [
|
||||
{
|
||||
"ses_MAC": "AA:BB:CC:DD:EE:FF",
|
||||
"ses_Connection": "2025-08-01 10:00",
|
||||
"ses_Disconnection": "2025-08-01 12:00",
|
||||
"ses_Duration": "2h 0m",
|
||||
"ses_IP": "192.168.1.10",
|
||||
"ses_Info": ""
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/sessions/AA:BB:CC:DD:EE:FF?period=7%20days" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Session Events Summary
|
||||
|
||||
* **GET** `/sessions/session-events` → Retrieve a summary of session events
|
||||
|
||||
**Query Parameters:**
|
||||
|
||||
* `type` → Event type (`all`, `sessions`, `missing`, `voided`, `new`, `down`)
|
||||
Default: `all`
|
||||
* `period` → Period to retrieve events (`7 days`, `1 month`, etc.)
|
||||
|
||||
**Example:**
|
||||
|
||||
```
|
||||
/sessions/session-events?type=all&period=7 days
|
||||
```
|
||||
|
||||
**Response:**
|
||||
Returns a list of events or sessions with formatted connection, disconnection, duration, and IP information.
|
||||
|
||||
#### `curl` Example
|
||||
|
||||
```bash
|
||||
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/sessions/session-events?type=all&period=7%20days" \
|
||||
-H "Authorization: Bearer <API_TOKEN>" \
|
||||
-H "Accept: application/json"
|
||||
```
|
||||
docs/API_SETTINGS.md (new executable file, 92 lines)
@@ -0,0 +1,92 @@
|
||||
# Settings API Endpoints
|
||||
|
||||
Retrieve application settings stored in the configuration system. This endpoint is useful for quickly fetching individual settings such as `API_TOKEN` or `TIMEZONE`.
|
||||
|
||||
For bulk or structured access (all settings, schema details, or filtering), use the [GraphQL API Endpoint](API_GRAPHQL.md).
|
||||
|
||||
---
|
||||
|
||||
### Get a Setting
|
||||
|
||||
* **GET** `/settings/<key>` → Retrieve the value of a specific setting
|
||||
|
||||
**Path Parameter:**
|
||||
|
||||
* `key` → The setting key to retrieve (e.g., `API_TOKEN`, `TIMEZONE`)
|
||||
|
||||
**Authorization:**
|
||||
Requires a valid API token in the `Authorization` header.
|
||||
|
||||
---
|
||||
|
||||
#### `curl` Example (Success)
|
||||
|
||||
```sh
|
||||
curl 'http://<server_ip>:<GRAPHQL_PORT>/settings/API_TOKEN' \
|
||||
-H 'Authorization: Bearer <API_TOKEN>' \
|
||||
-H 'Accept: application/json'
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"value": "my-secret-token"
|
||||
}
|
||||
```
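Scripts usually only need the `value` field. A sketch, assuming `jq` is installed and using `TIMEZONE` as the example key:

```sh
# Read a single setting into a shell variable (sketch; assumes jq).
TZ_SETTING=$(curl -s 'http://<server_ip>:<GRAPHQL_PORT>/settings/TIMEZONE' \
  -H 'Authorization: Bearer <API_TOKEN>' | jq -r '.value')
echo "Configured timezone: ${TZ_SETTING}"
```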
|
||||
|
||||
---
|
||||
|
||||
#### `curl` Example (Invalid Key)
|
||||
|
||||
```sh
|
||||
curl 'http://<server_ip>:<GRAPHQL_PORT>/settings/DOES_NOT_EXIST' \
|
||||
-H 'Authorization: Bearer <API_TOKEN>' \
|
||||
-H 'Accept: application/json'
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"value": null
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
#### `curl` Example (Unauthorized)
|
||||
|
||||
```sh
|
||||
curl 'http://<server_ip>:<GRAPHQL_PORT>/settings/API_TOKEN' \
|
||||
-H 'Accept: application/json'
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"error": "Forbidden"
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Notes
|
||||
|
||||
* This endpoint is optimized for **direct retrieval of a single setting**.
|
||||
* For **complex retrieval scenarios** (listing all settings, retrieving schema metadata like `setName`, `setDescription`, `setType`, or checking if a setting is overridden by environment variables), use the **GraphQL Settings Query**:
|
||||
|
||||
```sh
|
||||
curl 'http://<server_ip>:<GRAPHQL_PORT>/graphql' \
|
||||
-X POST \
|
||||
-H 'Authorization: Bearer <API_TOKEN>' \
|
||||
-H 'Content-Type: application/json' \
|
||||
--data '{
|
||||
"query": "query GetSettings { settings { settings { setKey setName setDescription setType setOptions setGroup setValue setEvents setOverriddenByEnv } count } }"
|
||||
}'
|
||||
```
|
||||
|
||||
See the [GraphQL API Endpoint](API_GRAPHQL.md) for more details.
|
||||
docs/API_SYNC.md (new executable file, 125 lines)
@@ -0,0 +1,125 @@
|
||||
# Sync API Endpoint
|
||||
|
||||
---
|
||||
|
||||
The `/sync` endpoint is used by the **SYNC plugin** to synchronize data between multiple NetAlertX instances (e.g., from a node to a hub). It supports both **GET** and **POST** requests.
|
||||
|
||||
## GET `/sync`
|
||||
|
||||
Fetches data from a node to the hub. The data is returned as a **base64-encoded JSON file**.
|
||||
|
||||
**Example Request:**
|
||||
|
||||
```sh
|
||||
curl 'http://<server>:<GRAPHQL_PORT>/sync' \
|
||||
-H 'Authorization: Bearer <API_TOKEN>'
|
||||
```
|
||||
|
||||
**Response Example:**
|
||||
|
||||
```json
|
||||
{
|
||||
"node_name": "NODE-01",
|
||||
"status": 200,
|
||||
"message": "OK",
|
||||
"data_base64": "eyJkZXZpY2VzIjogW3siZGV2TWFjIjogIjAwOjExOjIyOjMzOjQ0OjU1IiwiZGV2TmFtZSI6ICJEZXZpY2UgMSJ9XSwgImNvdW50Ijog1fQ==",
|
||||
"timestamp": "2025-08-24T10:15:00+10:00"
|
||||
}
|
||||
```
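To inspect the synchronized payload on the hub side, decode the `data_base64` field. A sketch, assuming `jq` and GNU `base64` are available:

```sh
# Fetch node data and decode the base64 JSON payload (sketch; assumes jq and base64 -d).
curl -s 'http://<server>:<GRAPHQL_PORT>/sync' \
  -H 'Authorization: Bearer <API_TOKEN>' | jq -r '.data_base64' | base64 -d
```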
|
||||
|
||||
**Notes:**
|
||||
|
||||
* `data_base64` contains the full JSON data encoded in Base64.
|
||||
* `node_name` corresponds to the `SYNC_node_name` setting on the node.
|
||||
* Errors (e.g., missing file) return HTTP 500 with an error message.
|
||||
|
||||
---
|
||||
|
||||
## POST `/sync`
|
||||
|
||||
The **POST** endpoint is used by nodes to **send data to the hub**. The hub expects the data as **form-encoded fields** (application/x-www-form-urlencoded or multipart/form-data). The hub then stores the data in the plugin log folder for processing.
|
||||
|
||||
#### Required Fields
|
||||
|
||||
| Field | Type | Description |
|
||||
| ----------- | ----------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| `data` | string | The payload from the plugin or devices. Typically **plain text**, **JSON**, or **encrypted Base64** data. The node's SYNC plugin applies `encrypt_data()` to the payload before sending. |
|
||||
| `node_name` | string | The name of the node sending the data. Matches the node’s `SYNC_node_name` setting. Used to generate the filename on the hub. |
|
||||
| `plugin` | string | The name of the plugin sending the data. Determines the filename prefix (`last_result.<plugin>...`). |
|
||||
| `file_path` | string (optional) | Path of the local file being sent. Used only for logging/debugging purposes on the hub; **not required for processing**. |
|
||||
|
||||
---
|
||||
|
||||
### How the Hub Processes the POST Data
|
||||
|
||||
1. **Receives the data** and validates the API token.
|
||||
2. **Stores the raw payload** in:
|
||||
|
||||
```
|
||||
INSTALL_PATH/log/plugins/last_result.<plugin>.encoded.<node_name>.<sequence>.log
|
||||
```
|
||||
|
||||
* `<plugin>` → plugin name from the POST request.
|
||||
* `<node_name>` → node name from the POST request.
|
||||
* `<sequence>` → incremented number for each submission.
|
||||
|
||||
3. **Decodes / decrypts the data** if necessary (Base64 or encrypted) before processing.
|
||||
4. **Processes JSON payloads** (e.g., device info) to:
|
||||
|
||||
* Avoid duplicates by tracking `devMac`.
|
||||
* Add metadata like `devSyncHubNode`.
|
||||
* Insert new devices into the database.
|
||||
5. **Renames files** to indicate they have been processed:
|
||||
|
||||
```
|
||||
processed_last_result.<plugin>.<node_name>.<sequence>.log
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Example POST Payload
|
||||
|
||||
If a node is sending device data:
|
||||
|
||||
```bash
|
||||
curl -X POST 'http://<hub>:<PORT>/sync' \
|
||||
-H 'Authorization: Bearer <API_TOKEN>' \
|
||||
-F 'data={"data":[{"devMac":"00:11:22:33:44:55","devName":"Device 1","devVendor":"Vendor A","devLastIP":"192.168.1.10"}]}' \
|
||||
-F 'node_name=NODE-01' \
|
||||
-F 'plugin=SYNC'
|
||||
```
|
||||
|
||||
* The `data` field contains JSON with a **`data` array**, where each element is a **device object** or **plugin data object**.
|
||||
* The `plugin` and `node_name` fields allow the hub to **organize and store the file correctly**.
|
||||
* The data is only processed if the relevant plugins are enabled and run on the target server.
|
||||
|
||||
---
|
||||
|
||||
### Key Notes
|
||||
|
||||
* **Always use the same `plugin` and `node_name` values** for consistent storage.
|
||||
* **Encrypted data**: The Python script uses `encrypt_data()` before sending, and the hub decodes it before processing.
|
||||
* **Sequence numbers**: Every submission generates a new sequence, preventing overwriting previous data.
|
||||
* **Form-encoded**: The hub expects `multipart/form-data` (cURL `-F`) or `application/x-www-form-urlencoded`.
|
||||
|
||||
**Storage Details:**
|
||||
|
||||
* Data is stored under `INSTALL_PATH/log/plugins` with filenames following the pattern:
|
||||
|
||||
```
|
||||
last_result.<plugin>.encoded.<node_name>.<sequence>.log
|
||||
```
|
||||
|
||||
* Both encoded and decoded files are tracked, and new submissions increment the sequence number.
|
||||
* If storing fails, the API returns HTTP 500 with an error message.
|
||||
* The data is only processed if the relevant plugins are enabled and run on the target server.
|
||||
|
||||
---
|
||||
|
||||
## Notes and Best Practices
|
||||
|
||||
* **Authorization Required** – Both GET and POST require a valid API token.
|
||||
* **Data Integrity** – Ensure that `node_name` and `plugin` are consistent to avoid overwriting files.
|
||||
* **Monitoring** – Notifications are generated whenever data is sent or received (`write_notification`), which can be used for alerting or auditing.
|
||||
* **Use Case** – Typically used in multi-node deployments to consolidate device and event data on a central hub.
|
||||
|
||||
docs/API_TESTS.md (new executable file, 12 lines)
@@ -0,0 +1,12 @@
|
||||
### Unit Tests
|
||||
|
||||
>[!WARNING]
|
||||
> Please note that these tests modify data in the database.
|
||||
|
||||
1. See the `/test` directory for available test cases. These are not exhaustive but cover the main API endpoints.
|
||||
2. To run a test case, SSH into the container:
|
||||
`sudo docker exec -it netalertx /bin/bash`
|
||||
3. Inside the container, install pytest (if not already installed):
|
||||
`pip install pytest`
|
||||
4. Run a specific test case (or use the one-line sketch below):
|
||||
`pytest /app/test/TESTFILE.py`
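The steps above can also be collapsed into a single command run from the host. A sketch, assuming the container is named `netalertx` as in step 2:

```sh
# Install pytest inside the container and run one test file in a single step (sketch).
sudo docker exec netalertx /bin/bash -c "pip install pytest && pytest /app/test/TESTFILE.py"
```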
|
||||
docs/BACKUPS.md (modified, 210 lines)
@@ -1,90 +1,162 @@
|
||||
# Backing things up
|
||||
# Backing Things Up
|
||||
|
||||
> [!NOTE]
|
||||
> To backup 99% of your configuration backup at least the `/app/config` folder. Please read the whole page (or at least "Scenario 2: Corrupted database") for details.
|
||||
> Note that database definitions might change over time. The safest way is to restore your older backups into the **same version** of the app they were taken from and then gradually upgrade between releases to the latest version.
|
||||
> To back up 99% of your configuration, back up at least the `/data/config` folder.
|
||||
> Database definitions can change between releases, so the safest method is to restore backups using the **same app version** they were taken from, then upgrade incrementally.
|
||||
|
||||
There are 4 artifacts that can be used to backup the application:
|
||||
---
|
||||
|
||||
| File | Description | Limitations |
|
||||
|-----------------------|-------------------------------|-------------------------------|
|
||||
| `/db/app.db` | Database file(s) | The database file might be in an uncommitted state or corrupted |
|
||||
| `/config/app.conf` | Configuration file | Can be overridden with the [`APP_CONF_OVERRIDE` env variable](https://github.com/jokob-sk/NetAlertX/tree/main/dockerfiles#docker-environment-variables). |
|
||||
| `/config/devices.csv` | CSV file containing device information | Doesn't contain historical data |
|
||||
| `/config/workflows.json` | A JSON file containing your workflows | N/A |
|
||||
## What to Back Up
|
||||
|
||||
There are four key artifacts you can use to back up your NetAlertX configuration:
|
||||
|
||||
## Backup strategies
|
||||
| File | Description | Limitations |
|
||||
| ------------------------ | ----------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| `/db/app.db` | The application database | Might be in an uncommitted state or corrupted |
|
||||
| `/config/app.conf` | Configuration file | Can be overridden using the [`APP_CONF_OVERRIDE`](https://github.com/jokob-sk/NetAlertX/tree/main/dockerfiles#docker-environment-variables) variable |
|
||||
| `/config/devices.csv` | CSV file containing device data | Does not include historical data |
|
||||
| `/config/workflows.json` | JSON file containing your workflows | N/A |
|
||||
|
||||
The safest approach to backups is to backup everything, by taking regular file system backups of the `/db` and `/config` folders (I use [Kopia](https://github.com/kopia/kopia)).
|
||||
---
|
||||
|
||||
Arguably, the most time is spent setting up the device list, so if only one file is kept I'd recommend to have a latest backup of the `devices_<timestamp>.csv` or `devices.csv` file, followed by the `app.conf` and `workflows.json` files. You can also download `app.conf` and `devices.csv` file in the Maintenance section:
|
||||
## Where the Data Lives
|
||||
|
||||

|
||||
|
||||
### Scenario 1: Full backup
|
||||
|
||||
End-result: Full restore
|
||||
|
||||
#### 💾 Source artifacts:
|
||||
|
||||
- `/app/db/app.db` (uncorrupted)
|
||||
- `/app/config/app.conf`
|
||||
- `/app/config/workflows.json`
|
||||
|
||||
#### 📥 Recovery:
|
||||
|
||||
To restore the application map the above files as described in the [Setup documentation](https://github.com/jokob-sk/NetAlertX/blob/main/dockerfiles/README.md#docker-paths).
|
||||
|
||||
|
||||
### Scenario 2: Corrupted database
|
||||
|
||||
End-result: Partial restore (historical data and some plugin data will be missing)
|
||||
|
||||
#### 💾 Source artifacts:
|
||||
|
||||
- `/app/config/app.conf`
|
||||
- `/app/config/devices_<timestamp>.csv` or `/app/config/devices.csv`
|
||||
- `/app/config/workflows.json`
|
||||
|
||||
#### 📥 Recovery:
|
||||
|
||||
Even with a corrupted database you can recover what I would argue is 99% of the configuration.
|
||||
|
||||
- upload the `app.conf` and `workflows.json` files into the mounted `/app/config/` folder as described in the [Setup documentation](https://github.com/jokob-sk/NetAlertX/blob/main/dockerfiles/README.md#docker-paths).
|
||||
- rename the `devices_<timestamp>.csv` to `devices.csv` and place it in the `/app/config` folder
|
||||
- Restore the `devices.csv` backup via the [Maintenance section](./DEVICES_BULK_EDITING.md)
|
||||
|
||||
## Data and backup storage
|
||||
|
||||
To decide on a backup strategy, check where the data is stored:
|
||||
Understanding where your data is stored helps you plan your backup strategy.
|
||||
|
||||
### Core Configuration
|
||||
|
||||
The core application configuration is in the `app.conf` file (See [Settings System](./SETTINGS_SYSTEM.md) for details), such as:
|
||||
Stored in `/data/config/app.conf`.
|
||||
This includes settings for:
|
||||
|
||||
- Notification settings
|
||||
- Scanner settings
|
||||
- Scheduled maintenance settings
|
||||
- UI configuration
|
||||
* Notifications
|
||||
* Scanning
|
||||
* Scheduled maintenance
|
||||
* UI preferences
|
||||
|
||||
### Core Device Data
|
||||
(See [Settings System](./SETTINGS_SYSTEM.md) for details.)
|
||||
|
||||
The core device data is backed up to the `devices_<timestamp>.csv` or `devices.csv` file via the [CSV Backup `CSVBCKP` Plugin](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/csv_backup). This file contains data, such as:
|
||||
### Device Data
|
||||
|
||||
- Device names
|
||||
- Device icons
|
||||
- Device network configuration
|
||||
- Device categorization
|
||||
- Device custom properties data
|
||||
Stored in `/data/config/devices_<timestamp>.csv` or `/data/config/devices.csv`, created by the [CSV Backup `CSVBCKP` Plugin](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/csv_backup).
|
||||
Contains:
|
||||
|
||||
### Historical data
|
||||
* Device names, icons, and categories
|
||||
* Network configuration
|
||||
* Custom properties
|
||||
|
||||
Historical data is stored in the `app.db` database (See [Database overview](./DATABASE.md) for details). This data includes:
|
||||
### Historical Data
|
||||
|
||||
- Plugin objects
|
||||
- Plugin historical entries
|
||||
- History of Events, Notifications, Workflow Events
|
||||
- Presence history
|
||||
Stored in `/data/db/app.db` (see [Database Overview](./DATABASE.md)).
|
||||
Contains:
|
||||
|
||||
* Plugin data and historical entries
|
||||
* Event and notification history
|
||||
* Device presence history
|
||||
|
||||
---
|
||||
|
||||
## Backup Strategies
|
||||
|
||||
The safest approach is to back up **both** the `/db` and `/config` folders regularly. Tools like [Kopia](https://github.com/kopia/kopia) make this simple and efficient.
|
||||
|
||||
If you can only keep a few files, prioritize:
|
||||
|
||||
1. The latest `devices_<timestamp>.csv` or `devices.csv`
|
||||
2. `app.conf`
|
||||
3. `workflows.json`
|
||||
|
||||
You can also download the `app.conf` and `devices.csv` files from the **Maintenance** section:
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Scenario 1: Full Backup and Restore
|
||||
|
||||
**Goal:** Full recovery of your configuration and data.
|
||||
|
||||
### 💾 What to Back Up
|
||||
|
||||
* `/data/db/app.db` (uncorrupted)
|
||||
* `/data/config/app.conf`
|
||||
* `/data/config/workflows.json`
|
||||
|
||||
### 📥 How to Restore
|
||||
|
||||
Map these files into your container as described in the [Setup documentation](./DOCKER_INSTALLATION.md).
|
||||
|
||||
---
|
||||
|
||||
## Scenario 2: Corrupted Database
|
||||
|
||||
**Goal:** Recover configuration and device data when the database is lost or corrupted.
|
||||
|
||||
### 💾 What to Back Up
|
||||
|
||||
* `/data/config/app.conf`
|
||||
* `/data/config/workflows.json`
|
||||
* `/data/config/devices_<timestamp>.csv` (rename to `devices.csv` during restore)
|
||||
|
||||
### 📥 How to Restore
|
||||
|
||||
1. Copy `app.conf` and `workflows.json` into `/data/config/`
|
||||
2. Rename and place `devices_<timestamp>.csv` → `/data/config/devices.csv`
|
||||
3. Restore via the **Maintenance** section under *Devices → Bulk Editing*
|
||||
|
||||
This recovers nearly all configuration, workflows, and device metadata.
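
The same steps from the command line might look like this (host paths are placeholders; adjust them to wherever `/data/config` is mounted, and replace `<timestamp>` with the actual file name):

```bash
# Copy configuration and workflows back into the mounted config folder
cp backup/app.conf backup/workflows.json /local_data_dir/config/

# Rename the timestamped CSV so the Maintenance restore picks it up
cp "backup/devices_<timestamp>.csv" /local_data_dir/config/devices.csv
```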
|
||||
|
||||
---
|
||||
|
||||
## Docker-Based Backup and Restore
|
||||
|
||||
For users running NetAlertX via Docker, you can back up or restore directly from your host system — a convenient and scriptable option.
|
||||
|
||||
### Full Backup (File-Level)
|
||||
|
||||
1. **Stop the container:**
|
||||
|
||||
```bash
|
||||
docker stop netalertx
|
||||
```
|
||||
|
||||
2. **Create a compressed archive** of your configuration and database volumes:
|
||||
|
||||
```bash
|
||||
docker run --rm -v local_path/config:/config -v local_path/db:/db alpine tar -cz /config /db > netalertx-backup.tar.gz
|
||||
```
|
||||
|
||||
3. **Restart the container:**
|
||||
|
||||
```bash
|
||||
docker start netalertx
|
||||
```
|
||||
|
||||
### Restore from Backup
|
||||
|
||||
1. **Stop the container:**
|
||||
|
||||
```bash
|
||||
docker stop netalertx
|
||||
```
|
||||
|
||||
2. **Restore from your backup file:**
|
||||
|
||||
```bash
|
||||
docker run --rm -i -v local_path/config:/config -v local_path/db:/db alpine tar -C / -xz < netalertx-backup.tar.gz
|
||||
```
|
||||
|
||||
3. **Restart the container:**
|
||||
|
||||
```bash
|
||||
docker start netalertx
|
||||
```
|
||||
|
||||
> This approach uses a temporary, minimal `alpine` container to access Docker-managed volumes. The `tar` command creates or extracts an archive directly from your host’s filesystem, making it fast, clean, and reliable for both automation and manual recovery.
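
Because these are plain shell commands, they are easy to schedule. A hedged example of a nightly host-side cron entry (container name and `local_path` volumes taken from the commands above; the `/backups` target folder is an assumption):

```bash
# /etc/cron.d/netalertx-backup (illustrative) - nightly backup at 03:00
0 3 * * * root docker stop netalertx && docker run --rm -v local_path/config:/config -v local_path/db:/db alpine tar -cz /config /db > /backups/netalertx-$(date +\%F).tar.gz && docker start netalertx
```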
|
||||
|
||||
---
|
||||
|
||||
## Summary
|
||||
|
||||
* Back up `/data/config` for configuration and devices; `/data/db` for history
|
||||
* Keep regular backups, especially before upgrades
|
||||
* For Docker setups, use the lightweight `alpine`-based backup method for consistency and portability
|
||||
|
||||
82
docs/BUILDS.md
Normal file
82
docs/BUILDS.md
Normal file
@@ -0,0 +1,82 @@
|
||||
# NetAlertX Builds: Choose Your Path
|
||||
|
||||
NetAlertX provides different installation methods for different needs. This guide helps you choose the right path for security, experimentation, or development.
|
||||
|
||||
## 1. Hardened Appliance (Default Production)
|
||||
|
||||
> [!NOTE]
|
||||
> Use this image if: You want to use NetAlertX securely.
|
||||
|
||||
### Who is this for?
|
||||
|
||||
All users who want a stable, secure, "set-it-and-forget-it" appliance.
|
||||
|
||||
### Methodology
|
||||
|
||||
- Multi-stage Alpine build
|
||||
- Aggressively "amputated"
|
||||
- Locked down for max security
|
||||
|
||||
### Source
|
||||
|
||||
`Dockerfile (hardened target)`
|
||||
|
||||
## 2. "Tinkerer's" Image (Insecure VM-Style)
|
||||
|
||||
> [!NOTE]
|
||||
> Use this image if: You want to experiment with NetAlertX.
|
||||
|
||||
### Who is this for?
|
||||
|
||||
Power users, developers, and "tinkerers" wanting a familiar "VM-like" experience.
|
||||
|
||||
### Methodology
|
||||
|
||||
- Traditional Debian build
|
||||
- Includes full un-hardened OS
|
||||
- Contains `apt`, `sudo`, `git`
|
||||
|
||||
### Source
|
||||
|
||||
`Dockerfile.debian`
|
||||
|
||||
## 3. Contributor's Devcontainer (Project Developers)
|
||||
|
||||
> [!NOTE]
|
||||
> Use this image if: You want to develop NetAlertX itself.
|
||||
|
||||
### Who is this for?
|
||||
|
||||
Project contributors who are actively writing and debugging code for NetAlertX.
|
||||
|
||||
### Methodology
|
||||
|
||||
|
||||
- Builds `FROM runner` stage
|
||||
- Loaded by VS Code
|
||||
- Full debug tools: `xdebug`, `pytest`
|
||||
|
||||
|
||||
### Source
|
||||
|
||||
`Dockerfile (devcontainer target)`
|
||||
|
||||
# Visualizing the Trade-Offs
|
||||
|
||||
This chart compares the three builds across key attributes. A higher score means "more of" that attribute. Notice the clear trade-offs between security and development features.
|
||||
|
||||

|
||||
|
||||
|
||||
# Build Process & Origins
|
||||
|
||||
The final images originate from two different files and build paths. The main `Dockerfile` uses stages to create *both* the hardened and development container images.
|
||||
|
||||
## Official Build Path
|
||||
|
||||
Dockerfile -> builder (Stage 1) -> runner (Stage 2) -> hardened (final stage, production image) + devcontainer (final stage, developer image)
|
||||
|
||||
|
||||
## Legacy Build Path
|
||||
|
||||
Dockerfile.debian -> "Tinkerer's" Image (Insecure VM-Style Image)
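
To build these images locally, the corresponding `docker build` invocations might look like this (the target names come from the descriptions above; the image tags are illustrative):

```bash
# Hardened production image (multi-stage Dockerfile, "hardened" target)
docker build --target hardened -t netalertx:hardened .

# Contributor devcontainer image (same Dockerfile, "devcontainer" target)
docker build --target devcontainer -t netalertx:devcontainer .

# "Tinkerer's" Debian-based image (separate legacy Dockerfile)
docker build -f Dockerfile.debian -t netalertx:debian .
```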
|
||||
@@ -1,57 +1,114 @@
|
||||
### Loading...
|
||||
# Troubleshooting Common Issues
|
||||
|
||||
Often, if the application is misconfigured, the `Loading...` dialog is continuously displayed. This is most likely caused by the backend failing to start. The **Maintenance -> Logs** section should give you more details on what's happening. If there is no exception, check the Portainer log, or start the container in the foreground (without the `-d` parameter) to observe any exceptions. It's advisable to enable `trace` or `debug`. Check the [Debug tips](./DEBUG_TIPS.md) for detailed instructions.
|
||||
> [!TIP]
|
||||
> Before troubleshooting, ensure you have set the correct [Debugging and LOG_LEVEL](./DEBUG_TIPS.md).
|
||||
|
||||
### Incorrect SCAN_SUBNETS
|
||||
---
|
||||
|
||||
One of the most common issues is not configuring `SCAN_SUBNETS` correctly. If this setting is misconfigured you will only see one or two devices in your devices list after a scan. Please read the [subnets docs](./SUBNETS.md) carefully to resolve this.
|
||||
## Docker Container Doesn't Start
|
||||
|
||||
### Duplicate devices and notifications
|
||||
|
||||
The app uses the MAC address as a unique identifier for devices. If a new MAC is detected, a new device is added to the application and corresponding notifications are triggered. This means that if the MAC of an existing device changes, the device will be logged as a new device. You can usually prevent this from happening by changing the device configuration (in Android, iOS, or Windows) for your network. See the [Random Macs](./RANDOM_MAC.md) guide for details.
|
||||
Initial setup issues are often caused by **missing permissions** or **incorrectly mapped volumes**. Always double-check your `docker run` or `docker-compose.yml` against the [official setup guide](./DOCKER_INSTALLATION.md) before proceeding.
|
||||
|
||||
### Permissions
|
||||
|
||||
Make sure your [file permissions](./FILE_PERMISSIONS.md) are set correctly.
|
||||
Make sure your [file permissions](./FILE_PERMISSIONS.md) are correctly set:
|
||||
|
||||
* If facing issues (AJAX errors, can't write to DB, empty screen, etc.), make sure permissions are set correctly and check the logs under `/app/log`.
|
||||
* To solve permission issues you can try setting the owner and group of the `app.db` by executing the following on the host system: `docker exec netalertx chown -R www-data:www-data /app/db/app.db`.
|
||||
* If still facing issues, try to map the app.db file (⚠ not folder) to `:/app/db/app.db` (see [docker-compose Examples](https://github.com/jokob-sk/NetAlertX/blob/main/dockerfiles/README.md#-docker-composeyml-examples) for details)
|
||||
* If you encounter AJAX errors, cannot write to the database, or see an empty screen, check that permissions are correct and review the logs under `/tmp/log`.
|
||||
* To fix permission issues with the database, update the owner and group of `app.db` as described in the [File Permissions guide](./FILE_PERMISSIONS.md).
|
||||
|
||||
### Container restarts / crashes
|
||||
### Container Restarts / Crashes
|
||||
|
||||
* Check the logs for details. Often a required setting for a notification method is missing.
|
||||
* Check the logs for details. Often, required settings are missing.
|
||||
* For more detailed troubleshooting, see [Debug and Troubleshooting Tips](./DEBUG_TIPS.md).
|
||||
* To observe errors directly, run the container in the foreground instead of `-d`:
|
||||
|
||||
### unable to resolve host
|
||||
```bash
|
||||
docker run --rm -it <your_image>
|
||||
```
|
||||
|
||||
* Check that your `SCAN_SUBNETS` variable is using the correct mask and `--interface`. See the [subnets docs for details](./SUBNETS.md).
|
||||
---
|
||||
|
||||
### Invalid JSON
|
||||
## Docker Container Starts, But the Application Misbehaves
|
||||
|
||||
Check the [Invalid JSON errors debug help](./DEBUG_INVALID_JSON.md) docs on how to proceed.
|
||||
If the container starts but the app shows unexpected behavior, the cause is often **data corruption**, **incorrect configuration**, or **unexpected input data**.
|
||||
|
||||
### sudo execution failing (e.g.: on arpscan) on a Raspberry Pi 4
|
||||
### Continuous "Loading..." Screen
|
||||
|
||||
> sudo: unexpected child termination condition: 0
|
||||
A misconfigured application may display a persistent `Loading...` dialog. This is usually caused by the backend failing to start.
|
||||
|
||||
Resolution based on [this issue](https://github.com/linuxserver/docker-papermerge/issues/4#issuecomment-1003657581)
|
||||
**Steps to troubleshoot:**
|
||||
|
||||
1. Check **Maintenance → Logs** for exceptions.
|
||||
2. If no exception is visible, check the Portainer logs.
|
||||
3. Start the container in the foreground to observe exceptions.
|
||||
4. Enable `trace` or `debug` logging for detailed output (see [Debug Tips](./DEBUG_TIPS.md)).
|
||||
5. Verify that `GRAPHQL_PORT` is correctly configured.
|
||||
6. Check browser logs (press `F12`):
|
||||
|
||||
* **Console tab** → refresh the page
|
||||
* **Network tab** → refresh the page
|
||||
|
||||
If you are unsure how to resolve errors, provide screenshots or log excerpts in your issue report or Discord discussion.
|
||||
|
||||
---
|
||||
|
||||
### Common Configuration Issues
|
||||
|
||||
#### Incorrect `SCAN_SUBNETS`
|
||||
|
||||
If `SCAN_SUBNETS` is misconfigured, you may see only a few devices in your device list after a scan. See the [Subnets Documentation](./SUBNETS.md) for proper configuration.
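
For reference, a common shape for this setting pairs a CIDR range with the interface to scan (the values below are placeholders; the exact syntax, including multiple subnets and VLANs, is covered in the Subnets documentation):

```
SCAN_SUBNETS=['192.168.1.0/24 --interface=eth0']
```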
|
||||
|
||||
#### Duplicate Devices and Notifications
|
||||
|
||||
* Devices are identified by their **MAC address**.
|
||||
* If a device's MAC changes, it will be treated as a new device, triggering notifications.
|
||||
* Prevent this by adjusting your device configuration for Android, iOS, or Windows. See the [Random MACs Guide](./RANDOM_MAC.md).
|
||||
|
||||
#### Unable to Resolve Host
|
||||
|
||||
* Ensure `SCAN_SUBNETS` uses the correct mask and `--interface`.
|
||||
* Refer to the [Subnets Documentation](./SUBNETS.md) for detailed guidance.
|
||||
|
||||
#### Invalid JSON Errors
|
||||
|
||||
* Follow the steps in [Invalid JSON Errors Debug Help](./DEBUG_INVALID_JSON.md).
|
||||
|
||||
#### Sudo Execution Fails (e.g., on arpscan on Raspberry Pi 4)
|
||||
|
||||
Error:
|
||||
|
||||
```
|
||||
sudo: unexpected child termination condition: 0
|
||||
```
|
||||
|
||||
**Resolution**:
|
||||
|
||||
```bash
|
||||
wget ftp.us.debian.org/debian/pool/main/libs/libseccomp/libseccomp2_2.5.3-2_armhf.deb
|
||||
sudo dpkg -i libseccomp2_2.5.3-2_armhf.deb
|
||||
```
|
||||
|
||||
The link above will probably break in time too. Go to https://packages.debian.org/sid/armhf/libseccomp2/download to find the new version number and put that in the url.
|
||||
> ⚠️ The link may break over time. Check [Debian Packages](https://packages.debian.org/sid/armhf/libseccomp2/download) for the latest version.
|
||||
|
||||
### Only Router and own device show up
|
||||
#### Only Router and Own Device Show Up
|
||||
|
||||
Make sure that the subnet and interface in `SCAN_SUBNETS` are correct. If your device/NAS has multiple ethernet ports, you probably need to change `eth0` to something else.
|
||||
* Verify the subnet and interface in `SCAN_SUBNETS`.
|
||||
* On devices with multiple Ethernet ports, you may need to change `eth0` to the correct interface.
|
||||
|
||||
### Losing my settings and devices after an update
|
||||
#### Losing Settings or Devices After Update
|
||||
|
||||
If you lose your devices and/or settings after an update, that means you don't have the `/app/db` and `/app/config` folders mapped to permanent storage, so these folders are re-created every time you update. Make sure you have the [volumes specified correctly](./DOCKER_COMPOSE.md) in your `docker-compose.yml` or run command.
|
||||
* Ensure `/data/db` and `/data/config` are mapped to persistent storage.
|
||||
* Without persistent volumes, these folders are recreated on every update.
|
||||
* See [Docker Volumes Setup](./DOCKER_COMPOSE.md) for proper configuration.
|
||||
|
||||
#### Application Performance Issues
|
||||
|
||||
### The application is slow
|
||||
Slowness can be caused by:
|
||||
|
||||
* Incorrect settings (causing app restarts) → check `app.log`.
|
||||
* Too many background processes → disable unnecessary scanners.
|
||||
* Long scans → limit the number of scanned devices.
|
||||
* Excessive disk operations or failing maintenance plugins.
|
||||
|
||||
> See [Performance Tips](./PERFORMANCE.md) for detailed optimization steps.
|
||||
|
||||
Slowness is usually caused by incorrect settings (the app might restart, so check the `app.log`), too many background processes (disable unnecessary scanners), too long scans (limit the number of scanned devices), too many disk operations, or some maintenance plugins might have failed. See the [Performance tips](./PERFORMANCE.md) docs for details.
|
||||
21
docs/DEBUG_GRAPHQL.md → docs/DEBUG_API_SERVER.md
Executable file → Normal file
21
docs/DEBUG_GRAPHQL.md → docs/DEBUG_API_SERVER.md
Executable file → Normal file
@@ -1,35 +1,34 @@
|
||||
# Debugging GraphQL server issues
|
||||
|
||||
The GraphQL server is an API middle layer, running on its own port specified by `GRAPHQL_PORT`, to retrieve and show the data in the UI. It can also be used to retrieve data for custom third-party integrations. Check the [API documentation](./API.md) for details.
|
||||
The GraphQL server is an API middle layer, running on its own port specified by `GRAPHQL_PORT`, to retrieve and show the data in the UI. It can also be used to retrieve data for custom third-party integrations. Check the [API documentation](./API.md) for details.
|
||||
|
||||
The most common issue is that the GraphQL server doesn't start properly, usually due to a **port conflict**. If you are running multiple NetAlertX instances, make sure to use **unique ports** by changing the `GRAPHQL_PORT` setting. The default is `20212`.
|
||||
|
||||
## How to update the `GRAPHQL_PORT` in case of issues
|
||||
|
||||
As a first troubleshooting step, try changing the default `GRAPHQL_PORT` setting. Please remember NetAlertX is running on the host, so any application using the same port will cause issues.
|
||||
As a first troubleshooting step, try changing the default `GRAPHQL_PORT` setting. Please remember NetAlertX is running on the host, so any application using the same port will cause issues.
|
||||
|
||||
### Updating the setting via the Settings UI
|
||||
|
||||
Ideally use the Settings UI to update the setting under General -> Core -> GraphQL port:
|
||||
|
||||

|
||||

|
||||
|
||||
You might need to temporarily stop other applications or NetAlertX instances causing conflicts to update the setting. The `API_TOKEN` is used to authenticate any API calls, including GraphQL requests.
|
||||
You might need to temporarily stop other applications or NetAlertX instances causing conflicts to update the setting. The `API_TOKEN` is used to authenticate any API calls, including GraphQL requests.
|
||||
|
||||
### Updating the `app.conf` file
|
||||
|
||||
If the UI is not accessible, you can directly edit the `app.conf` file in your `/config` folder:
|
||||
|
||||

|
||||

|
||||
|
||||
### Using a docker variable
|
||||
|
||||
All application settings can also be initialized via the `APP_CONF_OVERRIDE` docker env variable.
|
||||
All application settings can also be initialized via the `APP_CONF_OVERRIDE` docker env variable.
|
||||
|
||||
```yaml
|
||||
...
|
||||
environment:
|
||||
- TZ=Europe/Berlin
|
||||
- PORT=20213
|
||||
- APP_CONF_OVERRIDE={"GRAPHQL_PORT":"20214"}
|
||||
...
|
||||
@@ -43,22 +42,22 @@ There are several ways to check if the GraphQL server is running.
|
||||
|
||||
You can navigate to Maintenance -> Init Check to see if `isGraphQLServerRunning` is ticked:
|
||||
|
||||

|
||||

|
||||
|
||||
### Checking the Logs
|
||||
|
||||
You can navigate to Maintenance -> Logs and search for `graphql` to see if it started correctly and is serving requests:
|
||||
|
||||

|
||||

|
||||
|
||||
### Inspecting the Browser console
|
||||
|
||||
In your browser open the dev console (usually F12) and navigate to the Network tab where you can filter GraphQL requests (e.g., reload the Devices page).
|
||||
|
||||

|
||||

|
||||
|
||||
You can then inspect any of the POST requests by opening them in a new tab.
|
||||
|
||||

|
||||

|
||||
|
||||
|
||||
@@ -3,13 +3,13 @@
|
||||
Check the HTTP response of the failing backend call by following these steps:
|
||||
|
||||
- Open the developer console in your browser (usually by pressing F12, e.g., in Chrome).
|
||||
- Follow the steps in this screenshot:
|
||||
- Follow the steps in this screenshot:
|
||||
|
||||
![F12DeveloperConsole][F12DeveloperConsole]
|
||||
|
||||
- Copy the URL causing the error and enter it in the address bar of your browser directly and hit enter. The copied URLs could look something like this (notice the query strings at the end):
|
||||
- `http://<NetAlertX URL>:20211/api/table_devices.json?nocache=1704141103121`
|
||||
- `http://<NetAlertX URL>:20211/php/server/devices.php?action=getDevicesTotals`
|
||||
- `http://<server>:20211/api/table_devices.json?nocache=1704141103121`
|
||||
- `http://<server>:20211/php/server/devices.php?action=getDevicesTotals`
|
||||
|
||||
|
||||
- Post the error response in the existing issue thread on GitHub or create a new issue and include the redacted response of the failing query.
|
||||
|
||||
34
docs/DEBUG_PHP.md
Executable file
34
docs/DEBUG_PHP.md
Executable file
@@ -0,0 +1,34 @@
|
||||
# Debugging backend PHP issues
|
||||
|
||||
## Logs in UI
|
||||
|
||||

|
||||
|
||||
You can view recent backend PHP errors directly in the **Maintenance > Logs** section of the UI. This provides quick access to logs without needing terminal access.
|
||||
|
||||
## Accessing logs directly
|
||||
|
||||
Sometimes, the UI might not be accessible. In that case, you can access the logs directly inside the container.
|
||||
|
||||
### Step-by-step:
|
||||
|
||||
1. **Open a shell into the container:**
|
||||
|
||||
```bash
|
||||
docker exec -it netalertx /bin/sh
|
||||
```
|
||||
|
||||
2. **Check the NGINX error log:**
|
||||
|
||||
```bash
|
||||
cat /var/log/nginx/error.log
|
||||
```
|
||||
|
||||
3. **Check the PHP application error log:**
|
||||
|
||||
```bash
|
||||
cat /tmp/log/app.php_errors.log
|
||||
```
|
||||
|
||||
These logs will help identify syntax issues, fatal errors, or startup problems when the UI fails to load properly.
|
||||
|
||||
@@ -1,5 +1,8 @@
|
||||
# Troubleshooting plugins
|
||||
|
||||
> [!TIP]
|
||||
> Before troubleshooting, please ensure you have the right [Debugging and LOG_LEVEL set](./DEBUG_TIPS.md).
|
||||
|
||||
## High-level overview
|
||||
|
||||
If a Plugin supplies data to the main app, it does so either via a SQL query or via a script that updates the `last_result.log` file in the plugin log folder (`app/log/plugins/`).
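
A quick way to confirm that a script-based plugin actually produced output is to inspect its `last_result.log` inside the container (folder taken from the path above; the exact file name may vary per plugin):

```bash
# List the plugin log folder, then inspect the relevant result file
docker exec netalertx ls -l /app/log/plugins/
docker exec netalertx cat /app/log/plugins/last_result.log   # adjust the file name to your plugin
```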
|
||||
@@ -9,7 +12,7 @@ For a more in-depth overview on how plugins work check the [Plugins development
|
||||
### Prerequisites
|
||||
|
||||
- Make sure you read and followed the specific plugin setup instructions.
|
||||
- Ensure you have [debug enabled (see More Logging)](./DEBUG_TIPS.md)
|
||||
- Ensure you have [debug enabled (see More Logging)](./DEBUG_TIPS.md)
|
||||
|
||||
### Potential issues
|
||||
|
||||
@@ -47,9 +50,9 @@ Input data from the plugin might cause mapping issues in specific edge cases. Lo
|
||||
17:31:05 [Plugins] history_to_insert count: 4
|
||||
17:31:05 [Plugins] objects_to_insert count: 0
|
||||
17:31:05 [Plugins] objects_to_update count: 4
|
||||
17:31:05 [Plugin utils] In pluginEvents there are 2 events with the status "watched-not-changed"
|
||||
17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status "missing-in-last-scan"
|
||||
17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status "watched-not-changed"
|
||||
17:31:05 [Plugin utils] In pluginEvents there are 2 events with the status "watched-not-changed"
|
||||
17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status "missing-in-last-scan"
|
||||
17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status "watched-not-changed"
|
||||
17:31:05 [Plugins] Mapping objects to database table: CurrentScan
|
||||
17:31:05 [Plugins] SQL query for mapping: INSERT into CurrentScan ( "cur_MAC", "cur_IP", "cur_LastQuery", "cur_Name", "cur_Vendor", "cur_ScanMethod") VALUES ( ?, ?, ?, ?, ?, ?)
|
||||
17:31:05 [Plugins] SQL sqlParams for mapping: [('01:01:01:01:01:01', '172.30.0.1', 0, 'aaaa', 'vvvvvvvvv', 'PIHOLE'), ('02:42:ac:1e:00:02', '172.30.0.2', 0, 'dddd', 'vvvvv2222', 'PIHOLE')]
|
||||
@@ -80,7 +83,7 @@ These values, if formatted correctly, will also show up in the UI:
|
||||
|
||||
### Sharing application state
|
||||
|
||||
Sometimes specific log sections are needed to debug issues. The Devices and CurrentScan table data is sometimes needed to figure out what's wrong.
|
||||
Sometimes specific log sections are needed to debug issues. The Devices and CurrentScan table data is sometimes needed to figure out what's wrong.
|
||||
|
||||
1. Please set `LOG_LEVEL` to `trace` (Disable it once you have the info as this produces big log files).
|
||||
2. Wait for the issue to occur.
|
||||
|
||||
@@ -1,30 +1,36 @@
|
||||
# Debugging and troubleshooting
|
||||
|
||||
Please follow tips 1 - 4 to get a more detailed error.
|
||||
Please follow tips 1 - 4 to get a more detailed error.
|
||||
|
||||
## 1. More Logging
|
||||
## 1. More Logging
|
||||
|
||||
When debugging an issue always set the highest log level:
|
||||
|
||||
`LOG_LEVEL='trace'`
|
||||
|
||||
## 2. Surfacing errors when container restarts
|
||||
## 2. Surfacing errors when container restarts
|
||||
|
||||
Start the container via the **terminal** with a command similar to this one:
|
||||
|
||||
```bash
|
||||
docker run --rm --network=host \
|
||||
-v local/path/netalertx/config:/app/config \
|
||||
-v local/path/netalertx/db:/app/db \
|
||||
-e TZ=Europe/Berlin \
|
||||
docker run \
|
||||
--network=host \
|
||||
--restart unless-stopped \
|
||||
-v /local_data_dir:/data \
|
||||
-v /etc/localtime:/etc/localtime:ro \
|
||||
--tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
|
||||
-e PORT=20211 \
|
||||
-e APP_CONF_OVERRIDE='{"GRAPHQL_PORT":"20214"}' \
|
||||
ghcr.io/jokob-sk/netalertx:latest
|
||||
|
||||
```
|
||||
|
||||
> ⚠ Please note, don't use the `-d` parameter so you see the error when the container crashes. Use this error in your issue description.
|
||||
Note: Your `/local_data_dir` should contain a `config` and `db` folder.
|
||||
|
||||
## 3. Check the _dev image and open issues
|
||||
> [!NOTE]
|
||||
> ⚠ The most important part is NOT to use the `-d` parameter so you see the error when the container crashes. Use this error in your issue description.
|
||||
|
||||
## 3. Check the _dev image and open issues
|
||||
|
||||
If possible, check if your issue got fixed in the `_dev` image before opening a new issue. The container is:
|
||||
|
||||
@@ -34,7 +40,7 @@ If possible, check if your issue got fixed in the `_dev` image before opening a
|
||||
|
||||
Please also search [open issues](https://github.com/jokob-sk/NetAlertX/issues).
|
||||
|
||||
## 4. Disable restart behavior
|
||||
## 4. Disable restart behavior
|
||||
|
||||
To prevent a Docker container from automatically restarting in a Docker Compose file, specify the restart policy as `no`:
|
||||
|
||||
@@ -48,9 +54,14 @@ services:
|
||||
# Other service configurations...
|
||||
```
|
||||
|
||||
## 5. Sharing application state
|
||||
## 5. Use tmpfs mounts to rule out host permission issues
|
||||
|
||||
Sometimes specific log sections are needed to debug issues. The Devices and CurrentScan table data is sometimes needed to figure out what's wrong.
|
||||
Try starting the container with all data in non-persistent volumes. If this works, the issue might be related to the permissions of your persistent data mount locations on your server. See the [Permissions guide](./FILE_PERMISSIONS.md) for details.
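
A minimal sketch of such a test run (no persistent mounts, so nothing survives the container; adjust the image tag, timezone, and port to your setup):

```bash
docker run --rm --network=host \
  -e TZ=Europe/Berlin \
  -e PORT=20211 \
  ghcr.io/jokob-sk/netalertx:latest
```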
|
||||
|
||||
|
||||
## 6. Sharing application state
|
||||
|
||||
Sometimes specific log sections are needed to debug issues. The Devices and CurrentScan table data is sometimes needed to figure out what's wrong.
|
||||
|
||||
1. Please set `LOG_LEVEL` to `trace` (Disable it once you have the info as this produces big log files).
|
||||
2. Wait for the issue to occur.
|
||||
@@ -61,4 +72,4 @@ Sometimes specific log sections are needed to debug issues. The Devices and Curr
|
||||
|
||||
## Common issues
|
||||
|
||||
See [Common issues](./COMMON_ISSUES.md) for details.
|
||||
See [Common issues](./COMMON_ISSUES.md) for additional troubleshooting tips.
|
||||
|
||||
@@ -4,8 +4,8 @@ NetAlertX allows you to mass-edit devices via a CSV export and import feature, o
|
||||
|
||||
## UI multi edit
|
||||
|
||||
> [!NOTE]
|
||||
> Make sure you have your backups saved and restorable before doing any mass edits. Check [Backup strategies](./BACKUPS.md).
|
||||
> [!NOTE]
|
||||
> Make sure you have your backups saved and restorable before doing any mass edits. Check [Backup strategies](./BACKUPS.md).
|
||||
|
||||
You can select devices in the _Devices_ view by selecting devices to edit and then clicking the _Multi-edit_ button or via the _Maintenance_ > _Multi-Edit_ section.
|
||||
|
||||
@@ -16,23 +16,23 @@ You can select devices in the _Devices_ view by selecting devices to edit and th
|
||||
|
||||
The database and device structure may change with new releases. When using the CSV import functionality, ensure the format matches what the application expects. To avoid issues, you can first export the devices and review the column formats before importing any custom data.
|
||||
|
||||
> [!NOTE]
|
||||
> [!NOTE]
|
||||
> As always, backup everything, just in case.
|
||||
|
||||
1. In _Maintenance_ > _Backup / Restore_ click the _CSV Export_ button.
|
||||
1. In _Maintenance_ > _Backup / Restore_ click the _CSV Export_ button.
|
||||
2. A `devices.csv` is generated in the `/config` folder
|
||||
3. Edit the `devices.csv` file however you like.
|
||||
3. Edit the `devices.csv` file however you like.
|
||||
|
||||

|
||||
|
||||
> [!NOTE]
|
||||
> The file contains a list of Devices, including the network relationships between Network Nodes and connected devices. You can also trigger this by accessing this URL: `<your netalertx url>/php/server/devices.php?action=ExportCSV` or via the `CSV Backup` plugin. (💡 You can schedule this)
|
||||
> [!NOTE]
|
||||
> The file contains a list of Devices, including the network relationships between Network Nodes and connected devices. You can also trigger this by accessing this URL: `<server>:20211/php/server/devices.php?action=ExportCSV` or via the `CSV Backup` plugin. (💡 You can schedule this)
|
||||
|
||||

|
||||
|
||||
### File encoding format
|
||||
|
||||
> [!NOTE]
|
||||
> [!NOTE]
|
||||
> Keep Linux line endings (suggested editors: Nano, Notepad++)
|
||||
|
||||

|
||||
|
||||
114
docs/DEVICE_HEURISTICS.md
Executable file
114
docs/DEVICE_HEURISTICS.md
Executable file
@@ -0,0 +1,114 @@
|
||||
# Device Heuristics: Icon and Type Guessing
|
||||
|
||||
This module is responsible for inferring the most likely **device type** and **icon** based on minimal identifying data like MAC address, vendor, IP, or device name.
|
||||
|
||||
It does this using a set of heuristics defined in an external JSON rules file, which it evaluates **in priority order**.
|
||||
|
||||
>[!NOTE]
|
||||
> You can find the full source code of the heuristics module in the `device_heuristics.py` file.
|
||||
|
||||
---
|
||||
|
||||
## JSON Rule Format
|
||||
|
||||
Rules are defined in a file called `device_heuristics_rules.json` (located under `/back`), structured like:
|
||||
|
||||
```json
|
||||
[
|
||||
{
|
||||
"dev_type": "Phone",
|
||||
"icon_html": "<i class=\"fa-brands fa-apple\"></i>",
|
||||
"matching_pattern": [
|
||||
{ "mac_prefix": "001A79", "vendor": "Apple" }
|
||||
],
|
||||
"name_pattern": ["iphone", "pixel"]
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
>[!NOTE]
|
||||
> Feel free to raise a PR in case you'd like to add any rules into the `device_heuristics_rules.json` file. Please place new rules into the correct position and consider the priority of already available rules.
|
||||
|
||||
### Supported fields:
|
||||
|
||||
| Field | Type | Description |
|
||||
| ------------------ | -------------------- | --------------------------------------------------------------- |
|
||||
| `dev_type` | `string` | Type to assign if rule matches (e.g. `"Gateway"`, `"Phone"`) |
|
||||
| `icon_html` | `string` | Icon (HTML string) to assign if rule matches. Encoded to base64 at load time. |
|
||||
| `matching_pattern` | `array` | List of `{ mac_prefix, vendor }` objects for first strict and then loose matching |
|
||||
| `name_pattern` | `array` *(optional)* | List of lowercase substrings (used with regex) |
|
||||
| `ip_pattern` | `array` *(optional)* | Regex patterns to match IPs |
|
||||
|
||||
**Order in this array defines priority** — rules are checked top-down and short-circuit on first match.
|
||||
|
||||
---
|
||||
|
||||
## Matching Flow (in Priority Order)
|
||||
|
||||
The function `guess_device_attributes(...)` runs a series of matching functions in strict order:
|
||||
|
||||
1. MAC + Vendor → `match_mac_and_vendor()`
|
||||
2. Vendor only → `match_vendor()`
|
||||
3. Name pattern → `match_name()`
|
||||
4. IP pattern → `match_ip()`
|
||||
5. Final fallback → defaults defined in the `NEWDEV_devIcon` and `NEWDEV_devType` settings.
|
||||
|
||||
> [!NOTE]
|
||||
> The app will try guessing the device type or icon if `devType` or `devIcon` are `""` or `"null"`.
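
The staged evaluation can be pictured with the following self-contained sketch. The function layout and signatures are simplified assumptions; the real logic lives in `device_heuristics.py`, converts `icon_html` to `icon_base64`, and keeps evaluating later clues while the type or icon still equals the default.

```python
import json
import re

def guess(rules, mac, vendor, name, ip, default_type="unknown", default_icon=""):
    prefix = mac.replace(":", "").upper()[:6]
    vendor, name, ip = (vendor or "").lower(), (name or "").lower(), ip or ""

    # Stage 1: MAC prefix + vendor (most specific)
    for r in rules:
        if any(p.get("mac_prefix") and prefix.startswith(p["mac_prefix"].upper())
               and p.get("vendor", "").lower() in vendor
               for p in r.get("matching_pattern", [])):
            return r["dev_type"], r.get("icon_html", default_icon)

    # Stage 2: vendor only (ignore patterns that carry a mac_prefix)
    for r in rules:
        if any(not p.get("mac_prefix") and p.get("vendor")
               and p["vendor"].lower() in vendor
               for p in r.get("matching_pattern", [])):
            return r["dev_type"], r.get("icon_html", default_icon)

    # Stage 3: lowercase name substrings / regex
    for r in rules:
        if name and any(re.search(pat, name) for pat in r.get("name_pattern", [])):
            return r["dev_type"], r.get("icon_html", default_icon)

    # Stage 4: IP regex patterns
    for r in rules:
        if ip and any(re.search(pat, ip) for pat in r.get("ip_pattern", [])):
            return r["dev_type"], r.get("icon_html", default_icon)

    # Stage 5: fall back to the NEWDEV_devType / NEWDEV_devIcon defaults
    return default_type, default_icon

with open("device_heuristics_rules.json") as f:
    rules = json.load(f)

print(guess(rules, "00:1A:79:AA:BB:CC", "Apple", "Johns iPhone", "192.168.1.10"))
```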
|
||||
|
||||
### Use of default values
|
||||
|
||||
The guessing process runs for every device **as long as the current type or icon still matches the default values**. Even if earlier heuristics return a match, the system continues evaluating additional clues — like name or IP — to try and replace placeholders.
|
||||
|
||||
```python
|
||||
# Still considered a match attempt if current values are defaults
|
||||
if (not type_ or type_ == default_type) or (not icon or icon == default_icon):
|
||||
type_, icon = match_ip(ip, default_type, default_icon)
|
||||
```
|
||||
|
||||
In other words: if the type or icon is still `"unknown"` (or matches the default), the system assumes the match isn’t final — and keeps looking. It stops only when both values are non-default (defaults are defined in the `NEWDEV_devIcon` and `NEWDEV_devType` settings).
|
||||
|
||||
---
|
||||
|
||||
## Match Behavior (per function)
|
||||
|
||||
These functions are executed in the following order:
|
||||
|
||||
### `match_mac_and_vendor(mac_clean, vendor, ...)`
|
||||
|
||||
* Looks for MAC prefix **and** vendor substring match
|
||||
* Most precise
|
||||
* Stops as soon as a match is found
|
||||
|
||||
### `match_vendor(vendor, ...)`
|
||||
|
||||
* Falls back to substring match on vendor only
|
||||
* Ignores rules where `mac_prefix` is present (ensures this is really a fallback)
|
||||
|
||||
### `match_name(name, ...)`
|
||||
|
||||
* Lowercase name is compared against all `name_pattern` values using regex
|
||||
* Good for user-assigned labels (e.g. "AP Office", "iPhone")
|
||||
|
||||
### `match_ip(ip, ...)`
|
||||
|
||||
* If IP is present and matches regex patterns under any rule, it returns that type/icon
|
||||
* Usually used for gateways or local IP ranges
|
||||
|
||||
---
|
||||
|
||||
## Icons
|
||||
|
||||
* Each rule can define an `icon_html`, which is converted to `icon_base64` on load
|
||||
* If missing, it falls back to the passed-in `default_icon` (`NEWDEV_devIcon` setting)
|
||||
* If a match is found but icon is still blank, default is used
|
||||
|
||||
**TL;DR:** Type and icon must both be matched. If only one is matched, the other falls back to the default.
|
||||
|
||||
---
|
||||
|
||||
## Priority Mechanics
|
||||
|
||||
* JSON rules are evaluated **top-to-bottom**
|
||||
* Matching is **first-hit wins** — no scoring, no weights
|
||||
* Rules that are more specific (e.g. exact MAC prefixes) should be listed earlier
|
||||
@@ -1,8 +1,8 @@
|
||||
# NetAlertX - Device Management
|
||||
# Device Management
|
||||
|
||||
The Main Info section is where most of the device's identifiable information is stored and edited. Some of the information is autodetected via various plugins. Initial values for most of the fields can be specified in the `NEWDEV` plugin.
|
||||
|
||||
> [!NOTE]
|
||||
> [!NOTE]
|
||||
>
|
||||
> You can multi-edit devices by selecting them in the main Devices view, from the Maintenance section, or via the CSV Export functionality under Maintenance. More info can be found in the [Devices Bulk-editing docs](./DEVICES_BULK_EDITING.md).
|
||||
|
||||
@@ -14,23 +14,23 @@ The Main Info section is where most of the device identifiable information is st
|
||||
- **MAC**: MAC address of the device. Not editable, unless creating a new dummy device.
|
||||
- **Last IP**: IP address of the device. Not editable, unless creating a new dummy device.
|
||||
- **Name**: Friendly device name. Autodetected via various 🆎 Name discovery [plugins](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md). The app attaches `(IP match)` if the name is discovered via an IP match rather than a MAC match, which means the name could be incorrect as IPs might change.
|
||||
- **Icon**: Partially autodetected. Select an existing or [add a custom icon](./ICONS.md). You can also auto-apply the same icon on all devices of the same type.
|
||||
- **Icon**: Partially autodetected. Select an existing or [add a custom icon](./ICONS.md). You can also auto-apply the same icon on all devices of the same type.
|
||||
- **Owner**: Device owner (The list is self-populated with existing owners and you can add custom values).
|
||||
- **Type**: Select a device type from the dropdown list (`Smartphone`, `Tablet`,
|
||||
`Laptop`, `TV`, `router`, etc.) or add a new device type. If you want the device to act as a **Network device** (and be able to be a network node in the Network view), select a type under Network Devices or add a new Network Device type in Settings. More information can be found in the [Network Setup docs](./NETWORK_TREE.md).
|
||||
`Laptop`, `TV`, `router`, etc.) or add a new device type. If you want the device to act as a **Network device** (and be able to be a network node in the Network view), select a type under Network Devices or add a new Network Device type in Settings. More information can be found in the [Network Setup docs](./NETWORK_TREE.md).
|
||||
- **Vendor**: The manufacturing vendor. Automatically updated by NetAlertX when empty or unknown, can be edited.
|
||||
- **Group**: Select a group (`Always on`, `Personal`, `Friends`, etc.) or type
|
||||
your own Group name.
|
||||
- **Location**: Select the location, usually a room, where the device is located (`Kitchen`, `Attic`, `Living room`, etc.) or add a custom Location.
|
||||
- **Location**: Select the location, usually a room, where the device is located (`Kitchen`, `Attic`, `Living room`, etc.) or add a custom Location.
|
||||
- **Comments**: Add any comments for the device, such as a serial number, or maintenance information.
|
||||
|
||||
> [!NOTE]
|
||||
> [!NOTE]
|
||||
>
|
||||
> Please note the above usage of the fields are only suggestions. You can use most of these fields for other purposes, such as storing the network interface, company owning a device, or similar.
|
||||
> Please note the above usage of the fields are only suggestions. You can use most of these fields for other purposes, such as storing the network interface, company owning a device, or similar.
|
||||
|
||||
## Dummy devices
|
||||
|
||||
You can create dummy devices from the Devices listing screen.
|
||||
You can create dummy devices from the Devices listing screen.
|
||||
|
||||

|
||||
|
||||
@@ -39,12 +39,12 @@ The **MAC** field and the **Last IP** field will then become editable.
|
||||

|
||||
|
||||
|
||||
> [!NOTE]
|
||||
> [!NOTE]
|
||||
>
|
||||
> You can couple this with the `ICMP` plugin which can be used to monitor the status of these devices, if they are actual devices reachable with the `ping` command. If not, you can use a loopback IP address so they appear online, such as `0.0.0.0` or `127.0.0.1`.
|
||||
|
||||
## Copying data from an existing device.
|
||||
## Copying data from an existing device.
|
||||
|
||||
To speed up device population you can also copy data from an existing device. This can be done from the **Tools** tab on the Device details.
|
||||
To speed up device population you can also copy data from an existing device. This can be done from the **Tools** tab on the Device details.
|
||||
|
||||
|
||||
|
||||
63
docs/DEV_DEVCONTAINER.md
Executable file
63
docs/DEV_DEVCONTAINER.md
Executable file
@@ -0,0 +1,63 @@
|
||||
# Devcontainer for NetAlertX Guide
|
||||
|
||||
This devcontainer is designed to mirror the production container environment as closely as possible, while providing a rich set of tools for development.
|
||||
|
||||
## How to Get Started
|
||||
|
||||
1. **Prerequisites:**
|
||||
* A working **Docker installation** that can be managed by your user. This can be [Docker Desktop](https://www.docker.com/products/docker-desktop/) or Docker Engine installed via other methods (like the official [get-docker script](https://get.docker.com)).
|
||||
* [Visual Studio Code](https://code.visualstudio.com/) installed.
|
||||
* The [VS Code Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) installed.
|
||||
|
||||
2. **Launch the Devcontainer:**
|
||||
* Clone this repository.
|
||||
* Open the repository folder in VS Code.
|
||||
* A notification will pop up in the bottom-right corner asking to **"Reopen in Container"**. Click it.
|
||||
* VS Code will now build the Docker image and connect your editor to the container. Your terminal, debugger, and all tools will now be running inside this isolated environment.
|
||||
|
||||
## Key Workflows & Features
|
||||
|
||||
Once you're inside the container, everything is set up for you.
|
||||
|
||||
### 1. Services (Frontend & Backend)
|
||||
|
||||

|
||||
|
||||
The container's startup script (`.devcontainer/scripts/setup.sh`) automatically starts the Nginx/PHP frontend and the Python backend. You can restart them at any time using the built-in tasks.
|
||||
|
||||
### 2. Integrated Debugging (Just Press F5!)
|
||||
|
||||

|
||||
|
||||
Debugging for both the Python backend and PHP frontend is pre-configured and ready to go.
|
||||
|
||||
* **Python Backend (debugpy):** The backend automatically starts with a debugger attached on port `5678`. Simply open a Python file (e.g., `server/__main__.py`), set a breakpoint, and press **F5** (or select "Python Backend Debug: Attach") to connect the debugger.
|
||||
* **PHP Frontend (Xdebug):** Xdebug listens on port `9003`. In VS Code, start listening for Xdebug connections and use a browser extension (like "Xdebug helper") to start a debugging session for the web UI.
|
||||
|
||||
### 3. Common Tasks (F1 -> Run Task)
|
||||
|
||||

|
||||
|
||||
We've created several VS Code Tasks to simplify common operations. Access them by pressing `F1` and typing "Tasks: Run Task".
|
||||
|
||||
* `Generate Dockerfile`: **This is important.** The actual `.devcontainer/Dockerfile` is auto-generated. If you need to change the container environment, edit `.devcontainer/resources/devcontainer-Dockerfile` and then run this task.
|
||||
* `Re-Run Startup Script`: Manually re-runs the `.devcontainer/scripts/setup.sh` script to re-link files and restart services.
|
||||
* `Start Backend (Python)` / `Start Frontend (nginx and PHP-FPM)`: Manually restart the services if needed.
|
||||
|
||||
### 4. Running Tests
|
||||
|
||||

|
||||
|
||||
The environment includes `pytest`. You can run tests directly from the VS Code Test Explorer UI or by running `pytest -q` in the integrated terminal. The necessary `PYTHONPATH` is already configured so that tests can correctly import the server modules.
|
||||
|
||||
## How to Maintain This Devcontainer
|
||||
|
||||
The setup is designed to be easy to manage. Here are the core principles:
|
||||
|
||||
* **Don't Edit `Dockerfile` Directly:** The main `.devcontainer/Dockerfile` is a combination of the project's root `Dockerfile` and a special dev-only stage. To add new tools or dependencies, **edit `.devcontainer/resources/devcontainer-Dockerfile`** and then run the `Generate Dockerfile` task.
|
||||
* **Build-Time vs. Run-Time Setup:**
|
||||
* For changes that can be baked into the image (like installing a new package with `apk add`), add them to the resource Dockerfile.
|
||||
* For changes that must happen when the container *starts* (like creating symlinks, setting permissions, or starting services), use `.devcontainer/scripts/setup.sh`.
|
||||
* **Project Conventions:** The `.github/copilot-instructions.md` file is an excellent resource to help AI and humans understand the project's architecture, conventions, and how to use existing helper functions instead of hardcoding values.
|
||||
|
||||
This setup provides a powerful and consistent foundation for all current and future contributors to NetAlertX.
|
||||
@@ -1,32 +1,45 @@
|
||||
# Development environment set up
|
||||
# Development Environment Setup
|
||||
|
||||
>[!NOTE]
|
||||
> Replace `/development` with the path where your code files will be stored. The default container name is `netalertx` so there might be a conflict with your running containers.
|
||||
I truly appreciate all contributions! To help keep this project maintainable, this guide provides an overview of project priorities, key design considerations, and overall philosophy. It also includes instructions for setting up your environment so you can start contributing right away.
|
||||
|
||||
## Development Guidelines
|
||||
|
||||
Before starting development, please scan the below development guidelines.
|
||||
Before starting development, please review the following guidelines.
|
||||
|
||||
### Priority Order (Highest to Lowest)
|
||||
|
||||
1. 🔼 Fixing core bugs that lack workarounds.
|
||||
2. 🔵 Adding core functionality that unlocks other features (e.g., plugins).
|
||||
3. 🔵 Refactoring to enable faster development.
|
||||
4. 🔽 UI improvements (PRs welcome).
|
||||
1. 🔼 Fixing core bugs that lack workarounds
|
||||
2. 🔵 Adding core functionality that unlocks other features (e.g., plugins)
|
||||
3. 🔵 Refactoring to enable faster development
|
||||
4. 🔽 UI improvements (PRs welcome, but low priority)
|
||||
|
||||
### Design Philosophy
|
||||
|
||||
Focus on core functionality and integrate with existing tools rather than reinventing the wheel.
|
||||
Examples:
|
||||
The application architecture is designed for extensibility and maintainability. It relies heavily on configuration manifests via plugins and settings to dynamically build the UI and populate the application with data from various sources.
|
||||
|
||||
- Using **Apprise** for notifications instead of implementing multiple separate gateways.
|
||||
- Implementing **regex-based validation** instead of one-off validation for each setting.
|
||||
For details, see:
|
||||
- [Plugins Development](PLUGINS_DEV.md) (includes video)
|
||||
- [Settings System](SETTINGS_SYSTEM.md)
|
||||
|
||||
> [!NOTE]
|
||||
> UI changes have lower priority, however, PRs are welcome, but **keep them small & focused**.
|
||||
Focus on **core functionality** and integrate with existing tools rather than reinventing the wheel.
|
||||
|
||||
Examples:
|
||||
- Using **Apprise** for notifications instead of implementing multiple separate gateways
|
||||
- Implementing **regex-based validation** instead of one-off validation for each setting
|
||||
|
||||
> [!NOTE]
|
||||
> UI changes have lower priority. PRs are welcome, but please keep them **small and focused**.
|
||||
|
||||
## Development Environment Set Up
|
||||
|
||||
>[!TIP]
|
||||
> There is also a ready to use [devcontainer](DEV_DEVCONTAINER.md) available.
|
||||
|
||||
The following steps will guide you to set up your environment for local development and to run a custom docker build on your system. For most changes the container doesn't need to be rebuilt, which speeds up development significantly.
|
||||
|
||||
>[!NOTE]
|
||||
> Replace `/development` with the path where your code files will be stored. The default container name is `netalertx` so there might be a conflict with your running containers.
|
||||
|
||||
### 1. Download the code:
|
||||
|
||||
- `mkdir /development`
|
||||
@@ -42,7 +55,6 @@ The file content should be following, with your custom values.
|
||||
#--------------------------------
|
||||
#NETALERTX
|
||||
#--------------------------------
|
||||
TZ=Europe/Berlin
|
||||
PORT=22222 # make sure this port is unique on your whole network
|
||||
DEV_LOCATION=/development/NetAlertX
|
||||
APP_DATA_LOCATION=/volume/docker_appdata
|
||||
@@ -84,14 +96,14 @@ Most code changes can be tested without rebuilding the container. When working o
|
||||
|
||||
1. You can usually restart the backend via _Maintenance > Logs > Restart_ server
|
||||
|
||||

|
||||

|
||||
|
||||
2. If above doesn't work, SSH into the container and kill & restart the main script loop
|
||||
|
||||
- `sudo docker exec -it netalertx /bin/bash`
|
||||
- `pkill -f "python /app/server" && python /app/server & `
|
||||
|
||||
3. If none of the above work, restart the docker caontainer.
|
||||
3. If none of the above work, restart the docker container.
|
||||
|
||||
- This is usually the last resort as sometimes the Docker engine becomes unresponsive and the whole engine needs to be restarted.
|
||||
|
||||
@@ -119,3 +131,6 @@ Most code changes can be tested without rebuilding the container. When working o
|
||||
- Updating a Device
|
||||
- Plugin functionality.
|
||||
- Error log inspection.
|
||||
|
||||
> [!NOTE]
|
||||
> Always run all available tests as per the [Testing documentation](API_TESTS.md).
|
||||
|
||||
44
docs/DEV_PORTS_HOST_MODE.md
Executable file
44
docs/DEV_PORTS_HOST_MODE.md
Executable file
@@ -0,0 +1,44 @@
|
||||
# Dev Ports in Host Network Mode
|
||||
|
||||
When using `"--network=host"` in the devcontainer, VS Code's normal port forwarding model doesn't apply. All container ports are already on the host network namespace, so:
|
||||
|
||||
- Listing ports in `forwardPorts` can cause VS Code to pre-bind or reserve them (conflicts with startup scripts waiting for a free port).
|
||||
- The PORTS panel will not auto-detect services reliably, because forwarding isn't occurring.
|
||||
- Debugger ports (e.g. Xdebug `9003`, Python debugpy `5678`) can still be listed safely.
|
||||
|
||||
## Recommended Pattern
|
||||
|
||||
1. Only include debugger ports in `forwardPorts`:
|
||||
```jsonc
|
||||
"forwardPorts": [5678, 9003]
|
||||
```
|
||||
2. Do NOT list application service ports (e.g. 20211, 20212) there when in host mode.
|
||||
3. Use the helper task to enumerate current bindings:
|
||||
- Run task: `> Tasks: Run Task` → `[Dev Container] List NetAlertX Ports`
|
||||
|
||||
## Port Enumeration Script
|
||||
Script: `scripts/list-ports.sh`
|
||||
Outputs binding address, PID (if resolvable) and process name for key ports.
|
||||
|
||||
You can edit the PORTS variable inside that script to add/remove watched ports.
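
If you just want a quick look without the task, standard tools work too (the ports listed are the defaults mentioned in this guide):

```bash
# Show listeners for the app, GraphQL, and debugger ports
ss -ltnp | grep -E '20211|20212|5678|9003'
```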
|
||||
|
||||
## Xdebug Notes
|
||||
Set in `99-xdebug.ini`:
|
||||
```ini
|
||||
xdebug.client_host=127.0.0.1
|
||||
xdebug.client_port=9003
|
||||
xdebug.discover_client_host=1
|
||||
```
|
||||
Ensure your IDE is listening on 9003.
|
||||
|
||||
## Troubleshooting
|
||||
| Symptom | Cause | Fix |
|
||||
|---------|-------|-----|
|
||||
| `Waiting for port 20211 to free...` repeats | VS Code pre-bound the port via `forwardPorts` | Remove the port from `forwardPorts`, rebuild, retry |
|
||||
| PHP request hangs at start | Xdebug trying to connect to unresolved host (`host.docker.internal`) | Use `127.0.0.1` or rely on discovery |
|
||||
| PORTS panel empty | Expected in host mode | Use the port enumeration task |
|
||||
|
||||
## Future Improvements
|
||||
- Optional: add a small web status endpoint summarizing runtime ports.
|
||||
- Optional: detect host mode in `setup.sh` and skip the wait loop if the PID using port is the intended process.
|
||||
|
||||
@@ -1,139 +1,232 @@
|
||||
# `docker-compose.yaml` Examples
|
||||
# NetAlertX and Docker Compose
|
||||
|
||||
> [!NOTE]
|
||||
> The container needs to run in `network_mode:"host"`. This also means that not all functionality is supported on a Windows host as Docker for Windows doesn't support this networking option.
|
||||
> [!WARNING]
|
||||
> ⚠️ **Important:** The docker-compose has recently changed. Carefully read the [Migration guide](https://jokob-sk.github.io/NetAlertX/MIGRATION/?h=migrat#12-migration-from-netalertx-v25524) for detailed instructions.
|
||||
|
||||
### Example 1
|
||||
Great care is taken to ensure NetAlertX meets the needs of everyone while being flexible enough for anyone. This document outlines how you can configure your docker-compose. There are many settings, so we recommend using the Baseline Docker Compose as-is, or modifying it for your system.
|
||||
|
||||
> [!NOTE]
|
||||
> The container needs to run in `network_mode:"host"` to access Layer 2 networking such as arp, nmap and others. Due to lack of support for this feature, Windows host is not a supported operating system.
|
||||
|
||||
## Baseline Docker Compose
|
||||
|
||||
There is one baseline for NetAlertX. That's the default security-enabled official distribution.
|
||||
|
||||
```yaml
|
||||
services:
|
||||
netalertx:
|
||||
container_name: netalertx
|
||||
# use the below line if you want to test the latest dev image
|
||||
# image: "ghcr.io/jokob-sk/netalertx-dev:latest"
|
||||
image: "ghcr.io/jokob-sk/netalertx:latest"
|
||||
network_mode: "host"
|
||||
restart: unless-stopped
|
||||
#use an environmental variable to set host networking mode if needed
|
||||
container_name: netalertx # The container name shown in docker container ls
|
||||
image: ghcr.io/jokob-sk/netalertx-dev:latest
|
||||
network_mode: ${NETALERTX_NETWORK_MODE:-host} # Use host networking for ARP scanning and other services
|
||||
|
||||
read_only: true # Make the container filesystem read-only
|
||||
cap_drop: # Drop all capabilities for enhanced security
|
||||
- ALL
|
||||
cap_add: # Add only the necessary capabilities
|
||||
- NET_ADMIN # Required for ARP scanning
|
||||
- NET_RAW # Required for raw socket operations
|
||||
- NET_BIND_SERVICE # Required to bind to privileged ports (nbtscan)
|
||||
|
||||
volumes:
|
||||
- local_path/config:/app/config
|
||||
- local_path/db:/app/db
|
||||
# (optional) useful for debugging if you have issues setting up the container
|
||||
- local_path/logs:/app/log
|
||||
# (API: OPTION 1) use for performance
|
||||
- type: tmpfs
|
||||
target: /app/api
|
||||
# (API: OPTION 2) use when debugging issues
|
||||
# - local_path/api:/app/api
|
||||
- type: volume # Persistent Docker-managed named volume for config + database
|
||||
source: netalertx_data
|
||||
target: /data # `/data/config` and `/data/db` live inside this mount
|
||||
read_only: false
|
||||
|
||||
# Example custom local folder called /home/user/netalertx_data
|
||||
# - type: bind
|
||||
# source: /home/user/netalertx_data
|
||||
# target: /data
|
||||
# read_only: false
|
||||
# ... or use the alternative format
|
||||
# - /home/user/netalertx_data:/data:rw
|
||||
|
||||
- type: bind # Bind mount for timezone consistency
|
||||
source: /etc/localtime
|
||||
target: /etc/localtime
|
||||
read_only: true
|
||||
|
||||
# Mount your DHCP server file into NetAlertX for a plugin to access
|
||||
# - path/on/host/to/dhcp.file:/resources/dhcp.file
|
||||
|
||||
# tmpfs mount consolidates writable state for a read-only container and improves performance
|
||||
# uid=20211 and gid=20211 is the netalertx user inside the container
|
||||
# mode=1700 grants rwx------ permissions to the netalertx user only
|
||||
tmpfs:
|
||||
# Comment out to retain logs between container restarts - this has a server performance impact.
|
||||
- "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
|
||||
|
||||
# Retain logs - comment out tmpfs /tmp if you want to retain logs between container restarts
|
||||
# Please note if you remove the /tmp mount, you must create and maintain sub-folder mounts.
|
||||
# - /path/on/host/log:/tmp/log
|
||||
# - "/tmp/api:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
|
||||
# - "/tmp/nginx:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
|
||||
# - "/tmp/run:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
|
||||
|
||||
environment:
|
||||
- TZ=Europe/Berlin
|
||||
- PORT=20211
|
||||
LISTEN_ADDR: ${LISTEN_ADDR:-0.0.0.0} # Listen for connections on all interfaces
|
||||
PORT: ${PORT:-20211} # Application port
|
||||
GRAPHQL_PORT: ${GRAPHQL_PORT:-20212} # GraphQL API port (passed into APP_CONF_OVERRIDE at runtime)
|
||||
# NETALERTX_DEBUG: ${NETALERTX_DEBUG:-0} # 0=kill all services and restart if any dies. 1 keeps running dead services.
|
||||
|
||||
# Resource limits to prevent resource exhaustion
|
||||
mem_limit: 2048m # Maximum memory usage
|
||||
mem_reservation: 1024m # Soft memory limit
|
||||
cpu_shares: 512 # Relative CPU weight for CPU contention scenarios
|
||||
pids_limit: 512 # Limit the number of processes/threads to prevent fork bombs
|
||||
logging:
|
||||
driver: "json-file" # Use JSON file logging driver
|
||||
options:
|
||||
max-size: "10m" # Rotate log files after they reach 10MB
|
||||
max-file: "3" # Keep a maximum of 3 log files
|
||||
|
||||
# Always restart the container unless explicitly stopped
|
||||
restart: unless-stopped
|
||||
|
||||
volumes: # Persistent volume for configuration and database storage
|
||||
netalertx_data:
|
||||
```
|
||||
|
||||
To run the container execute: `sudo docker-compose up -d`
|
||||
Run or re-run it:
|
||||
|
||||
### Example 2
|
||||
|
||||
Example by [SeimuS](https://github.com/SeimusS).
|
||||
|
||||
```yaml
|
||||
services:
|
||||
netalertx:
|
||||
container_name: NetAlertX
|
||||
hostname: NetAlertX
|
||||
privileged: true
|
||||
# use the below line if you want to test the latest dev image
|
||||
# image: "ghcr.io/jokob-sk/netalertx-dev:latest"
|
||||
image: ghcr.io/jokob-sk/netalertx:latest
|
||||
environment:
|
||||
- TZ=Europe/Bratislava
|
||||
restart: always
|
||||
volumes:
|
||||
- ./netalertx/db:/app/db
|
||||
- ./netalertx/config:/app/config
|
||||
network_mode: host
|
||||
```

```sh
|
||||
docker compose up --force-recreate
|
||||
```
|
||||
|
||||
Alternatively, run it detached: `sudo docker compose up -d`
|
||||
### Customize with Environmental Variables
|
||||
|
||||
|
||||
You can override the default settings by passing environmental variables to the `docker compose up` command.
|
||||
|
||||
|
||||
**Example using a single variable:**
|
||||
|
||||
This command runs NetAlertX on port 8080 instead of the default 20211.
|
||||
|
||||
```sh
|
||||
PORT=8080 docker compose up
|
||||
```
|
||||
|
||||
**Example using all available variables:**
|
||||
|
||||
This command demonstrates overriding all primary environmental variables: running with host networking, on port 20211, GraphQL on 20212, and listening on all IPs.
|
||||
|
||||
```sh
|
||||
NETALERTX_NETWORK_MODE=host \
|
||||
LISTEN_ADDR=0.0.0.0 \
|
||||
PORT=20211 \
|
||||
GRAPHQL_PORT=20212 \
|
||||
NETALERTX_DEBUG=0 \
|
||||
docker compose up
|
||||
```
|
||||
|
||||
## `docker-compose.yaml` Modifications
|
||||
|
||||
### Modification 1: Use a Local Folder (Bind Mount)
|
||||
|
||||
By default, the baseline compose file uses a single named volume (netalertx_data) mounted at `/data`. This single-volume layout is preferred because NetAlertX manages both configuration and the database under `/data` (for example, `/data/config` and `/data/db`) via its web UI. Using one named volume simplifies permissions and portability: Docker manages the storage and NetAlertX manages the files inside `/data`.
|
||||
|
||||
A two-volume layout that mounts `/data/config` and `/data/db` separately (for example, `netalertx_config` and `netalertx_db`) is supported for backward compatibility and some advanced workflows, but it is an abnormal/legacy layout and not recommended for new deployments.
|
||||
|
||||
However, if you prefer to have direct, file-level access to your configuration for manual editing, a "bind mount" is a simple alternative. This tells Docker to use a specific folder from your computer (the "host") inside the container.
|
||||
|
||||
**How to make the change:**
|
||||
|
||||
1. Choose a location on your computer. For example, `/local_data_dir`.
|
||||
|
||||
2. Create the subfolders: `mkdir -p /local_data_dir/config` and `mkdir -p /local_data_dir/db`.
|
||||
|
||||
3. Edit your `docker-compose.yml` and find the `volumes:` section (the one *inside* the `netalertx:` service).
|
||||
|
||||
4. Comment out (add a `#` in front) or delete the existing named-volume entries (for example the single `netalertx_data` block, or `netalertx_config` and `netalertx_db` in a two-volume layout).
|
||||
|
||||
5. Add new lines pointing to your local folders.
|
||||
|
||||
**Before (Using Named Volumes - Preferred):**
|
||||
|
||||
```yaml
|
||||
...
|
||||
volumes:
|
||||
- netalertx_config:/data/config:rw # short-form named volume (a source without a leading / is treated as a named volume)
|
||||
- netalertx_db:/data/db:rw
|
||||
...
|
||||
```
|
||||
|
||||
**After (Using a Local Folder / Bind Mount):**
|
||||
Make sure to replace `/local_data_dir` with your actual path. The format is `<path_on_your_computer>:<path_inside_container>:<options>`.
|
||||
|
||||
```yaml
|
||||
...
|
||||
volumes:
|
||||
# - netalertx_config:/data/config:rw
|
||||
# - netalertx_db:/data/db:rw
|
||||
- /local_data_dir/config:/data/config:rw
|
||||
- /local_data_dir/db:/data/db:rw
|
||||
...
|
||||
```
|
||||
|
||||
Now, any files created by NetAlertX in `/data/config` will appear in your `/local_data_dir/config` folder.
|
||||
|
||||
This same method works for mounting other things, like custom plugins or enterprise NGINX files, as shown in the commented-out examples in the baseline file.
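For instance, here is a sketch of two such bind mounts (the host paths are placeholders; the container targets follow the conventions used elsewhere in this guide for the DHCP-lease plugin mount and the nginx override):

```yaml
volumes:
  # DHCP leases file exposed read-only for a plugin such as DHCPLSS
  - /path/on/host/dhcpd.leases:/mnt/dhcpd.leases:ro
  # Custom nginx configuration overlaid on the default, read-only
  - /path/on/host/my-netalertx.conf:/tmp/nginx/active-config/netalertx.conf:ro
```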
|
||||
|
||||
## Example Configuration Summaries
|
||||
|
||||
Here are the essential modifications for common alternative setups.
|
||||
|
||||
### Example 2: External `.env` File for Paths
|
||||
|
||||
This method is useful for keeping your paths and other settings separate from your main compose file, making it more portable.
|
||||
|
||||
**`docker-compose.yml` changes:**
|
||||
|
||||
```yaml
|
||||
...
|
||||
services:
|
||||
netalertx:
|
||||
container_name: netalertx
|
||||
# use the below line if you want to test the latest dev image
|
||||
# image: "ghcr.io/jokob-sk/netalertx-dev:latest"
|
||||
image: "ghcr.io/jokob-sk/netalertx:latest"
|
||||
network_mode: "host"
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- ${APP_CONFIG_LOCATION}/netalertx/config:/app/config
|
||||
- ${APP_DATA_LOCATION}/netalertx/db/:/app/db/
|
||||
# (optional) useful for debugging if you have issues setting up the container
|
||||
- ${LOGS_LOCATION}:/app/log
|
||||
# (API: OPTION 1) use for performance
|
||||
- type: tmpfs
|
||||
target: /app/api
|
||||
# (API: OPTION 2) use when debugging issues
|
||||
# - local/path/api:/app/api
|
||||
environment:
|
||||
- TZ=${TZ}
|
||||
- PORT=${PORT}
|
||||
- GRAPHQL_PORT=${GRAPHQL_PORT}
|
||||
|
||||
...
|
||||
```
|
||||
|
||||
|
||||
**`.env` file contents:**
|
||||
|
||||
```env
|
||||
#GLOBAL PATH VARIABLES
|
||||
|
||||
APP_DATA_LOCATION=/path/to/docker_appdata
|
||||
APP_CONFIG_LOCATION=/path/to/docker_config
|
||||
LOGS_LOCATION=/path/to/docker_logs
|
||||
|
||||
#ENVIRONMENT VARIABLES
|
||||
|
||||
TZ=Europe/Paris
|
||||
|
||||
PORT=20211
|
||||
|
||||
#DEVELOPMENT VARIABLES
|
||||
|
||||
DEV_LOCATION=/path/to/local/source/code
|
||||
NETALERTX_NETWORK_MODE=host
|
||||
LISTEN_ADDR=0.0.0.0
|
||||
GRAPHQL_PORT=20212
|
||||
```
|
||||
|
||||
To run the container execute: `sudo docker-compose --env-file /path/to/.env up`
|
||||
|
||||
|
||||
### Example 3: Docker Swarm
|
||||
|
||||
|
||||
This is for deploying on a Docker Swarm cluster. The key differences from the baseline are the removal of `network_mode:` from the service, and the addition of `deploy:` and `networks:` blocks at both the service and top-level.
|
||||
|
||||
Here are the *only* changes you need to make to the baseline compose file to make it Swarm-compatible. Notice how the host network is defined in a swarm setup:
|
||||
|
||||
```yaml
|
||||
services:
|
||||
netalertx:
|
||||
# Use the below line if you want to test the latest dev image
|
||||
# image: "jokobsk/netalertx-dev:latest"
|
||||
image: "ghcr.io/jokob-sk/netalertx:latest"
|
||||
volumes:
|
||||
- /mnt/MYSERVER/netalertx/config:/config:rw
|
||||
- /mnt/MYSERVER/netalertx/db:/netalertx/db:rw
|
||||
- /mnt/MYSERVER/netalertx/logs:/netalertx/front/log:rw
|
||||
environment:
|
||||
- TZ=Europe/London
|
||||
- PORT=20211
|
||||
...
|
||||
# network_mode: ${NETALERTX_NETWORK_MODE:-host} # <-- DELETE THIS LINE
|
||||
...
|
||||
|
||||
# 2. ADD a 'networks:' block INSIDE the service to connect to the external host network.
|
||||
networks:
|
||||
- outside
|
||||
# 3. ADD a 'deploy:' block to manage the service as a swarm replica.
|
||||
deploy:
|
||||
mode: replicated
|
||||
replicas: 1
|
||||
restart_policy:
|
||||
condition: on-failure
|
||||
|
||||
|
||||
# 4. ADD a new top-level 'networks:' block at the end of the file to define 'outside' as the external 'host' network.
|
||||
networks:
|
||||
outside:
|
||||
external:
|
||||
name: "host"
|
||||
|
||||
|
||||
```
|
||||
|
||||
209 dockerfiles/README.md → docs/DOCKER_INSTALLATION.md (Executable file → Normal file)
@@ -1,103 +1,106 @@
|
||||
[](https://hub.docker.com/r/jokobsk/netalertx)
|
||||
[](https://hub.docker.com/r/jokobsk/netalertx)
|
||||
[](https://github.com/jokob-sk/NetAlertX/releases)
|
||||
[](https://discord.gg/NczTUTWyRr)
|
||||
[](https://my.home-assistant.io/redirect/supervisor_add_addon_repository/?repository_url=https%3A%2F%2Fgithub.com%2Falexbelgium%2Fhassio-addons)
|
||||
|
||||
# NetAlertX - Network scanner & notification framework
|
||||
|
||||
| [📑 Docker guide](https://github.com/jokob-sk/NetAlertX/blob/main/dockerfiles/README.md) | [🚀 Releases](https://github.com/jokob-sk/NetAlertX/releases) | [📚 Docs](https://jokob-sk.github.io/NetAlertX/) | [🔌 Plugins](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md) | [🤖 Ask AI](https://gurubase.io/g/netalertx)
|
||||
|----------------------| ----------------------| ----------------------| ----------------------| ----------------------|
|
||||
|
||||
<a href="https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/docs/img/GENERAL/github_social_image.jpg" target="_blank">
|
||||
<img src="https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/docs/img/GENERAL/github_social_image.jpg" width="1000px" />
|
||||
</a>
|
||||
|
||||
Head to [https://netalertx.com/](https://netalertx.com/) for more gifs and screenshots 📷.
|
||||
|
||||
> [!NOTE]
|
||||
> There is also an experimental 🧪 [bare-metal install](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HW_INSTALL.md) method available.
|
||||
|
||||
## 📕 Basic Usage
|
||||
|
||||
> [!WARNING]
|
||||
> You will have to run the container on the `host` network and specify `SCAN_SUBNETS` unless you use other [plugin scanners](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md). The initial scan can take a few minutes, so please wait 5-10 minutes for the initial discovery to finish.
|
||||
|
||||
```bash
|
||||
docker run -d --rm --network=host \
|
||||
-v local_path/config:/app/config \
|
||||
-v local_path/db:/app/db \
|
||||
--mount type=tmpfs,target=/app/api \
|
||||
-e PUID=200 -e PGID=300 \
|
||||
-e TZ=Europe/Berlin \
|
||||
-e PORT=20211 \
|
||||
ghcr.io/jokob-sk/netalertx:latest
|
||||
```
|
||||
|
||||
See alternative [docker-compose examples](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_COMPOSE.md).
|
||||
|
||||
### Docker environment variables
|
||||
|
||||
| Variable | Description | Example Value |
|
||||
| :------------- |:------------------------| -----:|
|
||||
| `PORT` |Port of the web interface | `20211` |
|
||||
| `PUID` |Application User UID | `102` |
|
||||
| `PGID` |Application User GID | `82` |
|
||||
| `LISTEN_ADDR` |Set the specific IP Address for the listener address for the nginx webserver (web interface). This could be useful when using multiple subnets to hide the web interface from all untrusted networks. | `0.0.0.0` |
|
||||
|`TZ` |Time zone to display stats correctly. Find your time zone [here](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) | `Europe/Berlin` |
|
||||
|`LOADED_PLUGINS` | Default [plugins](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md) to load. Plugins cannot be loaded with `APP_CONF_OVERRIDE`, you need to use this variable instead and then specify the plugins settings with `APP_CONF_OVERRIDE`. | `["PIHOLE","ASUSWRT"]` |
|
||||
|`APP_CONF_OVERRIDE` | JSON override for settings (except `LOADED_PLUGINS`). | `{"SCAN_SUBNETS":"['192.168.1.0/24 --interface=eth1']","GRAPHQL_PORT":"20212"}` |
|
||||
|`ALWAYS_FRESH_INSTALL` | ⚠ If `true`, the content of the `/db` & `/config` folders will be deleted. For testing purposes. Can be coupled with [watchtower](https://github.com/containrrr/watchtower) to have an always freshly installed `netalertx`/`netalertx-dev` image. | `true` |
|
||||
|
||||
> You can override the default GraphQL port setting `GRAPHQL_PORT` (set to `20212`) by using the `APP_CONF_OVERRIDE` env variable. `LOADED_PLUGINS` and settings in `APP_CONF_OVERRIDE` can be specified via the UI as well.
|
||||
|
||||
### Docker paths
|
||||
|
||||
> [!NOTE]
|
||||
> See also [Backup strategies](https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md).
|
||||
|
||||
| Required | Path | Description |
|
||||
| :------------- | :------------- | :-------------|
|
||||
| ✅ | `:/app/config` | Folder which will contain the `app.conf` & `devices.csv` ([read about devices.csv](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEVICES_BULK_EDITING.md)) files |
|
||||
| ✅ | `:/app/db` | Folder which will contain the `app.db` database file |
|
||||
| | `:/app/log` | Logs folder useful for debugging if you have issues setting up the container |
|
||||
| | `:/app/api` | A simple [API endpoint](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md) containing static (but regularly updated) json and other files. |
|
||||
| | `:/app/front/plugins/<plugin>/ignore_plugin` | Map a file `ignore_plugin` to ignore a plugin. Plugins can be soft-disabled via settings. More in the [Plugin docs](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md). |
|
||||
| | `:/etc/resolv.conf` | Use a custom `resolv.conf` file for [better name resolution](https://github.com/jokob-sk/NetAlertX/blob/main/docs/REVERSE_DNS.md). |
|
||||
|
||||
> Use separate `db` and `config` directories, do not nest them.
|
||||
|
||||
### Initial setup
|
||||
|
||||
- If unavailable, the app generates a default `app.conf` and `app.db` file on the first run.
|
||||
- The preferred way is to manage the configuration via the Settings section of the UI; if the UI is inaccessible, you can modify [app.conf](https://github.com/jokob-sk/NetAlertX/tree/main/back) in the `/app/config/` folder directly.
|
||||
|
||||
#### Setting up scanners
|
||||
|
||||
You have to specify which network(s) should be scanned. This is done by entering subnets that are accessible from the host. If you use the default `ARPSCAN` plugin, you have to specify at least one valid subnet and interface in the `SCAN_SUBNETS` setting. See the documentation on [How to set up multiple SUBNETS, VLANs and what are limitations](https://github.com/jokob-sk/NetAlertX/blob/main/docs/SUBNETS.md) for troubleshooting and more advanced scenarios.
|
||||
|
||||
If you are running PiHole you can synchronize devices directly. Check the [PiHole configuration guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PIHOLE_GUIDE.md) for details.
|
||||
|
||||
> [!NOTE]
|
||||
> You can bulk-import devices via the [CSV import method](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEVICES_BULK_EDITING.md).
|
||||
|
||||
#### Community guides
|
||||
|
||||
You can read or watch several [community configuration guides](https://github.com/jokob-sk/NetAlertX/blob/main/docs/COMMUNITY_GUIDES.md) in Chinese, Korean, German, or French.
|
||||
|
||||
> Please note these might be outdated. Rely on official documentation first.
|
||||
|
||||
#### Common issues
|
||||
|
||||
- Before creating a new issue, please check if a similar issue was [already resolved](https://github.com/jokob-sk/NetAlertX/issues?q=is%3Aissue+is%3Aclosed).
|
||||
- Check also common issues and [debugging tips](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEBUG_TIPS.md).
|
||||
|
||||
## 💙 Support me
|
||||
|
||||
| [](https://github.com/sponsors/jokob-sk) | [](https://www.buymeacoffee.com/jokobsk) | [](https://www.patreon.com/user?u=84385063) |
|
||||
| --- | --- | --- |
|
||||
|
||||
- Bitcoin: `1N8tupjeCK12qRVU2XrV17WvKK7LCawyZM`
|
||||
- Ethereum: `0x6e2749Cb42F4411bc98501406BdcD82244e3f9C7`
|
||||
|
||||
> 📧 Email me at [netalertx@gmail.com](mailto:netalertx@gmail.com?subject=NetAlertX Donations) if you want to get in touch or if I should add other sponsorship platforms.
|
||||
[](https://hub.docker.com/r/jokobsk/netalertx)
|
||||
[](https://hub.docker.com/r/jokobsk/netalertx)
|
||||
[](https://github.com/jokob-sk/NetAlertX/releases)
|
||||
[](https://discord.gg/NczTUTWyRr)
|
||||
[](https://my.home-assistant.io/redirect/supervisor_add_addon_repository/?repository_url=https%3A%2F%2Fgithub.com%2Falexbelgium%2Fhassio-addons)
|
||||
|
||||
# NetAlertX - Network scanner & notification framework
|
||||
|
||||
| [📑 Docker guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_INSTALLATION.md) | [🚀 Releases](https://github.com/jokob-sk/NetAlertX/releases) | [📚 Docs](https://jokob-sk.github.io/NetAlertX/) | [🔌 Plugins](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md) | [🤖 Ask AI](https://gurubase.io/g/netalertx)
|
||||
|----------------------| ----------------------| ----------------------| ----------------------| ----------------------|
|
||||
|
||||
<a href="https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/docs/img/GENERAL/github_social_image.jpg" target="_blank">
|
||||
<img src="https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/docs/img/GENERAL/github_social_image.jpg" width="1000px" />
|
||||
</a>
|
||||
|
||||
Head to [https://netalertx.com/](https://netalertx.com/) for more gifs and screenshots 📷.
|
||||
|
||||
> [!NOTE]
|
||||
> There is also an experimental 🧪 [bare-metal install](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HW_INSTALL.md) method available.
|
||||
|
||||
## 📕 Basic Usage
|
||||
|
||||
> [!WARNING]
|
||||
> You will have to run the container on the `host` network and specify `SCAN_SUBNETS` unless you use other [plugin scanners](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md). The initial scan can take a few minutes, so please wait 5-10 minutes for the initial discovery to finish.
|
||||
|
||||
```bash
|
||||
docker run -d --rm --network=host \
|
||||
-v /local_data_dir:/data \
|
||||
-v /etc/localtime:/etc/localtime \
|
||||
--tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
|
||||
-e PORT=20211 \
|
||||
-e APP_CONF_OVERRIDE='{"GRAPHQL_PORT":"20214"}' \
|
||||
ghcr.io/jokob-sk/netalertx:latest
|
||||
```
|
||||
|
||||
See alternative [docker-compose examples](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_COMPOSE.md).
|
||||
|
||||
### Default ports
|
||||
|
||||
| Default | Description | How to override |
|
||||
| :------------- |:-------------------------------| ----------------------------------------------------------------------------------:|
|
||||
| `20211` |Port of the web interface | `-e PORT=20222` |
|
||||
| `20212` |Port of the backend API server | `-e APP_CONF_OVERRIDE={"GRAPHQL_PORT":"20214"}` or via the `GRAPHQL_PORT` Setting |
|
||||
|
||||
### Docker environment variables
|
||||
|
||||
| Variable | Description | Example Value |
|
||||
| :------------- |:------------------------| -----:|
|
||||
| `PORT` |Port of the web interface | `20211` |
|
||||
| `LISTEN_ADDR` |Set the specific IP Address for the listener address for the nginx webserver (web interface). This could be useful when using multiple subnets to hide the web interface from all untrusted networks. | `0.0.0.0` |
|
||||
|`LOADED_PLUGINS` | Default [plugins](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md) to load. Plugins cannot be loaded with `APP_CONF_OVERRIDE`, you need to use this variable instead and then specify the plugins settings with `APP_CONF_OVERRIDE`. | `["PIHOLE","ASUSWRT"]` |
|
||||
|`APP_CONF_OVERRIDE` | JSON override for settings (except `LOADED_PLUGINS`). | `{"SCAN_SUBNETS":"['192.168.1.0/24 --interface=eth1']","GRAPHQL_PORT":"20212"}` |
|
||||
|`ALWAYS_FRESH_INSTALL` | ⚠ If `true`, the content of the `/db` & `/config` folders will be deleted. For testing purposes. Can be coupled with [watchtower](https://github.com/containrrr/watchtower) to have an always freshly installed `netalertx`/`netalertx-dev` image. | `true` |
|
||||
|
||||
> You can override the default GraphQL port setting `GRAPHQL_PORT` (set to `20212`) by using the `APP_CONF_OVERRIDE` env variable. `LOADED_PLUGINS` and settings in `APP_CONF_OVERRIDE` can be specified via the UI as well.
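Putting these together, a minimal sketch of the relevant `environment:` section of a compose file, reusing the example values from the table above (adjust the plugin names and settings to your setup; quote the whole value if your YAML parser complains):

```yaml
environment:
  - LOADED_PLUGINS=["PIHOLE","ASUSWRT"]
  # Settings for the loaded plugins and other overrides go into APP_CONF_OVERRIDE
  - APP_CONF_OVERRIDE={"SCAN_SUBNETS":"['192.168.1.0/24 --interface=eth1']","GRAPHQL_PORT":"20212"}
```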
|
||||
|
||||
### Docker paths
|
||||
|
||||
> [!NOTE]
|
||||
> See also [Backup strategies](https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md).
|
||||
|
||||
| Required | Path | Description |
|
||||
| :------------- | :------------- | :-------------|
|
||||
| ✅ | `:/data` | Folder which will contain the `/db/app.db`, `/config/app.conf` & `/config/devices.csv` ([read about devices.csv](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEVICES_BULK_EDITING.md)) files |
|
||||
| ✅ | `/etc/localtime:/etc/localtime:ro` | Ensures the timezone inside the container is the same as on the host server. |
|
||||
| | `:/tmp/log` | Logs folder useful for debugging if you have issues setting up the container |
|
||||
| | `:/tmp/api` | The [API endpoint](https://github.com/jokob-sk/NetAlertX/blob/main/docs/API.md) containing static (but regularly updated) json and other files. Path configurable via `NETALERTX_API` environment variable. |
|
||||
| | `:/app/front/plugins/<plugin>/ignore_plugin` | Map a file `ignore_plugin` to ignore a plugin. Plugins can be soft-disabled via settings. More in the [Plugin docs](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PLUGINS.md). |
|
||||
| | `:/etc/resolv.conf` | Use a custom `resolv.conf` file for [better name resolution](https://github.com/jokob-sk/NetAlertX/blob/main/docs/REVERSE_DNS.md). |
|
||||
|
||||
> Use separate `db` and `config` directories, do not nest them.
|
||||
|
||||
### Initial setup
|
||||
|
||||
- If unavailable, the app generates a default `app.conf` and `app.db` file on the first run.
|
||||
- The preferred way is to manage the configuration via the Settings section of the UI; if the UI is inaccessible, you can modify [app.conf](https://github.com/jokob-sk/NetAlertX/tree/main/back) in the `/data/config/` folder directly.
|
||||
|
||||
#### Setting up scanners
|
||||
|
||||
You have to specify which network(s) should be scanned. This is done by entering subnets that are accessible from the host. If you use the default `ARPSCAN` plugin, you have to specify at least one valid subnet and interface in the `SCAN_SUBNETS` setting. See the documentation on [How to set up multiple SUBNETS, VLANs and what are limitations](https://github.com/jokob-sk/NetAlertX/blob/main/docs/SUBNETS.md) for troubleshooting and more advanced scenarios.
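For illustration only, a single-subnet value in `app.conf` could look like the following (the subnet and interface are placeholders taken from the example above; replace them with your own):

```
SCAN_SUBNETS=['192.168.1.0/24 --interface=eth1']
```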
|
||||
|
||||
If you are running PiHole you can synchronize devices directly. Check the [PiHole configuration guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/PIHOLE_GUIDE.md) for details.
|
||||
|
||||
> [!NOTE]
|
||||
> You can bulk-import devices via the [CSV import method](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEVICES_BULK_EDITING.md).
|
||||
|
||||
#### Community guides
|
||||
|
||||
You can read or watch several [community configuration guides](https://github.com/jokob-sk/NetAlertX/blob/main/docs/COMMUNITY_GUIDES.md) in Chinese, Korean, German, or French.
|
||||
|
||||
> Please note these might be outdated. Rely on official documentation first.
|
||||
|
||||
#### Common issues
|
||||
|
||||
- Before creating a new issue, please check if a similar issue was [already resolved](https://github.com/jokob-sk/NetAlertX/issues?q=is%3Aissue+is%3Aclosed).
|
||||
- Check also common issues and [debugging tips](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DEBUG_TIPS.md).
|
||||
|
||||
## 💙 Support me
|
||||
|
||||
| [](https://github.com/sponsors/jokob-sk) | [](https://www.buymeacoffee.com/jokobsk) | [](https://www.patreon.com/user?u=84385063) |
|
||||
| --- | --- | --- |
|
||||
|
||||
- Bitcoin: `1N8tupjeCK12qRVU2XrV17WvKK7LCawyZM`
|
||||
- Ethereum: `0x6e2749Cb42F4411bc98501406BdcD82244e3f9C7`
|
||||
|
||||
> 📧 Email me at [netalertx@gmail.com](mailto:netalertx@gmail.com?subject=NetAlertX Donations) if you want to get in touch or if I should add other sponsorship platforms.
|
||||
203 docs/DOCKER_MAINTENANCE.md (Normal file)
@@ -0,0 +1,203 @@
|
||||
# The NetAlertX Container Operator's Guide
|
||||
|
||||
> [!WARNING]
|
||||
> ⚠️ **Important:** The docker-compose file has recently changed. Carefully read the [Migration guide](https://jokob-sk.github.io/NetAlertX/MIGRATION/?h=migrat#12-migration-from-netalertx-v25524) for detailed instructions.
|
||||
|
||||
This guide assumes you are starting with the official `docker-compose.yml` file provided with the project. We strongly recommend you start with or migrate to this file as your baseline and modify it to suit your specific needs (e.g., changing file paths). While there are many ways to configure NetAlertX, the default file is designed to meet the mandatory security baseline with layer-2 networking capabilities while operating securely and without startup warnings.
|
||||
|
||||
This guide provides direct, concise solutions for common NetAlertX administrative tasks. It is structured to help you identify a problem, implement the solution, and understand the details.
|
||||
|
||||
## Guide Contents
|
||||
|
||||
- Using a Local Folder for Configuration
|
||||
- Migrating from a Local Folder to a Docker Volume
|
||||
- Applying a Custom Nginx Configuration
|
||||
- Mounting Additional Files for Plugins
|
||||
|
||||
|
||||
> [!NOTE]
|
||||
>
|
||||
> Other relevant resources
|
||||
> - [Fixing Permission Issues](./FILE_PERMISSIONS.md)
|
||||
> - [Handling Backups](./BACKUPS.md)
|
||||
> - [Accessing Application Logs](./LOGGING.md)
|
||||
|
||||
---
|
||||
|
||||
## Task: Using a Local Folder for Configuration
|
||||
|
||||
### Problem
|
||||
|
||||
You want to edit your `app.conf` and other configuration files directly from your host machine, instead of using a Docker-managed volume.
|
||||
|
||||
### Solution
|
||||
|
||||
1. Stop the container:
|
||||
|
||||
```bash
|
||||
docker-compose down
|
||||
```
|
||||
2. (Optional but Recommended) Back up your data using the method in Part 1.
|
||||
3. Create a local folder on your host machine (e.g., `/data/netalertx_config`).
|
||||
4. Edit `docker-compose.yml`:
|
||||
|
||||
* **Comment out** the `netalertx_config` volume entry.
|
||||
* **Uncomment** and **set the path** for the "Example custom local folder" bind mount entry.
|
||||
|
||||
```yaml
|
||||
...
|
||||
volumes:
|
||||
# - type: volume
|
||||
# source: netalertx_config
|
||||
# target: /data/config
|
||||
# read_only: false
|
||||
...
|
||||
# Example custom local folder called /data/netalertx_config
|
||||
- type: bind
|
||||
source: /data/netalertx_config
|
||||
target: /data/config
|
||||
read_only: false
|
||||
...
|
||||
```
|
||||
5. (Optional) Restore your backup.
|
||||
6. Restart the container:
|
||||
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
### About This Method
|
||||
|
||||
This replaces the Docker-managed volume with a "bind mount." This is a direct mapping between a folder on your host computer (`/data/netalertx_config`) and a folder inside the container (`/data/config`), allowing you to edit the files directly.
|
||||
|
||||
---
|
||||
|
||||
## Task: Migrating from a Local Folder to a Docker Volume
|
||||
|
||||
### Problem
|
||||
|
||||
You are currently using a local folder (bind mount) for your configuration (e.g., `/data/netalertx_config`) and want to switch to the recommended Docker-managed volume (`netalertx_config`).
|
||||
|
||||
### Solution
|
||||
|
||||
1. Stop the container:
|
||||
|
||||
```bash
|
||||
docker-compose down
|
||||
```
|
||||
2. Edit `docker-compose.yml`:
|
||||
|
||||
* **Comment out** the bind mount entry for your local folder.
|
||||
* **Uncomment** the `netalertx_config` volume entry.
|
||||
|
||||
```yaml
|
||||
...
|
||||
volumes:
|
||||
- type: volume
|
||||
source: netalertx_config
|
||||
target: /data/config
|
||||
read_only: false
|
||||
...
|
||||
# Example custom local folder called /data/netalertx_config
|
||||
# - type: bind
|
||||
# source: /data/netalertx_config
|
||||
# target: /data/config
|
||||
# read_only: false
|
||||
...
|
||||
```
|
||||
3. (Optional) Initialize the volume:
|
||||
|
||||
```bash
|
||||
docker-compose up -d && docker-compose down
|
||||
```
|
||||
4. Run the migration command (**replace `/data/netalertx_config` with your actual path**):
|
||||
|
||||
```bash
|
||||
docker run --rm -v netalertx_config:/config -v /data/netalertx_config:/local-config alpine \
|
||||
sh -c "tar -C /local-config -c . | tar -C /config -x"
|
||||
```
|
||||
5. Restart the container:
|
||||
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
### About This Method
|
||||
|
||||
This uses a temporary `alpine` container that mounts *both* your source folder (`/local-config`) and destination volume (`/config`). The `tar ... | tar ...` command safely copies all files, including hidden ones, preserving structure.
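If you want to double-check the result before restarting, a similar throwaway container can list the volume's contents (a sketch using the same volume name as above):

```bash
docker run --rm -v netalertx_config:/config alpine ls -la /config
```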
|
||||
|
||||
---
|
||||
|
||||
## Task: Applying a Custom Nginx Configuration
|
||||
|
||||
### Problem
|
||||
|
||||
You need to override the default Nginx configuration to add features like LDAP, SSO, or custom SSL settings.
|
||||
|
||||
### Solution
|
||||
|
||||
1. Stop the container:
|
||||
|
||||
```bash
|
||||
docker-compose down
|
||||
```
|
||||
2. Create your custom config file on your host (e.g., `/data/my-netalertx.conf`).
|
||||
3. Edit `docker-compose.yml`:
|
||||
|
||||
```yaml
|
||||
...
|
||||
# Use a custom Enterprise-configured nginx config for ldap or other settings
|
||||
- /data/my-netalertx.conf:/tmp/nginx/active-config/netalertx.conf:ro
|
||||
...
|
||||
```
|
||||
4. Restart the container:
|
||||
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
### About This Method
|
||||
|
||||
Docker’s bind mount overlays your host file (`my-netalertx.conf`) on top of the default file inside the container. The container remains read-only, but Nginx reads your file as if it were the default.
|
||||
|
||||
---
|
||||
|
||||
## Task: Mounting Additional Files for Plugins
|
||||
|
||||
### Problem
|
||||
|
||||
A plugin (like `DHCPLSS`) needs to read a file from your host machine (e.g., `/var/lib/dhcp/dhcpd.leases`).
|
||||
|
||||
### Solution
|
||||
|
||||
1. Stop the container:
|
||||
|
||||
```bash
|
||||
docker-compose down
|
||||
```
|
||||
2. Edit `docker-compose.yml` and add a new line under the `volumes:` section:
|
||||
|
||||
```yaml
|
||||
...
|
||||
volumes:
|
||||
...
|
||||
# Mount for DHCPLSS plugin
|
||||
- /var/lib/dhcp/dhcpd.leases:/mnt/dhcpd.leases:ro
|
||||
...
|
||||
```
|
||||
3. Restart the container:
|
||||
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
4. In the NetAlertX web UI, configure the plugin to read from:
|
||||
|
||||
```
|
||||
/mnt/dhcpd.leases
|
||||
```
|
||||
|
||||
### About This Method
|
||||
|
||||
This maps your host file to a new, read-only (`:ro`) location inside the container. The plugin can then safely read this file without exposing anything else on your host filesystem.
|
||||
|
||||
|
||||
104 docs/DOCKER_PORTAINER.md (Executable file)
@@ -0,0 +1,104 @@
|
||||
# Deploying NetAlertX in Portainer (via Stacks)
|
||||
|
||||
This guide shows you how to set up **NetAlertX** using Portainer’s **Stacks** feature.
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## 1. Prepare Your Host
|
||||
|
||||
Before deploying, make sure you have a folder on your Docker host for NetAlertX data. Replace `APP_FOLDER` with your preferred location, for example `/local_data_dir` here:
|
||||
|
||||
```bash
|
||||
mkdir -p /local_data_dir/netalertx/config
|
||||
mkdir -p /local_data_dir/netalertx/db
|
||||
mkdir -p /local_data_dir/netalertx/log
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 2. Open Portainer Stacks
|
||||
|
||||
1. Log in to your **Portainer UI**.
|
||||
2. Navigate to **Stacks** → **Add stack**.
|
||||
3. Give your stack a name (e.g., `netalertx`).
|
||||
|
||||
---
|
||||
|
||||
## 3. Paste the Stack Configuration
|
||||
|
||||
Copy and paste the following YAML into the **Web editor**:
|
||||
|
||||
```yaml
|
||||
services:
|
||||
netalertx:
|
||||
container_name: netalertx
|
||||
# Use this line for stable release
|
||||
image: "ghcr.io/jokob-sk/netalertx:latest"
|
||||
# Or, use this for the latest development build
|
||||
# image: "ghcr.io/jokob-sk/netalertx-dev:latest"
|
||||
network_mode: "host"
|
||||
restart: unless-stopped
|
||||
cap_drop: # Drop all capabilities for enhanced security
|
||||
- ALL
|
||||
cap_add: # Re-add necessary capabilities
|
||||
- NET_RAW
|
||||
- NET_ADMIN
|
||||
- NET_BIND_SERVICE
|
||||
volumes:
|
||||
- ${APP_FOLDER}/netalertx/config:/data/config
|
||||
- ${APP_FOLDER}/netalertx/db:/data/db
|
||||
# to sync with system time
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
tmpfs:
|
||||
# All writable runtime state resides under /tmp; comment out to persist logs between restarts
|
||||
- "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
|
||||
environment:
|
||||
- PORT=${PORT}
|
||||
- APP_CONF_OVERRIDE=${APP_CONF_OVERRIDE}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 4. Configure Environment Variables
|
||||
|
||||
In the **Environment variables** section of Portainer, add the following:
|
||||
|
||||
* `APP_FOLDER=/local_data_dir` (or wherever you created the directories in step 1)
|
||||
* `PORT=22022` (or another port if needed)
|
||||
* `APP_CONF_OVERRIDE={"GRAPHQL_PORT":"22023"}` (optional advanced settings, otherwise the backend API server PORT defaults to `20212`)
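For reference, the same values as plain `KEY=value` pairs (these are the example values from above; adjust them to your environment):

```
APP_FOLDER=/local_data_dir
PORT=22022
APP_CONF_OVERRIDE={"GRAPHQL_PORT":"22023"}
```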
|
||||
|
||||
---
|
||||
|
||||
## 5. Ensure permissions
|
||||
|
||||
> [!TIP]
|
||||
> If you are facing permission issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files that are stored in the `/local_data_dir/db` and `/local_data_dir/config` folders (replace `local_data_dir` with the location where your `/db` and `/config` folders are located).
|
||||
>
|
||||
> `sudo chown -R 20211:20211 /local_data_dir`
|
||||
>
|
||||
> `sudo chmod -R a+rwx /local_data_dir`
|
||||
>
|
||||
|
||||
|
||||
---
|
||||
|
||||
## 6. Deploy the Stack
|
||||
|
||||
1. Scroll down and click **Deploy the stack**.
|
||||
2. Portainer will pull the image and start NetAlertX.
|
||||
3. Once running, access the app at:
|
||||
|
||||
```
|
||||
http://<your-docker-host-ip>:22022
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 7. Verify and Troubleshoot
|
||||
|
||||
* Check logs via Portainer → **Containers** → `netalertx` → **Logs**.
|
||||
* Logs are stored under `${APP_FOLDER}/netalertx/log` if you enabled a log volume (see the snippet below).
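If you want file-based logs, one option (a sketch; the host path and the `/tmp/log` target are based on the log locations discussed elsewhere in this documentation, and you may also need to adjust the `tmpfs` mount as described in the logging guide) is to add a log bind mount to the stack's `volumes:` section:

```yaml
volumes:
  - ${APP_FOLDER}/netalertx/log:/tmp/log
```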
|
||||
|
||||
Once the application is running, configure it by reading the [initial setup](INITIAL_SETUP.md) guide, or [troubleshoot common issues](COMMON_ISSUES.md).
|
||||
@@ -41,15 +41,7 @@ Use the following Compose snippet to deploy NetAlertX with a **static LAN IP** a
|
||||
services:
|
||||
netalertx:
|
||||
image: ghcr.io/jokob-sk/netalertx:latest
|
||||
ports:
|
||||
- 20211:20211
|
||||
volumes:
|
||||
- /mnt/YOUR_SERVER/netalertx/config:/app/config:rw
|
||||
- /mnt/YOUR_SERVER/netalertx/db:/netalertx/app/db:rw
|
||||
- /mnt/YOUR_SERVER/netalertx/logs:/netalertx/app/log:rw
|
||||
environment:
|
||||
- TZ=Europe/London
|
||||
- PORT=20211
|
||||
...
|
||||
networks:
|
||||
swarm-ipvlan:
|
||||
ipv4_address: 192.168.1.240 # ⚠️ Choose a free IP from your LAN
|
||||
|
||||
@@ -1,23 +1,96 @@
|
||||
# Managing File Permissions for NetAlertX on Nginx with Docker
|
||||
# Managing File Permissions for NetAlertX on a Read-Only Container
|
||||
|
||||
Sometimes, permission issues arise if your existing host directories were created by a previous container running as root or another UID. The container will fail to start with "Permission Denied" errors.
|
||||
|
||||
> [!TIP]
|
||||
> If you are facing permission issues, try to start the container without mapping your volumes. If that works, then the issue is permission related. You can try e.g., the following command:
|
||||
> ```
|
||||
> docker run -d --rm --network=host \
|
||||
> -e TZ=Europe/Berlin \
|
||||
> -e PUID=200 -e PGID=200 \
|
||||
> -e PORT=20211 \
|
||||
> ghcr.io/jokob-sk/netalertx:latest
|
||||
> ```
|
||||
NetAlertX runs on an Nginx web server. On Alpine Linux, Nginx operates as the `nginx` user (if PUID and GID environment variables are not specified, nginx user UID will be set to 102, and its supplementary group `www-data` ID to 82). Consequently, files accessed or written by the NetAlertX application are owned by `nginx:www-data`.
|
||||
> NetAlertX runs in a **secure, read-only Alpine-based container** under a dedicated `netalertx` user (UID 20211, GID 20211). All writable paths are either mounted as **persistent volumes** or **`tmpfs` filesystems**. This ensures consistent file ownership and prevents privilege escalation.
|
||||
|
||||
Upon starting, NetAlertX changes nginx user UID and www-data GID to specified values (or defaults), and the ownership of files on the host system mapped to `/app/config` and `/app/db` in the container to `nginx:www-data`. This ensures that Nginx can access and write to these files. Since the user in the Docker container is mapped to a user on the host system by ID:GID, the files in `/app/config` and `/app/db` on the host system are owned by a user with the same ID and GID (defaults are ID 102 and GID 82). On different systems, this ID:GID may belong to different users, or there may not be a group with ID 82 at all.
|
||||
Try starting the container with all data in non-persistent volumes. If this works, the issue might be related to the permissions of your persistent data mount locations on your server.
|
||||
|
||||
The option to set a specific user UID and GID can be useful for host system users needing to access these files (e.g., backup scripts).
|
||||
```bash
|
||||
docker run --rm --network=host \
|
||||
-v /etc/localtime:/etc/localtime:ro \
|
||||
--tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
|
||||
-e PORT=20211 \
|
||||
ghcr.io/jokob-sk/netalertx:latest
|
||||
```
|
||||
|
||||
> [!WARNING]
|
||||
> The above should be only used as a test - once the container restarts, all data is lost.
|
||||
|
||||
---
|
||||
|
||||
## Writable Paths
|
||||
|
||||
NetAlertX requires certain paths to be writable at runtime. These paths should be mounted either as **host volumes** or **`tmpfs`** in your `docker-compose.yml` or `docker run` command:
|
||||
|
||||
| Path | Purpose | Notes |
|
||||
| ------------------------------------ | ----------------------------------- | ------------------------------------------------------ |
|
||||
| `/data/config` | Application configuration | Persistent volume recommended |
|
||||
| `/data/db` | Database files | Persistent volume recommended |
|
||||
| `/tmp/log` | Logs | Lives under `/tmp`; optional host bind to retain logs |
|
||||
| `/tmp/api` | API cache | Subdirectory of `/tmp` |
|
||||
| `/tmp/nginx/active-config` | Active nginx configuration override | Mount `/tmp` (or override specific file) |
|
||||
| `/tmp/run` | Runtime directories for nginx & PHP | Subdirectory of `/tmp` |
|
||||
| `/tmp` | PHP session save directory | Backed by `tmpfs` for runtime writes |
|
||||
|
||||
> Mounting `/tmp` as `tmpfs` automatically covers all of its subdirectories (`log`, `api`, `run`, `nginx/active-config`, etc.).
|
||||
|
||||
> All these paths will have **UID 20211 / GID 20211** inside the container. Files on the host will appear owned by `20211:20211`.
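To verify this from the host side, you can list the numeric owners of a bind-mounted data directory (the path below is just an example):

```bash
ls -ln /local_data_dir
```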
|
||||
|
||||
---
|
||||
|
||||
### Solution
|
||||
|
||||
1. **Run the container once as root** (`--user "0"`) to allow it to correct permissions automatically:
|
||||
|
||||
```bash
|
||||
docker run -it --rm --name netalertx --user "0" \
|
||||
-v /local_data_dir:/data \
|
||||
--tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
|
||||
ghcr.io/jokob-sk/netalertx:latest
|
||||
```
|
||||
|
||||
2. Wait for logs showing **permissions being fixed**. The container will then **hang intentionally**.
|
||||
3. Press **Ctrl+C** to stop the container.
|
||||
4. Start the container normally with your `docker-compose.yml` or `docker run` command.
|
||||
|
||||
> The container startup script detects `root` and runs `chown -R 20211:20211` on all volumes, fixing ownership for the secure `netalertx` user.
|
||||
|
||||
> [!TIP]
|
||||
> If you are facing permission issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files that are stored in the `/local_data_dir/db` and `/local_data_dir/config` folders (replace `local_data_dir` with the location where your `/db` and `/config` folders are located).
|
||||
>
|
||||
> `sudo chown -R 20211:20211 /local_data_dir`
|
||||
>
|
||||
> `sudo chmod -R a+rwx /local_data_dir`
|
||||
>
|
||||
|
||||
---
|
||||
|
||||
## Example: docker-compose.yml with `tmpfs`
|
||||
|
||||
```yaml
|
||||
services:
|
||||
netalertx:
|
||||
container_name: netalertx
|
||||
image: "ghcr.io/jokob-sk/netalertx"
|
||||
network_mode: "host"
|
||||
cap_drop: # Drop all capabilities for enhanced security
|
||||
- ALL
|
||||
cap_add: # Add only the necessary capabilities
|
||||
- NET_ADMIN # Required for ARP scanning
|
||||
- NET_RAW # Required for raw socket operations
|
||||
- NET_BIND_SERVICE # Required to bind to privileged ports (nbtscan)
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- /local_data_dir:/data
|
||||
- /etc/localtime:/etc/localtime
|
||||
environment:
|
||||
- PORT=20211
|
||||
tmpfs:
|
||||
- "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
|
||||
```
|
||||
|
||||
> This setup ensures all writable paths are either in `tmpfs` or host-mounted, and the container never writes outside of controlled volumes.
|
||||
|
||||
### Permissions Table for Individual Folders
|
||||
|
||||
| Folder | User | User ID | Group | Group ID | Permissions | Notes |
|
||||
|----------------|--------|---------|-----------|----------|-------------|---------------------------------------------------------------------|
|
||||
| `/app/config` | nginx | PUID (default 102) | www-data | PGID (default 82) | rwxr-xr-x | Ensure `nginx` can read/write; other users can read if in `www-data` |
|
||||
| `/app/db` | nginx | PUID (default 102) | www-data | PGID (default 82) | rwxr-xr-x | Same as above |
|
||||
|
||||
@@ -31,10 +31,11 @@ To improve presence accuracy and reduce false offline states:
|
||||
|
||||
### ✅ Increase ARP Scan Timeout
|
||||
|
||||
Extend the ARP scanner timeout (`ARPSCAN_RUN_TIMEOUT`) and duration (`ARPSCAN_DURATION`) to ensure full subnet coverage:
|
||||
|
||||
```env
|
||||
ARPSCAN_RUN_TIMEOUT=360
|
||||
ARPSCAN_DURATION=30
|
||||
```
|
||||
|
||||
> Adjust based on your network size and device count.
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
# NetAlertX Community Helper Scripts Overview
|
||||
# Community Helper Scripts Overview
|
||||
|
||||
This page provides an overview of community-contributed scripts for NetAlertX. These scripts are not actively maintained and are provided as-is.
|
||||
|
||||
@@ -14,8 +14,8 @@ You can find all scripts in this [scripts GitHub folder](https://github.com/joko
|
||||
|
||||
## Important Notes
|
||||
|
||||
> [!NOTE]
|
||||
> These scripts are community-supplied and not actively maintained. Use at your own discretion.
|
||||
|
||||
|
||||
For detailed usage instructions, refer to each script's documentation in each [scripts GitHub folder](https://github.com/jokob-sk/NetAlertX/tree/main/scripts).
|
||||
|
||||
|
||||
@@ -5,51 +5,85 @@ To download and install NetAlertX on the hardware/server directly use the `curl`
|
||||
> [!NOTE]
|
||||
> This is an Experimental feature 🧪 and it relies on community support.
|
||||
>
|
||||
> 🙏 Looking for maintainers for this installation method 🙂 Current community volunteers:
|
||||
|
||||
> - [slammingprogramming](https://github.com/slammingprogramming)
|
||||
> - [ingoratsdorf](https://github.com/ingoratsdorf)
|
||||
>
|
||||
> There is no guarantee that the install script or any other script will gracefully handle other installed software.
|
||||
> Data loss is a possibility, **it is recommended to install NetAlertX using the supplied Docker image**.
|
||||
|
||||
|
||||
> [!WARNING]
|
||||
> A warning to the installation method below: Piping to bash is [controversial](https://pi-hole.net/2016/07/25/curling-and-piping-to-bash) and may be dangerous, as you cannot see the code that's about to be executed on your system.
|
||||
|
||||
Alternatively you can download the installation script `install/install.debian.sh` from the repository and check the code yourself (beware: other scripts are downloaded too - only from this repo).
|
||||
If you trust this repo, you can download the install script via one of the methods (curl/wget) below and it will do its best to install NetAlertX on your system.
|
||||
|
||||
Alternatively you can download the installation script from the repository and check the code yourself.
|
||||
|
||||
NetAlertX will be installed in `/app` and run on port number `20211`.
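Once the install script has completed and NetAlertX has been started, a quick sanity check from the same machine (assuming the default port) is:

```bash
curl -I http://localhost:20211
```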
|
||||
|
||||
Some facts about what the bare-metal install setup will change or install, and where (this list may not be exhaustive):
|
||||
|
||||
- dependencies will be installed from the respective system repos
|
||||
- required python modules will be installed
|
||||
- `/app` directory will be deleted and newly created
|
||||
- `/app` will contain the whole repository (downloaded by `install/install.debian.sh`)
|
||||
- `/app` will contain the whole repository (downloaded by the install script)
|
||||
- The default NGINX site `/etc/nginx/sites-enabled/default` will be disabled (sym-link deleted or backed up to `sites-available`)
|
||||
- `/var/www/html/netalertx` directory will be deleted and newly created
|
||||
- `/etc/nginx/conf.d/netalertx.conf` will be sym-linked to `/app/install/netalertx.debian.conf`
|
||||
- `/etc/nginx/conf.d/netalertx.conf` will be sym-linked to the appropriate installer location (depending on your system installer script)
|
||||
- Some files (IEEE device vendors info, ...) will be created in the directory where the installation script is executed
|
||||
|
||||
## Limitations
|
||||
|
||||
- No system service is provided. NetAlertX must be started using `/app/install/start.debian.sh`.
|
||||
- No system service is provided. NetAlertX must be started using `/app/install/<system>/start.<system>.sh`.
|
||||
- No checks for other running software are done.
|
||||
- Only tested to work on Debian Bookworm (Debian 12).
|
||||
- Only tested to work on the system listed in the install directory.
|
||||
- **EXPERIMENTAL** and not recommended way to install NetAlertX.
|
||||
|
||||
## 📥 Installation via CURL
|
||||
|
||||
> [!TIP]
|
||||
|
||||
> If the below fails try grabbing and installing one of the [previous releases](https://github.com/jokob-sk/NetAlertX/releases) and run the installation from the zip package.
|
||||
|
||||
```bash
|
||||
curl -o install.debian.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/install.debian.sh && sudo chmod +x install.debian.sh && sudo ./install.debian.sh
|
||||
```
|
||||
|
||||
## 📥 Installation via WGET
|
||||
|
||||
```bash
|
||||
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/install.debian.sh -O install.debian.sh && sudo chmod +x install.debian.sh && sudo ./install.debian.sh
|
||||
```
|
||||
|
||||
These commands will download the relevant install script from the GitHub repository, make it executable with `chmod`, and then run it.
|
||||
|
||||
Make sure you have the necessary permissions to execute the script.
|
||||
|
||||
|
||||
## 📥 Debian 12 (Bookworm)
|
||||
|
||||
### Installation via curl
|
||||
```bash
|
||||
curl -o install.debian12.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/debian12/install.debian12.sh && sudo chmod +x install.debian12.sh && sudo ./install.debian12.sh
|
||||
```
|
||||
|
||||
### Installation via wget
|
||||
|
||||
```bash
|
||||
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/debian12/install.debian12.sh -O install.debian12.sh && sudo chmod +x install.debian12.sh && sudo ./install.debian12.sh
|
||||
```
|
||||
|
||||
## 📥 Ubuntu 24 (Noble Numbat)
|
||||
|
||||
> [!NOTE]
|
||||
> Maintained by [ingoratsdorf](https://github.com/ingoratsdorf)
|
||||
|
||||
### Installation via curl
|
||||
```bash
|
||||
curl -o install.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.sh && sudo chmod +x install.sh && sudo ./install.sh
|
||||
```
|
||||
|
||||
### Installation via wget
|
||||
|
||||
```bash
|
||||
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.sh -O install.sh && sudo chmod +x install.sh && sudo ./install.sh
|
||||
```
|
||||
|
||||
## 📥 Bare Metal - Proxmox
|
||||
|
||||
> [!NOTE]
|
||||
> Use this on a clean LXC/VM for Debian 13 OR Ubuntu 24.
|
||||
> The script will detect the OS and build accordingly.
|
||||
> Maintained by [JVKeller](https://github.com/JVKeller)
|
||||
|
||||
### Installation via wget
|
||||
```bash
|
||||
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/proxmox/proxmox-install-netalertx.sh -O proxmox-install-netalertx.sh && chmod +x proxmox-install-netalertx.sh && ./proxmox-install-netalertx.sh
|
||||
```
|
||||
|
||||
@@ -4,7 +4,7 @@
|
||||
|
||||
NetAlertX can be installed several ways. The best supported option is Docker, followed by a supervised Home Assistant instance, as an Unraid app, and lastly, on bare metal.
|
||||
|
||||
- [[Installation] Docker (recommended)](https://github.com/jokob-sk/NetAlertX/blob/main/dockerfiles/README.md)
|
||||
- [[Installation] Docker (recommended)](https://github.com/jokob-sk/NetAlertX/blob/main/docs/DOCKER_INSTALLATION.md)
|
||||
- [[Installation] Home Assistant](https://github.com/alexbelgium/hassio-addons/tree/master/netalertx)
|
||||
- [[Installation] Unraid App](https://unraid.net/community/apps)
|
||||
- [[Installation] Bare metal (experimental - looking for maintainers)](https://github.com/jokob-sk/NetAlertX/blob/main/docs/HW_INSTALL.md)
|
||||
|
||||
@@ -1,17 +1,15 @@
|
||||
# Logging
|
||||
|
||||
|
||||
NetAlertX comes with several logs that help to identify application issues. These include nginx, app, and plugin logs. For plugin-specific log debugging, please read the [Debug Plugins](./DEBUG_PLUGINS.md) guide.
|
||||
|
||||
> [!NOTE]
|
||||
> When debugging any issue, increase the `LOG_LEVEL` Setting as per the [Debug tips](./DEBUG_TIPS.md) documentation.
|
||||
|
||||
## Main logs
|
||||
|
||||
You can find most of the logs exposed in the UI under _Maintenance -> Logs_.
|
||||
|
||||
If the UI is inaccessible, you can access them under `/app/log`.
|
||||
If the UI is inaccessible, you can access them under `/tmp/log`.
|
||||
|
||||

|
||||
|
||||
@@ -24,3 +22,54 @@ If a Plugin supplies data to the main app it's done either vie a SQL query or vi
|
||||
The data is in most of the cases then displayed in the application under _Integrations -> Plugins_ (or _Device -> Plugins_ if the plugin is supplying device-specific data).
|
||||
|
||||

|
||||
|
||||
## Viewing Logs on the File System
|
||||
|
||||
You will not find traditional log files on the filesystem. The container is `read-only` and writes logs to a temporary in-memory filesystem (`tmpfs`) for security and performance. The application follows container best practices by writing all logs to the standard output (`stdout`) and standard error (`stderr`) streams. Docker's logging driver (set in `docker-compose.yml`) captures this stream automatically, allowing you to access it with the `docker logs <container_name>` command.
|
||||
|
||||
* **To see all logs since the last restart:**
|
||||
|
||||
```bash
|
||||
docker logs netalertx
|
||||
```
|
||||
* **To watch the logs live (live feed):**
|
||||
|
||||
```bash
|
||||
docker logs -f netalertx
|
||||
```
|
||||
## Enabling Persistent File-Based Logs
|
||||
|
||||
The default logs are erased every time the container restarts because they are stored in temporary in-memory storage (`tmpfs`). If you need to keep a persistent, file-based log history, follow the steps below.
|
||||
|
||||
> [!NOTE]
|
||||
> This might lead to performance degradation so this approach is only suggested when actively debugging issues. See the [Performance optimization](./PERFORMANCE.md) documentation for details.
|
||||
|
||||
1. Stop the container:
|
||||
|
||||
```bash
|
||||
docker-compose down
|
||||
```
|
||||
|
||||
2. Edit your `docker-compose.yml` file:
|
||||
|
||||
* **Comment out** the `/tmp/log` line under the `tmpfs:` section.
|
||||
* **Uncomment** the "Retain logs" line under the `volumes:` section and set your desired host path.
|
||||
|
||||
```yaml
|
||||
...
|
||||
tmpfs:
|
||||
# - "/tmp/log:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
|
||||
...
|
||||
volumes:
|
||||
...
|
||||
# Retain logs - comment out tmpfs /tmp/log if you want to retain logs between container restarts
|
||||
- /home/adam/netalertx_logs:/tmp/log
|
||||
...
|
||||
```
|
||||
3. Restart the container:
|
||||
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
This change stops Docker from mounting a temporary in-memory volume at `/tmp/log`. Instead, it "bind mounts" a persistent folder from your host computer (e.g., `/data/netalertx_logs`) to that *same location* inside the container.
|
||||
|
||||
@@ -1,138 +1,292 @@
|
||||
# Migration from PiAlert to NetAlertX
|
||||
# Migration
|
||||
|
||||
> [!WARNING]
|
||||
> Follow this guide only after you have downloaded and started a version of NetAlertX prior to v25.6.7 (e.g. `docker pull ghcr.io/jokob-sk/netalertx:25.5.24`) at least once after previously using the PiAlert image. Later versions don't support migration, and devices and settings will have to be migrated manually, e.g. via [CSV import](./DEVICES_BULK_EDITING.md).
|
||||
When upgrading from older versions of NetAlertX (or PiAlert by jokob-sk), follow the migration steps below to ensure your data and configuration are properly transferred.
|
||||
|
||||
## STEPS:
|
||||
> [!TIP]
|
||||
> It's always important to have a [backup strategy](./BACKUPS.md) in place.
|
||||
|
||||
> [!TIP]
|
||||
> In short: The application will auto-migrate the database, config, and all device information. A ticker message on top will be displayed until you update your docker mount points. It's always good to have a [backup strategy](./BACKUPS.md) in place.
|
||||
## Migration scenarios
|
||||
|
||||
1. Back up your current config and database (optionally also `devices.csv`) (see the tip below if facing issues)
|
||||
2. Stop the container
|
||||
3. Update the Docker file mount locations in your `docker-compose.yml` or docker run command (see below **New Docker mount locations**).
4. Rename the DB and conf files to `app.db` and `app.conf` and place them in the appropriate location.
5. Start the container
|
||||
- You are running PiAlert (by jokob-sk)
|
||||
→ [Read the 1.1 Migration from PiAlert to NetAlertX `v25.5.24`](#11-migration-from-pialert-to-netalertx-v25524)
|
||||
|
||||
- You are running NetAlertX (by jokob-sk) `25.5.24` or older
|
||||
→ [Read the 1.2 Migration from NetAlertX `v25.5.24`](#12-migration-from-netalertx-v25524)
|
||||
|
||||
- You are running NetAlertX (by jokob-sk) (`v25.6.7` to `v25.10.1`)
|
||||
→ [Read the 1.3 Migration from NetAlertX `v25.10.1`](#13-migration-from-netalertx-v25101)
|
||||
|
||||
|
||||
> [!TIP]
|
||||
> If you have trouble accessing past backups, config or database files you can copy them into the newly mapped directories, for example by running this command in the container: `cp -r /app/config /home/pi/pialert/config/old_backup_files`. This should create a folder in the `config` directory called `old_backup_files` containing all the files in that location. Another approach is to map the old location and the new one at the same time to copy things over.
|
||||

### 1.0 Manual Migration

You can migrate data manually, for example by exporting and importing devices using the [CSV import](./DEVICES_BULK_EDITING.md) method.

### 1.1 Migration from PiAlert to NetAlertX `v25.5.24`

The application will automatically migrate the database, configuration, and all device information.
A banner message will appear at the top of the web UI reminding you to update your Docker mount points.

#### STEPS:

1. Stop the container
2. [Back up your setup](./BACKUPS.md)
3. Update the Docker file mount locations in your `docker-compose.yml` or docker run command (see **New Docker mount locations** below).
4. Rename the DB and conf files to `app.db` and `app.conf` and place them in the appropriate location (see the rename sketch below).
5. Start the container

> [!TIP]
> If you have trouble accessing past backups, config or database files, you can copy them into the newly mapped directories, for example by running this command in the container: `cp -r /data/config /home/pi/pialert/config/old_backup_files`. This should create a folder in the `config` directory called `old_backup_files` containing all the files in that location. Another approach is to map the old location and the new one at the same time to copy things over.

#### New Docker mount locations

The internal application path in the container has changed from `/home/pi/pialert` to `/app` (and to `/data` in later releases). Update your volume mounts as follows:

| Old mount point | New mount point |
|----------------------|---------------|
| `/home/pi/pialert/config` | `/data/config` |
| `/home/pi/pialert/db` | `/data/db` |

If you were mounting files directly, please note the file names have changed:

| Old file name | New file name |
|----------------------|---------------|
| `pialert.conf` | `app.conf` |
| `pialert.db` | `app.db` |

> [!NOTE]
> The application automatically creates symlinks from the old database and config locations to the new ones, so data loss should not occur. Read the [backup strategies](./BACKUPS.md) guide to backup your setup.
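
For reference, the rename from step 4 can be done on the host before starting the new container. A minimal sketch, assuming your config and database folders are mounted from `/local_data_dir` as in the examples below (adjust the paths to the host folders you actually mount):

```bash
# Placeholder paths – adjust them to your actual host folders
mv /local_data_dir/config/pialert.conf /local_data_dir/config/app.conf
mv /local_data_dir/db/pialert.db /local_data_dir/db/app.db
```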

#### Examples

Examples of docker files with the new mount points.

##### Example 1: Mapping folders

###### Old docker-compose.yml

```yaml
services:
  pialert:
    container_name: pialert
    # image: "ghcr.io/jokob-sk/netalertx-dev:latest"
    image: "jokobsk/pialert:latest"
    network_mode: "host"
    restart: unless-stopped
    volumes:
      - /local_data_dir/config:/home/pi/pialert/config
      - /local_data_dir/db:/home/pi/pialert/db
      # (optional) useful for debugging if you have issues setting up the container
      - /local_data_dir/logs:/home/pi/pialert/front/log
    environment:
      - TZ=Europe/Berlin
      - PORT=20211
```

###### New docker-compose.yml

```yaml
services:
  netalertx: # 🆕 This has changed
    container_name: netalertx # 🆕 This has changed
    image: "ghcr.io/jokob-sk/netalertx:25.5.24" # 🆕 This has changed
    network_mode: "host"
    restart: unless-stopped
    volumes:
      - /local_data_dir/config:/data/config # 🆕 This has changed
      - /local_data_dir/db:/data/db # 🆕 This has changed
      # (optional) useful for debugging if you have issues setting up the container
      - /local_data_dir/logs:/tmp/log # 🆕 This has changed
    environment:
      - TZ=Europe/Berlin
      - PORT=20211
```

##### Example 2: Mapping files

> [!NOTE]
> The recommendation is to map folders as in Example 1; map files directly only when needed.

###### Old docker-compose.yml

```yaml
services:
  pialert:
    container_name: pialert
    # image: "ghcr.io/jokob-sk/netalertx-dev:latest"
    image: "jokobsk/pialert:latest"
    network_mode: "host"
    restart: unless-stopped
    volumes:
      - /local_data_dir/config/pialert.conf:/home/pi/pialert/config/pialert.conf
      - /local_data_dir/db/pialert.db:/home/pi/pialert/db/pialert.db
      # (optional) useful for debugging if you have issues setting up the container
      - /local_data_dir/logs:/home/pi/pialert/front/log
    environment:
      - TZ=Europe/Berlin
      - PORT=20211
```

###### New docker-compose.yml

```yaml
services:
  netalertx: # 🆕 This has changed
    container_name: netalertx # 🆕 This has changed
    image: "ghcr.io/jokob-sk/netalertx:25.5.24" # 🆕 This has changed
    network_mode: "host"
    restart: unless-stopped
    volumes:
      - /local_data_dir/config/app.conf:/data/config/app.conf # 🆕 This has changed
      - /local_data_dir/db/app.db:/data/db/app.db # 🆕 This has changed
      # (optional) useful for debugging if you have issues setting up the container
      - /local_data_dir/logs:/tmp/log # 🆕 This has changed
    environment:
      - TZ=Europe/Berlin
      - PORT=20211
```

### 1.2 Migration from NetAlertX `v25.5.24`

Versions before `v25.10.1` require an intermediate migration through `v25.5.24` to ensure database compatibility. Skipping this step may cause compatibility issues due to database schema changes introduced after `v25.5.24`.

#### STEPS:

1. Stop the container
2. [Back up your setup](./BACKUPS.md)
3. Upgrade to `v25.5.24` by pinning the release version (see Examples below)
4. Start the container and verify everything works as expected.
5. Stop the container
6. Upgrade to `v25.10.1` by pinning the release version (see Examples below)
7. Start the container and verify everything works as expected.

#### Examples

Examples of docker files with the tagged version.

##### Example 1: Mapping folders

###### docker-compose.yml changes

```yaml
services:
  netalertx:
    container_name: netalertx
    image: "ghcr.io/jokob-sk/netalertx:25.5.24" # 🆕 This is important
    network_mode: "host"
    restart: unless-stopped
    volumes:
      - /local_data_dir/config:/data/config
      - /local_data_dir/db:/data/db
      # (optional) useful for debugging if you have issues setting up the container
      - /local_data_dir/logs:/tmp/log
    environment:
      - TZ=Europe/Berlin
      - PORT=20211
```

```yaml
services:
  netalertx:
    container_name: netalertx
    image: "ghcr.io/jokob-sk/netalertx:25.10.1" # 🆕 This is important
    network_mode: "host"
    restart: unless-stopped
    volumes:
      - /local_data_dir/config:/data/config
      - /local_data_dir/db:/data/db
      # (optional) useful for debugging if you have issues setting up the container
      - /local_data_dir/logs:/tmp/log
    environment:
      - TZ=Europe/Berlin
      - PORT=20211
```

### 1.3 Migration from NetAlertX `v25.10.1`

Starting from v25.10.1, the container uses a [more secure, read-only runtime environment](./SECURITY_FEATURES.md), which requires all writable paths (e.g., logs, API cache, temporary data) to be mounted as `tmpfs` or permanent writable volumes, with sufficient access [permissions](./FILE_PERMISSIONS.md). The data location has also changed from `/app/db` and `/app/config` to `/data/db` and `/data/config`. See detailed steps below.

#### STEPS:

1. Stop the container
2. [Back up your setup](./BACKUPS.md)
3. Upgrade to `v25.10.1` by pinning the release version (see the example below)

```yaml
services:
  netalertx:
    container_name: netalertx
    image: "ghcr.io/jokob-sk/netalertx:25.10.1" # 🆕 This is important
    network_mode: "host"
    restart: unless-stopped
    volumes:
      - /local_data_dir/config:/app/config
      - /local_data_dir/db:/app/db
      # (optional) useful for debugging if you have issues setting up the container
      - /local_data_dir/logs:/tmp/log
    environment:
      - TZ=Europe/Berlin
      - PORT=20211
```

4. Start the container and verify everything works as expected.
5. Stop the container.
6. Perform a one-off migration to the latest `netalertx` image and `20211` user:

> [!NOTE]
> The example below assumes your `/config` and `/db` folders are stored in `local_data_dir`.
> Replace this path with your actual configuration directory. `netalertx` is the container name, which might differ from your setup.

```sh
docker run -it --rm --name netalertx --user "0" \
  -v /local_data_dir/config:/data/config \
  -v /local_data_dir/db:/data/db \
  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \
  ghcr.io/jokob-sk/netalertx:latest
```

...or alternatively execute:

```bash
sudo chown -R 20211:20211 /local_data_dir
sudo chmod -R a+rwx /local_data_dir
```

7. Stop the container
8. Update the `docker-compose.yml` as per the example below.

```yaml
services:
  netalertx:
    container_name: netalertx
    image: "ghcr.io/jokob-sk/netalertx" # 🆕 This has changed
    network_mode: "host"
    cap_drop: # 🆕 New line
      - ALL # 🆕 New line
    cap_add: # 🆕 New line
      - NET_RAW # 🆕 New line
      - NET_ADMIN # 🆕 New line
      - NET_BIND_SERVICE # 🆕 New line
    restart: unless-stopped
    volumes:
      - /local_data_dir:/data # 🆕 This folder contains your /db and /config directories and the parent changed from /app to /data
      # Ensuring the timezone is the same as on the server - make sure also the TIMEZONE setting is configured
      - /etc/localtime:/etc/localtime:ro # 🆕 New line
    environment:
      - PORT=20211
    # 🆕 New "tmpfs" section START 🔽
    tmpfs:
      # All writable runtime state resides under /tmp; comment out to persist logs between restarts
      - "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
    # 🆕 New "tmpfs" section END 🔼
```

9. Start the container and verify everything works as expected (see the quick check below).
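
A quick way to verify the container after the final restart is to watch its logs. A minimal sketch, assuming the container name `netalertx` from the compose file above:

```bash
# Recreate the container with the updated compose file and follow its logs
docker compose up -d
docker logs -f netalertx
```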

## How to Set Up Your Network Page

The **Network** page lets you map how devices connect — visually and logically.
It’s especially useful for planning infrastructure, assigning parent-child relationships, and spotting gaps.

---

Start by creating a root device with the MAC address `Internet`, if the application didn’t create one already.
This special MAC address (`Internet`) is required for the root network node — no other value is currently supported.
Set its **Type** to a valid network type — such as `Router` or `Gateway`.

> [!TIP]
> If you don’t have one, use the [Create new device](./DEVICE_MANAGEMENT.md#dummy-devices) button on the **Devices** page to add a root device.

---

## ⚡ Quick Setup

1. Open the device you want to use as a network node (e.g. a Switch).
2. Set its **Type** to one of the following:
   `AP`, `Firewall`, `Gateway`, `PLC`, `Powerline`, `Router`, `Switch`, `USB LAN Adapter`, `USB WIFI Adapter`, `WLAN`
   *(Or add custom types under **Settings → General → `NETWORK_DEVICE_TYPES`**.)*
3. Save the device.
4. Go to the **Network** page — supported device types will appear as tabs.
5. Use the **Assign** button to connect unassigned devices to a network node.
6. If the **Port** is `0` or empty, a Wi-Fi icon is shown. Otherwise, an Ethernet icon appears.

> [!NOTE]
> Use [bulk editing](./DEVICES_BULK_EDITING.md) with _CSV Export_ to fix `Internet` root assignments or update many devices at once.

---

### 1. Set Device Type and Parent

- Go to the **Devices** page
- Open the device detail view for `raspberrypi`
- In the **Type** dropdown, select `Switch`
- Optionally assign a **Parent Node** (where this device connects to) and the **Relationship type** of the connection.
  The `nic` relationship type can affect parent notifications — see the setting description and [Notifications documentation](./NOTIFICATIONS.md) for more.
- A device’s parent MAC will be overwritten by plugins if its current value is any of the following: "null", "(unknown)", "(Unknown)".
- If you want plugins to be able to overwrite the parent value (for example, when mixing plugins that do not provide parent MACs, like `ARPSCAN`, with those that do, like `UNIFIAPI`), you must set the setting `NEWDEV_devParentMAC` to `None`.

> [!NOTE]
> Only certain device types can act as network nodes:
> `AP`, `Firewall`, `Gateway`, `Hypervisor`, `PLC`, `Powerline`, `Router`, `Switch`, `USB LAN Adapter`, `USB WIFI Adapter`, `WLAN`
> You can add custom types via the `NETWORK_DEVICE_TYPES` setting.

- Click **Save**

### 3. Assign Connected Devices

- Use the **Assign** button to link other devices (e.g. PCs) to `raspberrypi`.
- After assigning, connected devices will appear beneath the `raspberrypi` switch node.

> Hovering over devices in the tree reveals connection details and tooltips for quick inspection.

> [!NOTE]
> Selecting certain relationship types hides the device in the default device views.
> You can change this behavior by adjusting the `UI_hide_rel_types` setting, which by default is set to `["nic","virtual"]`.
> This means devices with `devParentRelType` set to `nic` or `virtual` will not be shown.
> All devices, regardless of relationship type, are always accessible in the **All devices** view.
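
If you want different relationship types hidden (or none at all), the setting can be changed in the UI or directly in `app.conf`. A minimal sketch, assuming the list syntax used for other list settings in `app.conf`:

```
UI_hide_rel_types=['nic','virtual']
```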

---

## ✅ Summary

In Notification Processing settings, you can specify blanket rules.

1. Notify on (`NTFPRCS_INCLUDED_SECTIONS`) allows you to specify which events trigger notifications. Usual setups will have `new_devices`, `down_devices`, and possibly `down_reconnected` set. Including `plugin` (dependent on the Plugin `<plugin>_WATCH` and `<plugin>_REPORT_ON` settings) and `events` (dependent on the on-device **Alert Events** setting) might be too noisy for most setups. More info on which events these selections include is in the [NTFPRCS plugin](https://github.com/jokob-sk/NetAlertX/blob/main/front/plugins/notification_processing/README.md) docs.
2. Alert down after (`NTFPRCS_alert_down_time`) is useful if you want to wait for some time before the system sends out a down notification for a device. This is related to the on-device **Alert down** setting, and only devices with this checked will trigger a down notification.
3. A filter that lets you set device-specific exceptions for New devices being added to the app.
4. A filter that lets you set device-specific exceptions for generated Events.

## Ignoring notifications 🔕

You can filter out unwanted notifications globally. This could be because of a misbehaving device that flips between IP addresses (GoogleNest/GoogleHub; see also the [ARPSCAN docs and the `--exclude-broadcast` flag](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/arp_scan#ip-flipping-on-google-nest-devices)), or because you want to ignore new device notifications matching a certain pattern.

1. Events Filter (`NTFPRCS_event_condition`) - Filter out Events from notifications.
2. New Devices Filter (`NTFPRCS_new_dev_condition`) - Filter out New Devices from notifications, but log and keep a new device in the system.

## Ignoring devices 💻

You can completely ignore detected devices globally. This could be because your instance detects docker containers, you want to ignore devices from a specific manufacturer via MAC rules, or you want to ignore devices on a specific IP range.

1. Ignored MACs (`NEWDEV_ignored_MACs`) - List of MACs to ignore.
2. Ignored IPs (`NEWDEV_ignored_IPs`) - List of IPs to ignore.
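
As an illustration only, both ignore lists are plain lists of values. In `app.conf` they might look like the sketch below (the MAC and IP values are placeholders, and the quoting style should match your existing settings):

```
NEWDEV_ignored_MACs=['00:11:22:33:44:55','00:11:22:33:44:66']
NEWDEV_ignored_IPs=['172.17.0.2','172.17.0.3']
```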

# Performance Optimization Guide

There are several ways to improve the application's performance. The application has been tested on a range of devices, from Raspberry Pi 4 units to NAS and NUC systems. If you are running the application on a lower-end device, fine-tuning the performance settings can significantly improve the user experience.

## Common Causes of Slowness

Performance issues are usually caused by:

* **Incorrect settings** – The app may restart unexpectedly. Check `app.log` under **Maintenance → Logs** for details.
* **Too many background processes** – Disable unnecessary scanners.
* **Long scan durations** – Limit the number of scanned devices.
* **Excessive disk operations** – Optimize scanning and logging settings.
* **Maintenance plugin failures** – If cleanup tasks fail, performance can degrade over time.

The application performs regular maintenance and database cleanup. If these tasks are failing, you will see slowdowns.

### Database and Log File Size

A large database or oversized log files can impact performance. You can check database and table sizes on the **Maintenance** page.

> [!NOTE]
>
> * For **~100 devices**, the database should be around **50 MB**.
> * No table should exceed **10,000 rows** in a healthy system.
> * Actual values vary based on network activity and plugin settings.
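
You can also check the database size from the host. A quick sketch, assuming the database folder is mounted from `/local_data_dir/db` as in the compose examples used elsewhere in these docs:

```bash
# Show the size of the main database file on the host
ls -lh /local_data_dir/db/app.db
```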

---

## Maintenance Plugins

Two plugins help maintain the system’s performance:

### **1. Database Cleanup (DBCLNP)**

* Handles database maintenance and cleanup.
* See the [DB Cleanup Plugin Docs](/front/plugins/db_cleanup/README.md).
* Ensure it’s not failing by checking logs.
* Adjust the schedule (`DBCLNP_RUN_SCHD`) and timeout (`DBCLNP_RUN_TIMEOUT`) if necessary.

### **2. Maintenance (MAINT)**

* Cleans logs and performs general maintenance tasks.
* See the [Maintenance Plugin Docs](/front/plugins/maintenance/README.md).
* Verify proper operation via logs.
* Adjust the schedule (`MAINT_RUN_SCHD`) and timeout (`MAINT_RUN_TIMEOUT`) if needed.

---

Frequent scans increase resource usage, network traffic, and database read/write cycles.

### **Optimizations**

* **Increase scan intervals** (`<PLUGIN>_RUN_SCHD`) on busy networks or low-end hardware.
* **Increase timeouts** (`<PLUGIN>_RUN_TIMEOUT`) to avoid plugin failures.
* **Reduce subnet size** – e.g., use `/24` instead of `/16` to reduce scan load.

Some plugins also include options to limit which devices are scanned. If certain plugins consistently run long, consider narrowing their scope.

For example, the **ICMP plugin** allows scanning only IPs that match a specific regular expression.
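
As an illustration only (check the ICMP plugin settings for the exact option name), a pattern that limits pings to a single /24 range could look like this:

```
^192\.168\.1\.\d{1,3}$
```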

---

## Storing Temporary Files in Memory

On devices with slower I/O, you can improve performance by storing temporary files (and optionally the database) in memory using `tmpfs`.

> [!WARNING]
> Storing the **database** in `tmpfs` is generally discouraged. Use this only if device data and historical records are not required to persist. If needed, you can pair this setup with the `SYNC` plugin to store important persistent data on another node. See the [Plugins docs](./PLUGINS.md) for details.

Using `tmpfs` reduces disk writes and speeds up I/O, but **all data stored in memory will be lost on restart**.

Below is an optimized `docker-compose.yml` snippet using non-persistent logs, API data, and DB:

```yaml
services:
  netalertx:
    container_name: netalertx
    # Use this line for the stable release
    image: "ghcr.io/jokob-sk/netalertx:latest"
    # Or use this line for the latest development build
    # image: "ghcr.io/jokob-sk/netalertx-dev:latest"
    network_mode: "host"
    restart: unless-stopped

    cap_drop: # Drop all capabilities for enhanced security
      - ALL
    cap_add: # Re-add necessary capabilities
      - NET_RAW
      - NET_ADMIN
      - NET_BIND_SERVICE

    volumes:
      - ${APP_FOLDER}/netalertx/config:/data/config
      - /etc/localtime:/etc/localtime:ro

    tmpfs:
      # All writable runtime state resides under /tmp; comment out to persist logs between restarts
      - "/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime"
      - "/data/db:uid=20211,gid=20211,mode=1700" # ⚠ You will lose historical data on restart

    environment:
      - PORT=${PORT}
      - APP_CONF_OVERRIDE=${APP_CONF_OVERRIDE}
```
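
The snippet above references compose variables. If you keep them in an `.env` file next to your `docker-compose.yml`, a minimal sketch could look like the following (values are placeholders; set `APP_CONF_OVERRIDE` only if your setup actually uses it):

```
APP_FOLDER=/opt
PORT=20211
APP_CONF_OVERRIDE=
```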

# Integration with PiHole

NetAlertX comes with 2 plugins suitable for integrating with your existing PiHole instance. One plugin uses a direct SQLite DB connection, the other leverages the DHCP.leases file generated by PiHole. You can combine both approaches and also supplement them with other [plugins](/docs/PLUGINS.md).

## Approach 1: `DHCPLSS` Plugin - Import devices from the PiHole DHCP leases file

| `LUCIRPC` | [luci_import](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/luci_import/) | 🔍 | Import connected devices from OpenWRT | | |
| `MAINT` | [maintenance](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/maintenance/) | ⚙ | Maintenance of logs, etc. | | |
| `MQTT` | [_publisher_mqtt](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/_publisher_mqtt/) | ▶️ | MQTT for syncing to Home Assistant | | |
| `MTSCAN` | [mikrotik_scan](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/mikrotik_scan/) | 🔍 | Mikrotik device import & sync | | |
| `NBTSCAN` | [nbtscan_scan](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/nbtscan_scan/) | 🆎 | Nbtscan (NetBIOS-based) name resolution | | |
| `NEWDEV` | [newdev_template](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/newdev_template/) | ⚙ | New device template | | Yes |
| `NMAP` | [nmap_scan](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/nmap_scan/) | ♻ | Nmap port scanning & discovery | | |
| `OMDSDN` | [omada_sdn_imp](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/omada_sdn_imp/) | 📥/🆎 ❌ | UNMAINTAINED use `OMDSDNOPENAPI` | 🖧 🔄 | |
| `OMDSDNOPENAPI` | [omada_sdn_openapi](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/omada_sdn_openapi/) | 📥/🆎 | OMADA TP-Link import via OpenAPI | 🖧 | |
| `PIHOLE` | [pihole_scan](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/pihole_scan/) | 🔍/🆎/📥 | Pi-hole device import & sync | | |
| `PIHOLEAPI` | [pihole_api_scan](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/pihole_api_scan/) | 🔍/🆎/📥 | Pi-hole device import & sync via API v6+ | | |
| `PUSHSAFER` | [_publisher_pushsafer](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/_publisher_pushsafer/) | ▶️ | Pushsafer notifications | | |
| `PUSHOVER` | [_publisher_pushover](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/_publisher_pushover/) | ▶️ | Pushover notifications | | |
| `SETPWD` | [set_password](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/set_password/) | ⚙ | Set password | | Yes |
| `SMTP` | [_publisher_email](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/_publisher_email/) | ▶️ | Email notifications | | |
| `SNMPDSC` | [snmp_discovery](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/snmp_discovery/) | 🔍/📥 | SNMP device import & sync | | |
| `SYNC` | [sync](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/sync/) | 🔍/⚙/📥 | Sync & import from NetAlertX instances | 🖧 🔄 | Yes |
| `TELEGRAM` | [_publisher_telegram](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/_publisher_telegram/) | ▶️ | Telegram notifications | | |
| `UI` | [ui_settings](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/ui_settings/) | ♻ | UI specific settings | | Yes |
| `UNFIMP` | [unifi_import](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/unifi_import/) | 🔍/📥/🆎 | UniFi device import & sync | 🖧 | |
| `UNIFIAPI` | [unifi_api_import](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/unifi_api_import/) | 🔍/📥/🆎 | UniFi device import (SM API, multi-site) | | |
| `VNDRPDT` | [vendor_update](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/vendor_update/) | ⚙ | Vendor database update | | |
| `WEBHOOK` | [_publisher_webhook](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/_publisher_webhook/) | ▶️ | Webhook notifications | | |
| `WEBMON` | [website_monitor](https://github.com/jokob-sk/NetAlertX/tree/main/front/plugins/website_monitor/) | ♻ | Website down monitoring | | |

> (Currently, update/overwriting of existing objects is only supported for devices via the `CurrentScan` table.)

> [!NOTE]
> For a high-level overview of how the `config.json` is used and its lifecycle, check the [config.json Lifecycle in NetAlertX Guide](PLUGINS_DEV_CONFIG.md).

### 🎥 Watch the video:

> [!TIP]

New file: `docs/PLUGINS_DEV_CONFIG.md` (192 lines, executable)

# Plugins Implementation Details

Plugins provide data to the NetAlertX core, which processes it to detect changes, discover new devices, raise alerts, and apply heuristics.

---

## Overview: Plugin Data Flow

1. Each plugin runs on a defined schedule.
2. Aligning all plugin schedules is recommended so they execute in the same loop.
3. During execution, all plugins write their collected data into the **`CurrentScan`** table.
4. After all plugins complete, the `CurrentScan` table is evaluated to detect **new devices**, **changes**, and **triggers**.

Although plugins run independently, they contribute to the shared `CurrentScan` table.
To inspect its contents, set `LOG_LEVEL=trace` and check for the log section:

```
================ CurrentScan table content ================
```
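
If you manage settings through `app.conf` rather than the UI, the log level line might look like this (a sketch; the quoting should follow the style of your existing `app.conf` entries):

```
LOG_LEVEL='trace'
```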

---

## `config.json` Lifecycle

This section outlines how each plugin’s `config.json` manifest is read, validated, and used by the core and plugins.
It also describes plugin output expectations and the main plugin categories.

> [!TIP]
> For detailed schema and examples, see the [Plugin Development Guide](PLUGINS_DEV.md).

---

### 1. Loading

* On startup, the core loads `config.json` for each plugin.
* The file acts as a **plugin manifest**, defining metadata, runtime configuration, and database mappings.

---

### 2. Validation

* The core validates required keys (for example, `RUN`).
* Missing or invalid entries may be replaced with defaults or cause the plugin to be disabled.

---

### 3. Preparation

* Plugin parameters (paths, commands, and options) are prepared for execution.
* Database mappings (`mapped_to_table`, `database_column_definitions`) are parsed to define how data integrates with the main app.

---

### 4. Execution

* Plugins may run:

  * On a fixed schedule.
  * Once at startup.
  * After a notification or other trigger.
* The scheduler executes plugins according to their `interval`.

---

### 5. Parsing

* Plugin output must be **pipe-delimited (`|`)**.
* The core parses each output line following the **Plugin Interface Contract**, splitting and mapping fields accordingly.

---

### 6. Mapping

* Parsed fields are inserted into the plugin’s `Plugins_*` table.
* Data can be mapped into other tables (e.g., `Devices`, `CurrentScan`) as defined by:

  * `database_column_definitions`
  * `mapped_to_table`

**Example:** `Object_PrimaryID → devMAC`
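
To make the mapping step concrete, a stripped-down manifest fragment might look like the sketch below. Only `mapped_to_table` and `database_column_definitions` are taken from this guide; the inner key names (`column`, `mapped_to_column`) are assumptions for illustration, and the authoritative schema lives in [PLUGINS_DEV.md](PLUGINS_DEV.md).

```json
{
  "mapped_to_table": "CurrentScan",
  "database_column_definitions": [
    {
      "column": "Object_PrimaryID",
      "mapped_to_column": "devMAC"
    }
  ]
}
```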

---

### 6a. Plugin Output Contract

All plugins must follow the **Plugin Interface Contract** defined in `PLUGINS_DEV.md`.
Output values are pipe-delimited in a fixed order.

#### Identifiers

* `Object_PrimaryID` and `Object_SecondaryID` uniquely identify records (for example, `MAC|IP`).

#### Watched Values (`Watched_Value1–4`)

* Used by the core to detect changes between runs.
* Changes in these fields can trigger notifications.

#### Extra Field (`Extra`)

* Optional additional value.
* Stored in the database but not used for alerts.

#### Helper Values (`Helper_Value1–3`)

* Optional auxiliary data (for display or plugin logic).
* Stored but not alert-triggering.

#### Mapping

* While the output format is flexible, the plugin’s manifest determines the destination and type of each field.
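
Purely as an illustration of the pipe-delimited shape (the exact column order and required fields are defined by the Plugin Interface Contract in `PLUGINS_DEV.md`, not by this sketch), one record emitted by a hypothetical discovery plugin could look like this, with the two identifiers first and unused watched/extra fields left empty:

```
00:11:22:33:44:55|192.168.1.10|raspberrypi|Raspberry Pi Foundation|||
```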

---

### 7. Persistence

* Parsed data is **upserted** into the database.
* Conflicts are resolved using the combined key: `Object_PrimaryID + Object_SecondaryID`.

---

## Plugin Categories

Plugins fall into several functional categories depending on their purpose and expected outputs.

### 1. Device Discovery Plugins

* **Inputs:** None, subnet, or discovery API.
* **Outputs:** `MAC` and `IP` for new or updated device records in `Devices`.
* **Mapping:** Required – usually into `CurrentScan`.
* **Examples:** `ARPSCAN`, `NMAPDEV`.

---

### 2. Device Data Enrichment Plugins

* **Inputs:** Device identifiers (`MAC`, `IP`).
* **Outputs:** Additional metadata (for example, open ports or sensors).
* **Mapping:** Controlled via manifest definitions.
* **Examples:** `NMAP`, `MQTT`.

---

### 3. Name Resolver Plugins

* **Inputs:** Device identifiers (`MAC`, `IP`, `hostname`).
* **Outputs:** Updated `devName` and `devFQDN`.
* **Mapping:** Typically none.
* **Note:** Adding new resolvers currently requires a core change.
* **Examples:** `NBTSCAN`, `NSLOOKUP`.

---

### 4. Generic Plugins

* **Inputs:** Custom, based on the plugin logic or script.
* **Outputs:** Data displayed under **Integrations → Plugins** only.
* **Mapping:** Not required.
* **Examples:** `INTRSPD`, custom monitoring scripts.

---

### 5. Configuration-Only Plugins

* **Inputs/Outputs:** None at runtime.
* **Purpose:** Used for configuration or maintenance tasks.
* **Examples:** `MAINT`, `CSVBCKP`.

---

## Post-Processing

After persistence:

* The core generates notifications for any watched value changes.
* The UI updates with new or modified data.
* Plugins with UI-enabled data display under **Integrations → Plugins**.

---

## Summary

The lifecycle of a plugin configuration is:

**Load → Validate → Prepare → Execute → Parse → Map → Persist → Post-process**

Each plugin must:

* Follow the **output contract**.
* Declare its type and expected output structure.
* Define mappings and watched values clearly in `config.json`.

#### 🐳 Docker (Fully supported)

- The main installation method is as a [docker container - follow these instructions here](./DOCKER_INSTALLATION.md).

#### 💻 Bare-metal / On-server (Experimental/community supported 🧪)

- [(Experimental 🧪) On-hardware instructions](./HW_INSTALL.md)
- Alternative bare-metal install forks:
  - [leiweibau's fork](https://github.com/leiweibau/Pi.Alert/) (maintained)
  - [pucherot's original code](https://github.com/pucherot/Pi.Alert/) (un-maintained)

#### ♻ Misc

- [Version history (legacy)](./VERSIONS_HISTORY.md)
- [Reverse proxy (Nginx, Apache, SWAG)](./REVERSE_PROXY.md)
- [Installing Updates](./UPDATES.md)
- [Setting up Authelia](./AUTHELIA.md) (DRAFT)

- [Frontend development tips](./FRONTEND_DEVELOPMENT.md)
- [Webhook secrets](./WEBHOOK_SECRET.md)

Feel free to suggest or submit new docs via a PR.

## 👨‍💻 Development priorities

Priorities from highest to lowest:

* 🔼 Fixing core functionality bugs not solvable with workarounds
* 🔵 New core functionality unlocking other opportunities (e.g.: plugins)
* 🔵 Refactoring enabling faster implementation of future functionality
* 🔽 (low) UI functionality & improvements (PRs welcome 😉)

Design philosophy: Focus on core functionality and leverage existing apps and tools to make NetAlertX integrate into other workflows.

Examples:

1. Supporting apprise makes more sense than implementing multiple individual notification gateways.
2. Implementing regular expression support across settings for validation makes more sense than validating one setting with a specific expression.

UI-specific requests are a low priority as the framework picked by the original developer is not very extensible (and afaik doesn't support components) and has limited mobile support. Also, I argue the value proposition is smaller than working on something else.

Feel free to submit PRs if interested. Try to **keep the PRs small/on-topic** so they are easier to review and approve.

That being said, I'd reconsider if more people and/or recurring sponsors file a request 😉.

If you submit a PR, please:

1. Check that your changes are backward compatible with existing installations and with a blank setup.
2. Existing features should always be preserved.
3. Keep the PR small and on-topic, and don't change code that is not necessary for the PR to work.
4. New feature code should ideally be re-usable for different purposes, not for a very narrow use case.
5. New functionality should ideally be implemented via the Plugins system, if possible.

Some additional context:

* Permanent settings/config is stored in the `app.conf` file.
* Currently temporary (session?) settings are stored in the `Parameters` DB table as key-value pairs. This table is wiped during a container rebuild/restart and its values are re-initialized from cookies/session data from the browser.

## 🐛 Submitting an issue or bug

Before submitting a new issue please spend a couple of minutes on research:

* Check [🛑 Common issues](./DEBUG_TIPS.md#common-issues)
* Check [💡 Closed issues](https://github.com/jokob-sk/NetAlertX/issues?q=is%3Aissue+is%3Aclosed) if a similar issue was solved in the past.
* When submitting an issue ❗[enable debug](./DEBUG_TIPS.md)❗

If you are running a DNS server, such as **AdGuard**, set up **Private reverse DNS servers** for better name resolution on your network. Enabling this setting allows NetAlertX to execute `dig` and `nslookup` commands to automatically resolve device names based on their IP addresses.

> [!TIP]
> Before proceeding, ensure that [name resolution plugins](./NAME_RESOLUTION.md) are enabled.
> You can customize how names are cleaned using the `NEWDEV_NAME_CLEANUP_REGEX` setting.
> To auto-update Fully Qualified Domain Names (FQDN), enable the `REFRESH_FQDN` setting.

> Example 1: Reverse DNS `disabled`
>
> ```
> jokob@Synology-NAS:/$ nslookup 192.168.1.58
> ** server can't find 58.1.168.192.in-addr.arpa: NXDOMAIN
> ```

> Example 2: Reverse DNS `enabled`
>
> ```
> jokob@Synology-NAS:/$ nslookup 192.168.1.58
> 45.1.168.192.in-addr.arpa name = jokob-NUC.localdomain.
> ```

### Specifying the DNS in the container

You can specify the DNS server in your `docker-compose.yml` to improve name resolution on your network.

```yaml
services:
  netalertx:
    container_name: netalertx
    image: "ghcr.io/jokob-sk/netalertx:latest"
    restart: unless-stopped
    volumes:
      - /home/netalertx/config:/app/config
      - /home/netalertx/db:/app/db
      - /home/netalertx/log:/app/log
    environment:
      - TZ=Europe/Berlin
      - PORT=20211
    network_mode: host
    ...
    dns: # specifying the DNS servers used for the container
      - 10.8.0.1
      - 10.8.0.17
```

### Using a custom resolv.conf file

You can configure a custom **/etc/resolv.conf** file in **docker-compose.yml** and set the nameserver to your LAN DNS server (e.g.: Pi-Hole). See the relevant [resolv.conf man](https://www.man7.org/linux/man-pages/man5/resolv.conf.5.html) entry for details.

#### docker-compose.yml:

```yaml
services:
  netalertx:
    container_name: netalertx
    image: "ghcr.io/jokob-sk/netalertx:latest"
    restart: unless-stopped
    volumes:
      ...
      - /local_data_dir/config/resolv.conf:/etc/resolv.conf # ⚠ Mapping the /resolv.conf file for better name resolution
      ...
```

#### /local_data_dir/config/resolv.conf:

The most important entry below is the `nameserver` line (you can add multiple):
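
The file contents are not included in this excerpt; a minimal sketch, assuming your LAN DNS server (e.g. a Pi-hole) listens on `192.168.1.2`, could be:

```
nameserver 192.168.1.2
nameserver 1.1.1.1
```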

> Submitted by amazing [cvc90](https://github.com/cvc90) 🙏

> [!NOTE]
> There are various NGINX config files for NetAlertX, some for the bare-metal install, currently Debian 12 and Ubuntu 24 (`netalertx.conf`), and one for the docker container (`netalertx.template.conf`).
>
> The first one you can find in the respective bare metal installer folder `/app/install/\<system\>/netalertx.conf`.
> The docker one can be found in the [install](https://github.com/jokob-sk/NetAlertX/tree/main/install) folder. Map, or use, the one appropriate for your setup.

<br/>

## NGINX HTTP Configuration (Direct Path)

2. In this file, paste the following code:

```
server {
    listen 80;
    server_name netalertx;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://localhost:20211/;
    }
}
```

3. Check your config with `nginx -t`. If there are any issues, it will tell you.

4. Activate the new website by running the following command:

`nginx -s reload` or `systemctl restart nginx`

5. Once NGINX restarts, you should be able to access the proxy website at http://netalertx/

<br/>

## NGINX HTTP Configuration (Sub Path)

2. In this file, paste the following code:

```
server {
    listen 80;
    server_name netalertx;

    location ^~ /netalertx/ {
        proxy_set_header Host $host;
        proxy_pass http://localhost:20211/;
        proxy_redirect ~^/(.*)$ /netalertx/$1;
        rewrite ^/netalertx/?(.*)$ /$1 break;
    }
}
```

3. Check your config with `nginx -t`. If there are any issues, it will tell you.

4. Activate the new website by running the following command:

`nginx -s reload` or `systemctl restart nginx`

5. Once NGINX restarts, you should be able to access the proxy website at http://netalertx/netalertx/

<br/>

## NGINX HTTP Configuration (Sub Path) with module ngx_http_sub_module

2. In this file, paste the following code:

```
server {
    listen 80;
    server_name netalertx;

    location ^~ /netalertx/ {
        proxy_set_header Host $host;
        proxy_pass http://localhost:20211/;
        proxy_redirect ~^/(.*)$ /netalertx/$1;
        rewrite ^/netalertx/?(.*)$ /$1 break;
        sub_filter_once off;
        sub_filter '(?>$host)/js' '/netalertx/js';
        sub_filter '/img' '/netalertx/img';
        sub_filter '/lib' '/netalertx/lib';
        sub_filter '/php' '/netalertx/php';
    }
}
```

3. Check your config with `nginx -t`. If there are any issues, it will tell you.

4. Activate the new website by running the following command:

`nginx -s reload` or `systemctl restart nginx`

5. Once NGINX restarts, you should be able to access the proxy website at http://netalertx/netalertx/

<br/>

**NGINX HTTPS Configuration (Direct Path)**

2. In this file, paste the following code:

```
server {
    listen 443 ssl;
    server_name netalertx;

    ssl_certificate     /etc/ssl/certs/netalertx.pem;
    ssl_certificate_key /etc/ssl/private/netalertx.key;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://localhost:20211/;
    }
}
```

3. Check your config with `nginx -t`. If there are any issues, it will tell you.

4. Activate the new website by running the following command:

`nginx -s reload` or `systemctl restart nginx`

5. Once NGINX restarts, you should be able to access the proxy website at https://netalertx/

<br/>
## NGINX HTTPS Configuration (Sub Path)

@@ -128,28 +140,30 @@

2. In this file, paste the following code:

```
server {
    listen 443 ssl;
    server_name netalertx;

    ssl_certificate /etc/ssl/certs/netalertx.pem;
    ssl_certificate_key /etc/ssl/private/netalertx.key;

    location ^~ /netalertx/ {
        proxy_set_header Host $host;
        proxy_pass http://localhost:20211/;
        proxy_redirect ~^/(.*)$ /netalertx/$1;
        rewrite ^/netalertx/?(.*)$ /$1 break;
    }
}
```

3. Check your config with `nginx -t`. If there are any issues, it will tell you.

4. Activate the new website by running the following command:

   `nginx -s reload` or `systemctl restart nginx`

5. Once NGINX restarts, you should be able to access the proxy website at https://netalertx/netalertx/

<br/>

## NGINX HTTPS Configuration (Sub Path) with module ngx_http_sub_module

@@ -158,15 +172,15 @@

2. In this file, paste the following code:

```
server {
    listen 443 ssl;
    server_name netalertx;

    ssl_certificate /etc/ssl/certs/netalertx.pem;
    ssl_certificate_key /etc/ssl/private/netalertx.key;

    location ^~ /netalertx/ {
        proxy_set_header Host $host;
        proxy_pass http://localhost:20211/;
        proxy_redirect ~^/(.*)$ /netalertx/$1;
        rewrite ^/netalertx/?(.*)$ /$1 break;
        sub_filter_once off;
@@ -176,18 +190,20 @@
        sub_filter '(?>$host)/js' '/netalertx/js';
        sub_filter '/img' '/netalertx/img';
        sub_filter '/lib' '/netalertx/lib';
        sub_filter '/php' '/netalertx/php';
    }
}
```

3. Check your config with `nginx -t`. If there are any issues, it will tell you.

4. Activate the new website by running the following command:

   `nginx -s reload` or `systemctl restart nginx`

5. Once NGINX restarts, you should be able to access the proxy website at https://netalertx/netalertx/

<br/>

## Apache HTTP Configuration (Direct Path)

@@ -202,15 +218,17 @@
    ProxyPass / http://localhost:20211/
    ProxyPassReverse / http://localhost:20211/
</VirtualHost>
```

3. Check your config with `httpd -t` (or `apache2ctl -t` on Debian/Ubuntu). If there are any issues, it will tell you.

4. Activate the new website by running the following command:

   `a2ensite netalertx` or `service apache2 reload`

5. Once Apache restarts, you should be able to access the proxy website at http://netalertx/

<br/>

## Apache HTTP Configuration (Sub Path)

@@ -227,15 +245,17 @@
    ProxyPassReverse / http://localhost:20211/
    }
</VirtualHost>
```

3. Check your config with `httpd -t` (or `apache2ctl -t` on Debian/Ubuntu). If there are any issues, it will tell you.

4. Activate the new website by running the following command:

   `a2ensite netalertx` or `service apache2 reload`

5. Once Apache restarts, you should be able to access the proxy website at http://netalertx/netalertx/

<br/>

## Apache HTTPS Configuration (Direct Path)

@@ -253,26 +273,28 @@
    ProxyPass / http://localhost:20211/
    ProxyPassReverse / http://localhost:20211/
</VirtualHost>
```

3. Check your config with `httpd -t` (or `apache2ctl -t` on Debian/Ubuntu). If there are any issues, it will tell you.

4. Activate the new website by running the following command:

   `a2ensite netalertx` or `service apache2 reload`

5. Once Apache restarts, you should be able to access the proxy website at https://netalertx/

<br/>

## Apache HTTPS Configuration (Sub Path)

1. On your Apache server, create a new file called `/etc/apache2/sites-available/netalertx.conf`.

2. In this file, paste the following code:

```
<VirtualHost *:443>
    ServerName netalertx
    SSLEngine On
    SSLCertificateFile /etc/ssl/certs/netalertx.pem
    SSLCertificateKeyFile /etc/ssl/private/netalertx.key
    <Location "/netalertx/">
@@ -281,13 +303,17 @@
    ProxyPassReverse / http://localhost:20211/
    </Location>
</VirtualHost>
```

3. Check your config with `httpd -t` (or `apache2ctl -t` on Debian/Ubuntu). If there are any issues, it will tell you.

4. Activate the new website by running the following command:

   `a2ensite netalertx` or `service apache2 reload`

5. Once Apache restarts, you should be able to access the proxy website at https://netalertx/netalertx/

<br/>

## Reverse proxy example using LinuxServer's SWAG container

@@ -349,12 +375,13 @@ location ^~ /netalertx/ {
}
```

<br/>

## Traefik

> Submitted by [Isegrimm](https://github.com/Isegrimm) 🙏 (based on this [discussion](https://github.com/jokob-sk/NetAlertX/discussions/449#discussioncomment-7281442))

Assuming the user already has a working Traefik setup, this is what's needed to make NetAlertX work at a URL like www.domain.com/netalertx/.

Note: Everything in these configs assumes '**www.domain.com**' as your domain name and '**section31**' as an arbitrary name for your certificate setup. You will have to substitute these with your own.

@@ -469,15 +496,9 @@ server {

Mapping the updated file (on the local filesystem at `/appl/docker/netalertx/default`) into the docker container:

```bash
docker run -d --rm --network=host \
    --name=netalertx \
    -v /appl/docker/netalertx/config:/app/config \
    -v /appl/docker/netalertx/db:/app/db \
    -v /appl/docker/netalertx/default:/etc/nginx/sites-available/default \
    -e TZ=Europe/Amsterdam \
    -e PORT=20211 \
    ghcr.io/jokob-sk/netalertx:latest
```

Or, when using docker-compose, map the file as an additional volume:

```yaml
...
volumes:
  - /appl/docker/netalertx/default:/etc/nginx/sites-available/default
...
```

docs/SECURITY_FEATURES.md (new file)
@@ -0,0 +1,85 @@

# NetAlertX Security: A Layered Defense

Your network security monitor has the "keys to the kingdom," making it a prime target for attackers. If it gets compromised, the game is over.

NetAlertX is engineered from the ground up to prevent this. It's not just an app; it's a purpose-built **security appliance.** Its core design is built on a **zero-trust** philosophy, which is a modern way of saying we **assume a breach will happen** and plan for it. This isn't a single "lock on the door"; it's a **"defense-in-depth"** strategy, more like a medieval castle with a moat, high walls, and guards at every door.

Here's a breakdown of the defensive layers you get, right out of the box with the default configuration.

## Feature 1: The "Digital Concrete" Filesystem

**Methodology:** The core application and its system files are treated as immutable. Once built, the app's code is "set in concrete," preventing attackers from modifying it or planting malware.

* **Immutable Filesystem:** At runtime, the container's entire filesystem is set to `read_only: true`. The application code, system libraries, and all other files are literally frozen. This single control neutralizes a massive range of common attacks.

* **"Ownership-as-a-Lock" Pattern:** During the build, all system files are assigned to a special `readonly` user. This user has no login shell and no power to write to *any* files, even its own. It's a clever, defense-in-depth locking mechanism.

* **Data Segregation:** All user-specific data (like configurations and the device database) is stored completely outside the container in Docker volumes. The application is disposable; the data is persistent.

**What this means for you:** Even if an attacker gets in, they **cannot modify the application code or plant malware.** It's like the app is set in digital concrete.
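
A minimal Compose-style sketch of this layer, assuming the image name and data paths used elsewhere in these docs; it is illustrative, not the project's shipped `docker-compose.yml`:

```yaml
# Sketch only: an immutable container with user data kept in external volumes.
services:
  netalertx:
    image: ghcr.io/jokob-sk/netalertx:latest
    read_only: true              # freeze the container filesystem ("digital concrete")
    volumes:
      - ./config:/app/config     # configuration lives outside the container
      - ./db:/app/db             # so the app stays disposable and the data persists
```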

## Feature 2: Surgical, "Keycard-Only" Access

**Methodology:** The principle of least privilege is strictly enforced. Every process gets only the absolute minimum set of permissions needed for its specific job.

* **Non-Privileged Execution:** The entire NetAlertX stack runs as a dedicated, low-power, non-root user (`netalertx`). No "god mode" privileges are available to the application.

* **Kernel-Level Capability Revocation:** The container is launched with `cap_drop: - ALL`, which tells the Linux kernel to revoke *all* "root-like" special powers.

* **Binary-Specific Privileges (setcap):** This is the "keycard" metaphor in action. After revoking all powers, the system uses `setcap` to grant specific, necessary permissions *only* to the binaries that absolutely require them (like `nmap` and `arp-scan`). This means that even if an attacker compromises the web server, they can't start scanning the network. The web server's "keycard" doesn't open the "scanning" door.

**What this means for you:** A security breach is **firewalled.** An attacker who gets into the web UI **does not have the "keycard"** to start scanning your network or take over the system. The breach is contained.
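
A minimal Compose-style sketch of the least-privilege settings described above; the values are illustrative rather than the project's shipped configuration:

```yaml
# Sketch only: run unprivileged and drop every capability at the container level.
services:
  netalertx:
    image: ghcr.io/jokob-sk/netalertx:latest
    user: netalertx          # the dedicated non-root user
    cap_drop:
      - ALL                  # revoke all root-like capabilities from the container
    # Scanning binaries such as nmap and arp-scan receive narrowly scoped file
    # capabilities at image build time via setcap, so nothing else in the stack
    # ever needs elevated privileges.
```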

## Feature 3: Attack Surface "Amputation"

**Methodology:** The potential attack surface is aggressively minimized by removing every non-essential tool an attacker would want to use.

* **Package Manager Removal:** The `hardened` build stage explicitly deletes the Alpine package manager (`apk del apk-tools`). This makes it impossible for an attacker to simply `apk add` their malicious toolkit.

* **`sudo` Neutralization:** All `sudo` configurations are removed, and the `/usr/bin/sudo` command is replaced with a non-functional shim. Any attempt to escalate privileges this way will fail.

* **Build Toolchain Elimination:** The `Dockerfile` uses a multi-stage build. The initial "builder" stage, which contains all the powerful compilers (`gcc`) and development tools, is completely discarded. The final production image is lean and contains no build tools.

* **Minimal User & Group Files:** The `hardened` stage scrubs the system's `passwd` and `group` files, removing all default system users to minimize potential avenues for privilege escalation.

**What this means for you:** An attacker who breaks in finds themselves in an **empty room with no tools.** They have no `sudo` to get more power, no package manager to download weapons, and no compilers to build new ones.
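
A compressed sketch of the multi-stage hardening pattern described above; the stage names, base image tag, and shim contents are illustrative and not copied from the project's actual `Dockerfile`:

```dockerfile
# Sketch only: build tools live in a throwaway stage, the runtime stage is stripped down.
FROM alpine:3.20 AS builder
RUN apk add --no-cache gcc musl-dev make
# ... application artifacts are built here and later copied out ...

FROM alpine:3.20 AS hardened
# COPY --from=builder <built artifacts> <destination>   (paths are project-specific)
# Remove the package manager so nothing can be installed at runtime,
# drop any sudo configuration, and replace sudo with a shim that always fails.
RUN apk del apk-tools && \
    rm -rf /etc/sudoers /etc/sudoers.d && \
    printf '#!/bin/sh\nexit 1\n' > /usr/bin/sudo && \
    chmod 755 /usr/bin/sudo
```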

## Feature 4: "Self-Cleaning" Writable Areas

**Methodology:** All writable locations are treated as untrusted, temporary, and non-executable by default.

* **In-Memory Volatile Storage:** The `docker-compose.yml` configuration maps all temporary directories (e.g., `/tmp/log`, `/tmp/api`, `/tmp`) to in-memory `tmpfs` filesystems. They do not exist on the host's disk.

* **Volatile Data:** Because these locations exist only in RAM, their contents are **instantly and irrevocably erased** when the container is stopped. This provides a "self-cleaning" mechanism that purges any attacker-dropped files or payloads on every single restart.

* **Secure Mount Flags:** These in-memory mounts are configured with the `noexec` flag. This is a critical security control: it **prohibits the execution of any binary or script** from a location that is writable.

**What this means for you:** Any malicious file an attacker *does* manage to drop is **written in invisible, non-permanent ink.** The file is written to RAM, not disk, so it **vaporizes the instant you restart** the container. Even worse for them, the `noexec` flag means they **can't even run the file** in the first place.
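
A Compose-style sketch of this layer; the mount options and size are illustrative:

```yaml
# Sketch only: writable areas live in RAM and nothing dropped there can be executed.
services:
  netalertx:
    image: ghcr.io/jokob-sk/netalertx:latest
    read_only: true
    tmpfs:
      - /tmp:noexec,nosuid,size=64m   # wiped on every restart; binaries cannot run from it
```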

## Feature 5: Built-in Resource Guardrails

**Methodology:** The container is constrained by resource limits to function as a "good citizen" on the host system. This prevents a compromised or runaway process from consuming excessive resources, a common vector for Denial of Service (DoS) attacks.

* **Process Limiting:** The `docker-compose.yml` defines a `pids_limit: 512`. This directly mitigates "fork bomb" attacks, where a process attempts to crash the host by recursively spawning thousands of new processes.

* **Memory & CPU Limits:** The configuration file defines strict resource limits to prevent any single process from exhausting the host's available system resources.

**What this means for you:** NetAlertX is a "good neighbor" and **can't be used to crash your host machine.** Even if a process is compromised, it's in a digital straitjacket and **cannot** pull a "denial of service" attack by hogging all your CPU or memory.
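
A Compose-style sketch of these guardrails; `pids_limit: 512` comes from the description above, while the memory and CPU numbers are placeholder examples, not the shipped defaults:

```yaml
# Sketch only: cap processes, memory and CPU so a runaway process cannot starve the host.
services:
  netalertx:
    image: ghcr.io/jokob-sk/netalertx:latest
    pids_limit: 512      # defuses fork bombs
    mem_limit: 512m      # illustrative memory ceiling
    cpus: "1.0"          # illustrative CPU ceiling
```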

## Feature 6: The "Pre-Flight" Self-Check

**Methodology:** Before any services start, NetAlertX runs a comprehensive "pre-flight" check to ensure its own security and configuration are sound. It's like a built-in auditor who verifies its own defenses.

* **Active Self-Diagnosis:** On every single boot, NetAlertX runs a series of startup pre-checks, and it's fast. The entire self-check process typically completes in less than a second, letting you get to the web UI in about three seconds from startup.

* **Validates Its Own Security:** These checks actively inspect the other security features. For example, `check-0-permissions.sh` validates that all the "Digital Concrete" files are locked down and all the "Self-Cleaning" areas are writable, just as they should be. It also checks that the correct `netalertx` user is running the show, not `root`.

* **Catches Misconfigurations:** This system acts as a "safety inspector" that catches misconfigurations *before* they can become security holes. If you've made a mistake in your configuration (like a bad folder permission or incorrect network mode), NetAlertX will tell you in the logs *why* it can't start, rather than just failing silently.

**What this means for you:** The system is **self-aware and checks its own work.** You get instant feedback if a setting is wrong, and you get peace of mind on every single boot knowing all these security layers are **active and verified,** all in about one second.
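
To illustrate the idea, here is a tiny shell sketch in the spirit of the checks described above; it is not the actual `check-0-permissions.sh`, and the paths simply follow the volume layout used elsewhere in these docs:

```sh
#!/bin/sh
# Sketch only: verify the expected user and writable data paths before starting services.

if [ "$(id -un)" != "netalertx" ]; then
  echo "ERROR: expected to run as 'netalertx', not '$(id -un)'" >&2
  exit 1
fi

for dir in /app/config /app/db; do
  [ -w "$dir" ] || { echo "ERROR: $dir is not writable" >&2; exit 1; }
done

echo "Pre-flight checks passed"
```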

## Conclusion: Security by Default

No single security control is a silver bullet. The robust security posture of NetAlertX is achieved through **defense in depth**, layering these methodologies.

An adversary must not only gain initial access but must also find a way to write a payload to a non-executable, in-memory location, without access to any standard system tools, `sudo`, or a package manager. And they must do this while operating as an unprivileged user in a resource-limited environment where the application code is immutable and *actively checks its own integrity on every boot*.

@@ -1,62 +1,64 @@

# Sessions Section – Device View

The **Sessions Section** shows a device’s connection history. All data is automatically detected and **cannot be edited**.

![Sessions](/docs/img/DEVICES/device-sessions.png)

---

## Key Fields

| Field                | Description                                            | Editable?        |
| -------------------- | ------------------------------------------------------ | ---------------- |
| **First Connection** | The first time the device was detected on the network. | ❌ Auto-detected |
| **Last Connection**  | The most recent time the device was online.            | ❌ Auto-detected |

---

## How Session Information Works

### 1. Detecting New Devices

* Devices found in the current scan (**CurrentScan**) are checked against the **Devices** table, so new devices are automatically detected when they first appear on the network.
* For a new device, a **New Device** event is logged, capturing the MAC, IP, vendor, and detection time:

`INSERT INTO Events (eve_MAC, eve_IP, eve_DateTime, eve_EventType, eve_AdditionalInfo, eve_PendingAlertEmail) SELECT cur_MAC, cur_IP, '{startTime}', 'New Device', cur_Vendor, 1 FROM CurrentScan WHERE NOT EXISTS (SELECT 1 FROM Devices WHERE devMac = cur_MAC)`

### 2. Logging Connection Sessions

* Every time a device connects, a session entry is created in the **Sessions** table (if no prior session exists) with the `Still Connected` flag set to `1`:

`INSERT INTO Sessions (ses_MAC, ses_IP, ses_EventTypeConnection, ses_DateTimeConnection, ses_EventTypeDisconnection, ses_DateTimeDisconnection, ses_StillConnected, ses_AdditionalInfo) SELECT cur_MAC, cur_IP, 'Connected', '{startTime}', NULL, NULL, 1, cur_Vendor FROM CurrentScan WHERE NOT EXISTS (SELECT 1 FROM Sessions WHERE ses_MAC = cur_MAC)`

* Captured details include:

  * Connection type (wired or wireless)
  * Connection time
  * Device details (MAC, IP, vendor)

### 3. Handling Missing or Conflicting Data

* **Triggers:** Devices are flagged when session data is incomplete, inconsistent, or conflicting. Examples include:

  * Missing first or last connection timestamps
  * Overlapping session records
  * Sessions showing a device as connected and disconnected at the same time

* **System response:**

  * Automatically highlights affected devices in the **Sessions Section**.
  * Attempts to **infer missing information** from available data, such as:

    * Estimating first or last connection times from nearby session events
    * Correcting overlapping session periods
    * Reconciling conflicting connection statuses

* **User impact:**

  * Users do **not** need to manually fix session data.
  * The system ensures the device’s connection history remains as accurate as possible for monitoring and reporting.
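
As an illustration of the kind of inconsistency that gets flagged, a query along these lines (a sketch only, reusing the column names from the statements above, not code taken from NetAlertX) would surface sessions that are marked as closed but have no disconnection time:

```sql
-- Sketch only: sessions flagged as no longer connected but missing a disconnection timestamp.
SELECT ses_MAC,
       ses_IP,
       ses_DateTimeConnection
FROM   Sessions
WHERE  ses_StillConnected = 0
  AND  ses_DateTimeDisconnection IS NULL;
```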

### 4. Updating Sessions

* **Reconnect:** The existing session is updated with the new connection timestamp.
* **Disconnect:** The disconnection time is recorded and the `Still Connected` flag is set to `0`, marking the device as offline.

This session information feeds directly into **Monitoring → Presence**, providing a live view of which devices are currently online.
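
For illustration, the disconnect update could look roughly like the following; this is a sketch only, reusing the column names from the insert statements above (`'{stopTime}'` and `:mac` are placeholders in the spirit of `'{startTime}'`), not the exact statement NetAlertX runs:

```sql
-- Sketch only: close the open session of a device that is no longer seen in the current scan.
UPDATE Sessions
SET    ses_EventTypeDisconnection = 'Disconnected',
       ses_DateTimeDisconnection  = '{stopTime}',
       ses_StillConnected         = 0
WHERE  ses_MAC = :mac
  AND  ses_StillConnected = 1;
```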

![Device Presence](/docs/img/DEVICES/device-sessions-presence.png)