Mirror of https://github.com/gethomepage/homepage.git, synced 2026-03-30 23:02:39 -07:00

Compare commits

28 Commits
| SHA1 |
| --- |
| af75f33e62 |
| c32f1f1d59 |
| 11c6f587ab |
| a44e6a8f4b |
| 9b06761964 |
| 01e30f2ecb |
| d82fbc3026 |
| 535be37bef |
| 9326155ab8 |
| d87d347aa3 |
| 99b50b4faf |
| 1a22065c3a |
| e938c3ac1e |
| 11c3127aad |
| 94a934ec65 |
| ac997ea841 |
| 60eee26ac4 |
| c584d5d020 |
| 3d462e5958 |
| bd1c11a716 |
| bbb1ef5a55 |
| cb2c7b9147 |
| d65cb638be |
| 9367fd761b |
| 29993dad3a |
| a15b5bd692 |
| f7810cb67a |
| 02e1104452 |
@@ -8,6 +8,7 @@ The Kubernetes connectivity has the following requirements:

- Kubernetes 1.19+
- Metrics Service
- An Ingress controller
- Optionally: Gateway-API

The Kubernetes connection is configured in the `kubernetes.yaml` file. There are 3 modes to choose from:

@@ -19,6 +20,12 @@ The Kubernetes connection is configured in the `kubernetes.yaml` file. There are

mode: default
```

To enable Kubernetes gateway-api compatibility, add the following setting:

```yaml
route: gateway
```
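
Putting the two settings together, a minimal `kubernetes.yaml` that uses the default kubeconfig with gateway-api routing would look like this (a sketch combining only the options shown above):

```yaml
mode: default
route: gateway
```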

## Services

Once the Kubernetes connection is configured, individual services can be configured to pull statistics. Only CPU and Memory are currently supported.

@@ -140,6 +147,10 @@ spec:

If the `href` attribute is not present, Homepage will ignore the specific IngressRoute.

### Gateway API HttpRoute support

Homepage also features automatic service discovery for gateway-api. Service definitions are read by annotating the HttpRoute custom resource definition and are identical to the Ingress example defined in [Automatic Service Discovery](#automatic-service-discovery).
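
A minimal annotated HTTPRoute might look like the following sketch (resource names are hypothetical; the `gethomepage.dev/*` annotation keys are the ones read by the discovery code):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app # hypothetical
  annotations:
    gethomepage.dev/enabled: "true"
    gethomepage.dev/name: My App
    gethomepage.dev/group: Kubernetes
    gethomepage.dev/icon: my-app.png
```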

## Caveats

Similar to Docker service discovery, there is currently no rigid ordering to discovered services, and discovered services are displayed above those specified in `services.yaml`.
@@ -215,6 +215,15 @@ rules:

    verbs:
      - get
      - list
  # if using gateway api add the following:
  # - apiGroups:
  #     - gateway.networking.k8s.io
  #   resources:
  #     - httproutes
  #     - gateways
  #   verbs:
  #     - get
  #     - list
  - apiGroups:
      - metrics.k8s.io
    resources:
@@ -3,10 +3,12 @@ title: Beszel

description: Beszel Widget Configuration
---

Learn more about [Beszel]()
Learn more about [Beszel](https://github.com/henrygd/beszel)

The widget has two modes: a single system with detailed info if `systemId` is provided, or an overview of all systems if `systemId` is not provided.

The `systemId` can be found in the `id` field on the collections page of Beszel.

Allowed fields for 'overview' mode: `["systems", "up"]`
Allowed fields for a single system: `["name", "status", "updated", "cpu", "memory", "disk", "network"]`
@@ -98,6 +98,7 @@ You can also find a list of all available service widgets in the sidebar navigation

- [Plex](plex.md)
- [Portainer](portainer.md)
- [Prometheus](prometheus.md)
- [Prometheus Metric](prometheusmetric.md)
- [Prowlarr](prowlarr.md)
- [Proxmox](proxmox.md)
- [Proxmox Backup Server](proxmoxbackupserver.md)
docs/widgets/services/prometheusmetric.md (new file, 67 lines)

@@ -0,0 +1,67 @@

---
title: Prometheus Metric
description: Prometheus Metric Widget Configuration
---

Learn more about [Querying Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/).

This widget can show metrics for your service defined by PromQL queries which are requested from a running Prometheus instance.

Queries can be defined in the `metrics` array of the widget along with a label used to present the metric value. You can optionally specify a global `refreshInterval` in milliseconds and/or define the `refreshInterval` per metric. Inside the optional `format` object of a metric, various formatting styles and transformations can be applied (see below).

```yaml
widget:
  type: prometheusmetric
  url: https://prometheus.host.or.ip
  refreshInterval: 10000 # optional - in milliseconds, defaults to 10s
  metrics:
    - label: Metric 1
      query: alertmanager_alerts{state="active"}
    - label: Metric 2
      query: apiserver_storage_size_bytes{node="mynode"}
      format:
        type: bytes
    - label: Metric 3
      query: avg(prometheus_notifications_latency_seconds)
      format:
        type: number
        suffix: s
        options:
          maximumFractionDigits: 4
    - label: Metric 4
      query: time()
      refreshInterval: 1000 # will override global refreshInterval
      format:
        type: date
        scale: 1000
        options:
          timeStyle: medium
```

## Formatting

Supported values for `format.type` are `number`, `percent`, `bytes`, `bits`, `bbytes`, `bbits`, `byterate`, `bibyterate`, `bitrate`, `bibitrate`, `date`, `duration`, `relativeDate`, and `text` (the default).

The `dateStyle` and `timeStyle` options of the `date` format are passed directly to [Intl.DateTimeFormat](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/DateTimeFormat/DateTimeFormat) and the `style` and `numeric` options of `relativeDate` are passed to [Intl.RelativeTimeFormat](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/RelativeTimeFormat/RelativeTimeFormat). For the `number` format, options of [Intl.NumberFormat](https://developer.mozilla.org/de/docs/Web/JavaScript/Reference/Global_Objects/Intl/NumberFormat/NumberFormat) can be used, e.g. `maximumFractionDigits` or `minimumFractionDigits`.

### Data Transformation

You can manipulate your metric value with the following tools: `scale`, `prefix` and `suffix`, for example:

```yaml
- query: my_custom_metric{}
  label: Metric 1
  format:
    type: number
    scale: 1000 # multiplies value by a number or fraction string e.g. 1/16
- query: my_custom_metric{}
  label: Metric 2
  format:
    type: number
    prefix: "$" # prefixes value with given string
- query: my_custom_metric{}
  label: Metric 3
  format:
    type: number
    suffix: "€" # suffixes value with given string
```

docs/widgets/services/suwayomi.md (new file, 20 lines)

@@ -0,0 +1,20 @@

---
title: Suwayomi
description: Suwayomi Widget Configuration
---

Learn more about [Suwayomi](https://github.com/Suwayomi/Suwayomi-Server).

Allowed fields: `["download", "nondownload", "read", "unread", "downloadedread", "downloadedunread", "nondownloadedread", "nondownloadedunread"]`

The widget defaults to the first four fields above. If more than four fields are provided, only the first four are displayed.
Category IDs can be obtained from the URL when navigating to a category, `?tab={categoryID}`.

```yaml
widget:
  type: suwayomi
  url: http://suwayomi.host.or.ip
  username: username # optional
  password: password # optional
  category: 0 # optional, defaults to all categories
```
@@ -23,6 +23,12 @@ Set the `mode` in the `kubernetes.yaml` to `cluster`.

mode: default
```

To enable Kubernetes gateway-api compatibility, set `route` to `gateway`.

```yaml
route: gateway
```

## Widgets

The Kubernetes widget can show a high-level overview of the cluster,
@@ -121,6 +121,7 @@ nav:

- widgets/services/plex.md
- widgets/services/portainer.md
- widgets/services/prometheus.md
- widgets/services/prometheusmetric.md
- widgets/services/prowlarr.md
- widgets/services/proxmox.md
- widgets/services/proxmoxbackupserver.md
@@ -309,6 +309,16 @@

    "stopped": "Stopped",
    "total": "Total"
  },
  "suwayomi": {
    "download": "Downloaded",
    "nondownload": "Non-Downloaded",
    "read": "Read",
    "unread": "Unread",
    "downloadedread": "Downloaded & Read",
    "downloadedunread": "Downloaded & Unread",
    "nondownloadedread": "Non-Downloaded & Read",
    "nondownloadedunread": "Non-Downloaded & Unread"
  },
  "tailscale": {
    "address": "Address",
    "expires": "Expires",
@@ -1,6 +1,6 @@

import { CoreV1Api, Metrics } from "@kubernetes/client-node";

import getKubeConfig from "../../../../utils/config/kubernetes";
import getKubeArguments from "../../../../utils/config/kubernetes";
import { parseCpu, parseMemory } from "../../../../utils/kubernetes/kubernetes-utils";
import createLogger from "../../../../utils/logger";

@@ -20,7 +20,7 @@ export default async function handler(req, res) {

  const labelSelector = podSelector !== undefined ? podSelector : `${APP_LABEL}=${appName}`;

  try {
    const kc = getKubeConfig();
    const kc = getKubeArguments().config;
    if (!kc) {
      res.status(500).send({
        error: "No kubernetes configuration",
@@ -1,6 +1,6 @@

import { CoreV1Api } from "@kubernetes/client-node";

import getKubeConfig from "../../../../utils/config/kubernetes";
import getKubeArguments from "../../../../utils/config/kubernetes";
import createLogger from "../../../../utils/logger";

const logger = createLogger("kubernetesStatusService");

@@ -18,7 +18,7 @@ export default async function handler(req, res) {

  }
  const labelSelector = podSelector !== undefined ? podSelector : `${APP_LABEL}=${appName}`;
  try {
    const kc = getKubeConfig();
    const kc = getKubeArguments().config;
    if (!kc) {
      res.status(500).send({
        error: "No kubernetes configuration",
@@ -1,6 +1,6 @@

import { CoreV1Api, Metrics } from "@kubernetes/client-node";

import getKubeConfig from "../../../utils/config/kubernetes";
import getKubeArguments from "../../../utils/config/kubernetes";
import { parseCpu, parseMemory } from "../../../utils/kubernetes/kubernetes-utils";
import createLogger from "../../../utils/logger";

@@ -8,7 +8,7 @@ const logger = createLogger("kubernetes-widget");

export default async function handler(req, res) {
  try {
    const kc = getKubeConfig();
    const kc = getKubeArguments().config;
    if (!kc) {
      return res.status(500).send({
        error: "No kubernetes configuration",
@@ -6,26 +6,50 @@ import { KubeConfig } from "@kubernetes/client-node";

import checkAndCopyConfig, { CONF_DIR, substituteEnvironmentVars } from "utils/config/config";

export default function getKubeConfig() {
const extractKubeData = (config) => {
  // kubeconfig
  const kc = new KubeConfig();
  kc.loadFromCluster();

  // route
  let route = "ingress";
  if (config?.route === "gateway") {
    route = "gateway";
  }

  // traefik
  let traefik = true;
  if (config?.traefik === "disable") {
    traefik = false;
  }

  return {
    config: kc,
    route,
    traefik,
  };
};

export default function getKubeArguments() {
  checkAndCopyConfig("kubernetes.yaml");

  const configFile = path.join(CONF_DIR, "kubernetes.yaml");
  const rawConfigData = readFileSync(configFile, "utf8");
  const configData = substituteEnvironmentVars(rawConfigData);
  const config = yaml.load(configData);
  const kc = new KubeConfig();
  let kubeData;

  switch (config?.mode) {
    case "cluster":
      kc.loadFromCluster();
      kubeData = extractKubeData(config);
      break;
    case "default":
      kc.loadFromDefault();
      kubeData = extractKubeData(config);
      break;
    case "disabled":
    default:
      return null;
      kubeData = { config: null };
  }

  return kc;
  return kubeData;
}
@@ -3,12 +3,11 @@ import path from "path";

import yaml from "js-yaml";
import Docker from "dockerode";
import { CustomObjectsApi, NetworkingV1Api, ApiextensionsV1Api } from "@kubernetes/client-node";

import createLogger from "utils/logger";
import checkAndCopyConfig, { CONF_DIR, getSettings, substituteEnvironmentVars } from "utils/config/config";
import getDockerArguments from "utils/config/docker";
import getKubeConfig from "utils/config/kubernetes";
import { getUrlSchema, getRouteList } from "utils/kubernetes/kubernetes-routes";
import * as shvl from "utils/config/shvl";

const logger = createLogger("service-helpers");
@@ -151,33 +150,6 @@ export async function servicesFromDocker() {

  return mappedServiceGroups;
}

function getUrlFromIngress(ingress) {
  const urlHost = ingress.spec.rules[0].host;
  const urlPath = ingress.spec.rules[0].http.paths[0].path;
  const urlSchema = ingress.spec.tls ? "https" : "http";
  return `${urlSchema}://${urlHost}${urlPath}`;
}

export async function checkCRD(kc, name) {
  const apiExtensions = kc.makeApiClient(ApiextensionsV1Api);
  const exist = await apiExtensions
    .readCustomResourceDefinitionStatus(name)
    .then(() => true)
    .catch(async (error) => {
      if (error.statusCode === 403) {
        logger.error(
          "Error checking if CRD %s exists. Make sure to add the following permission to your RBAC: %d %s %s",
          name,
          error.statusCode,
          error.body.message,
        );
      }
      return false;
    });

  return exist;
}
export async function servicesFromKubernetes() {
  const ANNOTATION_BASE = "gethomepage.dev";
  const ANNOTATION_WIDGET_BASE = `${ANNOTATION_BASE}/widget.`;

@@ -186,128 +158,70 @@ export async function servicesFromKubernetes() {

  checkAndCopyConfig("kubernetes.yaml");

  try {
    const kc = getKubeConfig();
    if (!kc) {
    const routeList = await getRouteList(ANNOTATION_BASE);

    if (!routeList) {
      return [];
    }
    const networking = kc.makeApiClient(NetworkingV1Api);
    const crd = kc.makeApiClient(CustomObjectsApi);

    const ingressList = await networking
      .listIngressForAllNamespaces(null, null, null, null)
      .then((response) => response.body)
      .catch((error) => {
        logger.error("Error getting ingresses: %d %s %s", error.statusCode, error.body, error.response);
        logger.debug(error);
        return null;
      });

    const traefikContainoExists = await checkCRD(kc, "ingressroutes.traefik.containo.us");
    const traefikExists = await checkCRD(kc, "ingressroutes.traefik.io");

    const traefikIngressListContaino = await crd
      .listClusterCustomObject("traefik.containo.us", "v1alpha1", "ingressroutes")
      .then((response) => response.body)
      .catch(async (error) => {
        if (traefikContainoExists) {
          logger.error(
            "Error getting traefik ingresses from traefik.containo.us: %d %s %s",
            error.statusCode,
            error.body,
            error.response,
          );
          logger.debug(error);
        }

        return [];
      });

    const traefikIngressListIo = await crd
      .listClusterCustomObject("traefik.io", "v1alpha1", "ingressroutes")
      .then((response) => response.body)
      .catch(async (error) => {
        if (traefikExists) {
          logger.error(
            "Error getting traefik ingresses from traefik.io: %d %s %s",
            error.statusCode,
            error.body,
            error.response,
          );
          logger.debug(error);
        }

        return [];
      });

    const traefikIngressList = [...(traefikIngressListContaino?.items ?? []), ...(traefikIngressListIo?.items ?? [])];

    if (traefikIngressList.length > 0) {
      const traefikServices = traefikIngressList.filter(
        (ingress) => ingress.metadata.annotations && ingress.metadata.annotations[`${ANNOTATION_BASE}/href`],
      );
      ingressList.items.push(...traefikServices);
    }

    if (!ingressList) {
      return [];
    }
    const services = ingressList.items
      .filter(
        (ingress) =>
          ingress.metadata.annotations &&
          ingress.metadata.annotations[`${ANNOTATION_BASE}/enabled`] === "true" &&
          (!ingress.metadata.annotations[`${ANNOTATION_BASE}/instance`] ||
            ingress.metadata.annotations[`${ANNOTATION_BASE}/instance`] === instanceName ||
            `${ANNOTATION_BASE}/instance.${instanceName}` in ingress.metadata.annotations),
      )
      .map((ingress) => {
        let constructedService = {
          app: ingress.metadata.annotations[`${ANNOTATION_BASE}/app`] || ingress.metadata.name,
          namespace: ingress.metadata.namespace,
          href: ingress.metadata.annotations[`${ANNOTATION_BASE}/href`] || getUrlFromIngress(ingress),
          name: ingress.metadata.annotations[`${ANNOTATION_BASE}/name`] || ingress.metadata.name,
          group: ingress.metadata.annotations[`${ANNOTATION_BASE}/group`] || "Kubernetes",
          weight: ingress.metadata.annotations[`${ANNOTATION_BASE}/weight`] || "0",
          icon: ingress.metadata.annotations[`${ANNOTATION_BASE}/icon`] || "",
          description: ingress.metadata.annotations[`${ANNOTATION_BASE}/description`] || "",
          external: false,
          type: "service",
        };
        if (ingress.metadata.annotations[`${ANNOTATION_BASE}/external`]) {
          constructedService.external =
            String(ingress.metadata.annotations[`${ANNOTATION_BASE}/external`]).toLowerCase() === "true";
        }
        if (ingress.metadata.annotations[`${ANNOTATION_BASE}/pod-selector`] !== undefined) {
          constructedService.podSelector = ingress.metadata.annotations[`${ANNOTATION_BASE}/pod-selector`];
        }
        if (ingress.metadata.annotations[`${ANNOTATION_BASE}/ping`]) {
          constructedService.ping = ingress.metadata.annotations[`${ANNOTATION_BASE}/ping`];
        }
        if (ingress.metadata.annotations[`${ANNOTATION_BASE}/siteMonitor`]) {
          constructedService.siteMonitor = ingress.metadata.annotations[`${ANNOTATION_BASE}/siteMonitor`];
        }
        if (ingress.metadata.annotations[`${ANNOTATION_BASE}/statusStyle`]) {
          constructedService.statusStyle = ingress.metadata.annotations[`${ANNOTATION_BASE}/statusStyle`];
        }
        Object.keys(ingress.metadata.annotations).forEach((annotation) => {
          if (annotation.startsWith(ANNOTATION_WIDGET_BASE)) {
            shvl.set(
              constructedService,
              annotation.replace(`${ANNOTATION_BASE}/`, ""),
              ingress.metadata.annotations[annotation],
            );
    const services = await Promise.all(
      routeList
        .filter(
          (route) =>
            route.metadata.annotations &&
            route.metadata.annotations[`${ANNOTATION_BASE}/enabled`] === "true" &&
            (!route.metadata.annotations[`${ANNOTATION_BASE}/instance`] ||
              route.metadata.annotations[`${ANNOTATION_BASE}/instance`] === instanceName ||
              `${ANNOTATION_BASE}/instance.${instanceName}` in route.metadata.annotations),
        )
        .map(async (route) => {
          let constructedService = {
            app: route.metadata.annotations[`${ANNOTATION_BASE}/app`] || route.metadata.name,
            namespace: route.metadata.namespace,
            href: route.metadata.annotations[`${ANNOTATION_BASE}/href`] || (await getUrlSchema(route)),
            name: route.metadata.annotations[`${ANNOTATION_BASE}/name`] || route.metadata.name,
            group: route.metadata.annotations[`${ANNOTATION_BASE}/group`] || "Kubernetes",
            weight: route.metadata.annotations[`${ANNOTATION_BASE}/weight`] || "0",
            icon: route.metadata.annotations[`${ANNOTATION_BASE}/icon`] || "",
            description: route.metadata.annotations[`${ANNOTATION_BASE}/description`] || "",
            external: false,
            type: "service",
          };
          if (route.metadata.annotations[`${ANNOTATION_BASE}/external`]) {
            constructedService.external =
              String(route.metadata.annotations[`${ANNOTATION_BASE}/external`]).toLowerCase() === "true";
          }
        });
          if (route.metadata.annotations[`${ANNOTATION_BASE}/pod-selector`] !== undefined) {
            constructedService.podSelector = route.metadata.annotations[`${ANNOTATION_BASE}/pod-selector`];
          }
          if (route.metadata.annotations[`${ANNOTATION_BASE}/ping`]) {
            constructedService.ping = route.metadata.annotations[`${ANNOTATION_BASE}/ping`];
          }
          if (route.metadata.annotations[`${ANNOTATION_BASE}/siteMonitor`]) {
            constructedService.siteMonitor = route.metadata.annotations[`${ANNOTATION_BASE}/siteMonitor`];
          }
          if (route.metadata.annotations[`${ANNOTATION_BASE}/statusStyle`]) {
            constructedService.statusStyle = route.metadata.annotations[`${ANNOTATION_BASE}/statusStyle`];
          }
          Object.keys(route.metadata.annotations).forEach((annotation) => {
            if (annotation.startsWith(ANNOTATION_WIDGET_BASE)) {
              shvl.set(
                constructedService,
                annotation.replace(`${ANNOTATION_BASE}/`, ""),
                route.metadata.annotations[annotation],
              );
            }
          });

          try {
            constructedService = JSON.parse(substituteEnvironmentVars(JSON.stringify(constructedService)));
          } catch (e) {
            logger.error("Error attempting k8s environment variable substitution.");
            logger.debug(e);
          }

          return constructedService;
        });
    try {
      constructedService = JSON.parse(substituteEnvironmentVars(JSON.stringify(constructedService)));
    } catch (e) {
      logger.error("Error attempting k8s environment variable substitution.");
      logger.debug(e);
    }
    return constructedService;
  }),
);

    const mappedServiceGroups = [];
@@ -418,7 +332,7 @@ export function cleanServiceGroups(groups) {

  pointsLimit,
  diskUnits,

  // glances, customapi, iframe
  // glances, customapi, iframe, prometheusmetric
  refreshInterval,

  // hdhomerun

@@ -461,6 +375,9 @@ export function cleanServiceGroups(groups) {

  // opnsense, pfsense
  wan,

  // prometheusmetric
  metrics,

  // proxmox
  node,

@@ -646,6 +563,10 @@ export function cleanServiceGroups(groups) {

  if (type === "vikunja") {
    if (enableTaskList !== undefined) cleanedService.widget.enableTaskList = !!enableTaskList;
  }
  if (type === "prometheusmetric") {
    if (metrics) cleanedService.widget.metrics = metrics;
    if (refreshInterval) cleanedService.widget.refreshInterval = refreshInterval;
  }
}

return cleanedService;
src/utils/kubernetes/kubernetes-crd.js (new file, 0 lines)

src/utils/kubernetes/kubernetes-routes.js (new file, 211 lines)

@@ -0,0 +1,211 @@

import { CustomObjectsApi, NetworkingV1Api, CoreV1Api, ApiextensionsV1Api } from "@kubernetes/client-node";

import getKubeArguments from "utils/config/kubernetes";
import createLogger from "utils/logger";

const logger = createLogger("service-helpers");

const kubeArguments = getKubeArguments();
const kc = kubeArguments.config;

const apiGroup = "gateway.networking.k8s.io";
const version = "v1";

let crd;
let core;
let networking;
let routingType;
let traefik;

export async function checkCRD(name) {
  const apiExtensions = kc.makeApiClient(ApiextensionsV1Api);
  const exist = await apiExtensions
    .readCustomResourceDefinitionStatus(name)
    .then(() => true)
    .catch(async (error) => {
      if (error.statusCode === 403) {
        logger.error(
          "Error checking if CRD %s exists. Make sure to add the following permission to your RBAC: %d %s %s",
          name,
          error.statusCode,
          error.body.message,
        );
      }
      return false;
    });

  return exist;
}

const getSchemaFromGateway = async (gatewayRef) => {
  const schema = await crd
    .getNamespacedCustomObject(apiGroup, version, gatewayRef.namespace, "gateways", gatewayRef.name)
    .then((response) => {
      const listner = response.body.spec.listeners.filter((listener) => listener.name === gatewayRef.sectionName)[0];
      return listner.protocol.toLowerCase();
    })
    .catch((error) => {
      logger.error("Error getting gateways: %d %s %s", error.statusCode, error.body, error.response);
      logger.debug(error);
      return "";
    });
  return schema;
};

async function getUrlFromHttpRoute(ingress) {
  const urlHost = ingress.spec.hostnames[0];
  const urlPath = ingress.spec.rules[0].matches[0].path.value;
  const urlSchema = (await getSchemaFromGateway(ingress.spec.parentRefs[0])) ? "https" : "http";
  return `${urlSchema}://${urlHost}${urlPath}`;
}

function getUrlFromIngress(ingress) {
  const urlHost = ingress.spec.rules[0].host;
  const urlPath = ingress.spec.rules[0].http.paths[0].path;
  const urlSchema = ingress.spec.tls ? "https" : "http";
  return `${urlSchema}://${urlHost}${urlPath}`;
}

async function getHttpRouteList() {
  // httproutes
  const getHttpRoute = async (namespace) =>
    crd
      .listNamespacedCustomObject(apiGroup, version, namespace, "httproutes")
      .then((response) => {
        const [httpRoute] = response.body.items;
        return httpRoute;
      })
      .catch((error) => {
        logger.error("Error getting httproutes: %d %s %s", error.statusCode, error.body, error.response);
        logger.debug(error);
        return null;
      });

  // namespaces
  const namespaces = await core
    .listNamespace()
    .then((response) => response.body.items.map((ns) => ns.metadata.name))
    .catch((error) => {
      logger.error("Error getting namespaces: %d %s %s", error.statusCode, error.body, error.response);
      logger.debug(error);
      return null;
    });

  let httpRouteList = [];
  if (namespaces) {
    const httpRouteListUnfiltered = await Promise.all(
      namespaces.map(async (namespace) => {
        const httpRoute = await getHttpRoute(namespace);
        return httpRoute;
      }),
    );

    httpRouteList = httpRouteListUnfiltered.filter((httpRoute) => httpRoute !== undefined);
  }
  return httpRouteList;
}

async function getIngressList(ANNOTATION_BASE) {
  const ingressList = await networking
    .listIngressForAllNamespaces(null, null, null, null)
    .then((response) => response.body)
    .catch((error) => {
      logger.error("Error getting ingresses: %d %s %s", error.statusCode, error.body, error.response);
      logger.debug(error);
      return null;
    });

  if (traefik) {
    const traefikContainoExists = await checkCRD("ingressroutes.traefik.containo.us");
    const traefikExists = await checkCRD("ingressroutes.traefik.io");

    const traefikIngressListContaino = await crd
      .listClusterCustomObject("traefik.containo.us", "v1alpha1", "ingressroutes")
      .then((response) => response.body)
      .catch(async (error) => {
        if (traefikContainoExists) {
          logger.error(
            "Error getting traefik ingresses from traefik.containo.us: %d %s %s",
            error.statusCode,
            error.body,
            error.response,
          );
          logger.debug(error);
        }

        return [];
      });

    const traefikIngressListIo = await crd
      .listClusterCustomObject("traefik.io", "v1alpha1", "ingressroutes")
      .then((response) => response.body)
      .catch(async (error) => {
        if (traefikExists) {
          logger.error(
            "Error getting traefik ingresses from traefik.io: %d %s %s",
            error.statusCode,
            error.body,
            error.response,
          );
          logger.debug(error);
        }

        return [];
      });

    const traefikIngressList = [...(traefikIngressListContaino?.items ?? []), ...(traefikIngressListIo?.items ?? [])];

    if (traefikIngressList.length > 0) {
      const traefikServices = traefikIngressList.filter(
        (ingress) => ingress.metadata.annotations && ingress.metadata.annotations[`${ANNOTATION_BASE}/href`],
      );
      ingressList.items.push(...traefikServices);
    }
  }

  return ingressList.items;
}

export async function getRouteList(ANNOTATION_BASE) {
  let routeList = [];

  if (!kc) {
    return [];
  }

  crd = kc.makeApiClient(CustomObjectsApi);
  core = kc.makeApiClient(CoreV1Api);
  networking = kc.makeApiClient(NetworkingV1Api);

  routingType = kubeArguments.route;
  traefik = kubeArguments.traefik;

  switch (routingType) {
    case "ingress":
      routeList = await getIngressList(ANNOTATION_BASE);
      break;
    case "gateway":
      routeList = await getHttpRouteList();
      break;
    default:
      routeList = await getIngressList(ANNOTATION_BASE);
  }

  return routeList;
}

export async function getUrlSchema(route) {
  let urlSchema;

  switch (routingType) {
    case "ingress":
      urlSchema = getUrlFromIngress(route);
      break;
    case "gateway":
      urlSchema = await getUrlFromHttpRoute(route);
      break;
    default:
      urlSchema = getUrlFromIngress(route);
  }
  return urlSchema;
}
@@ -23,7 +23,7 @@ export default async function genericProxyHandler(req, res, map) {

  formatApiCall(widgets[widget.type].api, { endpoint, ...widget }).replace(/(?<=\?.*)\?/g, "&"),
);

const headers = req.extraHeaders ?? widget.headers ?? {};
const headers = req.extraHeaders ?? widget.headers ?? widgets[widget.type].headers ?? {};

if (widget.username && widget.password) {
  headers.Authorization = `Basic ${Buffer.from(`${widget.username}:${widget.password}`).toString("base64")}`;

@@ -75,7 +75,13 @@ export default async function genericProxyHandler(req, res, map) {

  url.port ? `:${url.port}` : "",
  url.pathname,
);
return res.status(status).json({ error: { message: "HTTP Error", url: sanitizeErrorURL(url), resultData } });
return res.status(status).json({
  error: {
    message: "HTTP Error",
    url: sanitizeErrorURL(url),
    resultData: Buffer.isBuffer(resultData) ? Buffer.from(resultData).toString() : resultData,
  },
});
}

return res.status(status).send(resultData);
|
||||
|
||||
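The new error path decodes a Buffer before serializing it, since `JSON.stringify` would otherwise render it as `{"type":"Buffer","data":[...]}`. The behaviour can be checked standalone; `toErrorPayload` is a hypothetical helper mirroring the expression in the handler, not part of the codebase.

```javascript
// Hypothetical helper mirroring the handler's Buffer handling: decode raw
// response bytes to a string so the JSON error body stays readable.
function toErrorPayload(resultData) {
  return Buffer.isBuffer(resultData) ? Buffer.from(resultData).toString() : resultData;
}

console.log(toErrorPayload(Buffer.from("upstream said no"))); // upstream said no
console.log(toErrorPayload({ error: "already parsed" }));
```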
@@ -54,7 +54,7 @@ export default async function beszelProxyHandler(req, res) {
   if (!token) {
     [status, token] = await login(loginUrl, widget.username, widget.password, service);
     if (status !== 200) {
-      logger.debug(`HTTTP ${status} logging into npm api: ${token}`);
+      logger.debug(`HTTP ${status} logging into npm api: ${token}`);
       return res.status(status).send(token);
     }
   }
@@ -68,12 +68,12 @@ export default async function beszelProxyHandler(req, res) {
   });

   if (status === 403) {
-    logger.debug(`HTTTP ${status} retrieving data from npm api, logging in and trying again.`);
+    logger.debug(`HTTP ${status} retrieving data from npm api, logging in and trying again.`);
     cache.del(`${tokenCacheKey}.${service}`);
     [status, token] = await login(loginUrl, widget.username, widget.password, service);

     if (status !== 200) {
-      logger.debug(`HTTTP ${status} logging into npm api: ${data}`);
+      logger.debug(`HTTP ${status} logging into npm api: ${data}`);
       return res.status(status).send(data);
     }
@@ -21,7 +21,7 @@ export default async function calendarProxyHandler(req, res) {
   if (contentType) res.setHeader("Content-Type", contentType);

   if (status !== 200) {
-    logger.debug(`HTTTP ${status} retrieving data from integration URL ${integration.url} : ${data}`);
+    logger.debug(`HTTP ${status} retrieving data from integration URL ${integration.url} : ${data}`);
     return res.status(status).send(data);
   }
@@ -95,6 +95,7 @@ const components = {
   plex: dynamic(() => import("./plex/component")),
   portainer: dynamic(() => import("./portainer/component")),
   prometheus: dynamic(() => import("./prometheus/component")),
+  prometheusmetric: dynamic(() => import("./prometheusmetric/component")),
   prowlarr: dynamic(() => import("./prowlarr/component")),
   proxmox: dynamic(() => import("./proxmox/component")),
   pterodactyl: dynamic(() => import("./pterodactyl/component")),
@@ -113,6 +114,7 @@ const components = {
   stocks: dynamic(() => import("./stocks/component")),
   strelaysrv: dynamic(() => import("./strelaysrv/component")),
   swagdashboard: dynamic(() => import("./swagdashboard/component")),
+  suwayomi: dynamic(() => import("./suwayomi/component")),
   tailscale: dynamic(() => import("./tailscale/component")),
   tandoor: dynamic(() => import("./tandoor/component")),
   tautulli: dynamic(() => import("./tautulli/component")),
@@ -56,7 +56,7 @@ export default async function npmProxyHandler(req, res) {
   if (!token) {
     [status, token] = await login(loginUrl, widget.username, widget.password, service);
     if (status !== 200) {
-      logger.debug(`HTTTP ${status} logging into npm api: ${token}`);
+      logger.debug(`HTTP ${status} logging into npm api: ${token}`);
       return res.status(status).send(token);
     }
   }
@@ -70,12 +70,12 @@ export default async function npmProxyHandler(req, res) {
   });

   if (status === 403) {
-    logger.debug(`HTTTP ${status} retrieving data from npm api, logging in and trying again.`);
+    logger.debug(`HTTP ${status} retrieving data from npm api, logging in and trying again.`);
     cache.del(`${tokenCacheKey}.${service}`);
     [status, token] = await login(loginUrl, widget.username, widget.password, service);

     if (status !== 200) {
-      logger.debug(`HTTTP ${status} logging into npm api: ${data}`);
+      logger.debug(`HTTP ${status} logging into npm api: ${data}`);
       return res.status(status).send(data);
     }
@@ -138,7 +138,7 @@ export default async function omadaProxyHandler(req, res) {
     const sitesResponseData = JSON.parse(data);

     if (status !== 200 || sitesResponseData.errorCode > 0) {
-      logger.debug(`HTTTP ${status} getting sites list: ${sitesResponseData.msg}`);
+      logger.debug(`HTTP ${status} getting sites list: ${sitesResponseData.msg}`);
       return res
         .status(status)
         .json({ error: { message: "Error getting sites list", url, data: sitesResponseData } });
115
src/widgets/prometheusmetric/component.jsx
Normal file
@@ -0,0 +1,115 @@
import { useTranslation } from "next-i18next";

import Container from "components/services/widget/container";
import Block from "components/services/widget/block";
import useWidgetAPI from "utils/proxy/use-widget-api";

function formatValue(t, metric, rawValue) {
  if (!rawValue) return "-";

  let value = rawValue;

  // Scale the value. Accepts either a number to multiply by or a string
  // like "12/345".
  const scale = metric?.format?.scale;
  if (typeof scale === "number") {
    value *= scale;
  } else if (typeof scale === "string" && scale.includes("/")) {
    const parts = scale.split("/");
    const numerator = parts[0] ? parseFloat(parts[0]) : 1;
    const denominator = parts[1] ? parseFloat(parts[1]) : 1;
    value = (value * numerator) / denominator;
  } else {
    value = parseFloat(value);
  }

  // Format the value using a known type and optional options. Leave the value
  // untouched when no format type is configured.
  switch (metric?.format?.type) {
    case "text":
    case undefined:
      break;
    default:
      value = t(`common.${metric.format.type}`, { value, ...metric.format?.options });
  }

  // Apply fixed prefix.
  const prefix = metric?.format?.prefix;
  if (prefix) {
    value = `${prefix}${value}`;
  }

  // Apply fixed suffix.
  const suffix = metric?.format?.suffix;
  if (suffix) {
    value = `${value}${suffix}`;
  }

  return value;
}

export default function Component({ service }) {
  const { t } = useTranslation();

  const { widget } = service;

  const { metrics = [], refreshInterval = 10000 } = widget;

  let prometheusmetricError;

  const prometheusmetricData = new Map(
    metrics.slice(0, 4).map((metric) => {
      // disable the rule that hooks should not be called from a callback,
      // because we don't need a strong guarantee of hook execution order here.
      // eslint-disable-next-line react-hooks/rules-of-hooks
      const { data: resultData, error: resultError } = useWidgetAPI(widget, "query", {
        query: metric.query,
        refreshInterval: Math.max(1000, metric.refreshInterval ?? refreshInterval),
      });
      if (resultError) {
        prometheusmetricError = resultError;
      }
      return [metric.key ?? metric.label, resultData];
    }),
  );

  if (prometheusmetricError) {
    return <Container service={service} error={prometheusmetricError} />;
  }

  if (!prometheusmetricData) {
    return (
      <Container service={service}>
        {metrics.slice(0, 4).map((item) => (
          <Block label={item.label} key={item.label} />
        ))}
      </Container>
    );
  }

  function getResultValue(data) {
    // Fetches the first metric result from the Prometheus query result data.
    // The first element in the result value is the timestamp which is ignored here.
    const resultType = data?.data?.resultType;
    const result = data?.data?.result;

    switch (resultType) {
      case "vector":
        return result?.[0]?.value?.[1];
      case "scalar":
        return result?.[1];
      default:
        return "";
    }
  }

  return (
    <Container service={service}>
      {metrics.map((metric) => (
        <Block
          label={metric.label}
          key={metric.key ?? metric.label}
          value={formatValue(t, metric, getResultValue(prometheusmetricData.get(metric.key ?? metric.label)))}
        />
      ))}
    </Container>
  );
}
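The component's query-result extraction and scale parsing can be exercised outside React. The re-implementations below mirror `getResultValue` and the scale branch of `formatValue`; the sample object mimics a Prometheus `/api/v1/query` instant-vector response, and `applyScale` is a hypothetical name for illustration.

```javascript
// Picks the sample value out of a Prometheus instant-query response,
// skipping the leading timestamp, as getResultValue does.
function getResultValue(data) {
  const resultType = data?.data?.resultType;
  const result = data?.data?.result;
  switch (resultType) {
    case "vector":
      return result?.[0]?.value?.[1];
    case "scalar":
      return result?.[1];
    default:
      return "";
  }
}

// Scale parsing as in formatValue: a number multiplies, a "num/den"
// string divides (missing halves default to 1).
function applyScale(scale, rawValue) {
  let value = parseFloat(rawValue);
  if (typeof scale === "number") {
    value *= scale;
  } else if (typeof scale === "string" && scale.includes("/")) {
    const [num, den] = scale.split("/");
    value = (value * (num ? parseFloat(num) : 1)) / (den ? parseFloat(den) : 1);
  }
  return value;
}

const sample = {
  data: { resultType: "vector", result: [{ metric: {}, value: [1700000000, "0.5"] }] },
};

console.log(getResultValue(sample)); // 0.5
console.log(applyScale("1/2", getResultValue(sample))); // 0.25
```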
16
src/widgets/prometheusmetric/widget.js
Normal file
@@ -0,0 +1,16 @@
import genericProxyHandler from "utils/proxy/handlers/generic";

const widget = {
  api: "{url}/api/v1/{endpoint}",
  proxyHandler: genericProxyHandler,

  mappings: {
    query: {
      method: "GET",
      endpoint: "query",
      params: ["query"],
    },
  },
};

export default widget;
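With the widget registered, a `services.yaml` entry along these lines would drive it. Field names (`metrics`, `label`, `query`, optional `format`) are taken from the component above; the URL and queries are placeholders, not a verified configuration.

```yaml
- Prometheus:
    widget:
      type: prometheusmetric
      url: http://prometheus.example.local:9090
      metrics:
        - label: Targets up
          query: count(up == 1)
        - label: Active memory
          query: node_memory_Active_bytes
          format:
            type: bytes
```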
40
src/widgets/suwayomi/component.jsx
Normal file
@@ -0,0 +1,40 @@
import { useTranslation } from "next-i18next";

import Container from "components/services/widget/container";
import Block from "components/services/widget/block";
import useWidgetAPI from "utils/proxy/use-widget-api";

export default function Component({ service }) {
  const { t } = useTranslation();

  const { widget } = service;

  const { data: suwayomiData, error: suwayomiError } = useWidgetAPI(widget);

  if (suwayomiError) {
    return <Container service={service} error={suwayomiError} />;
  }

  if (!suwayomiData) {
    if (!widget.fields || widget.fields.length === 0) {
      widget.fields = ["download", "nondownload", "read", "unread"];
    } else if (widget.fields.length > 4) {
      widget.fields = widget.fields.slice(0, 4);
    }
    return (
      <Container service={service}>
        {widget.fields.map((field) => (
          <Block key={field} label={`suwayomi.${field}`} />
        ))}
      </Container>
    );
  }

  return (
    <Container service={service}>
      {suwayomiData.map((data) => (
        <Block key={data.label} label={data.label} value={t("common.number", { value: data.count })} />
      ))}
    </Container>
  );
}
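The fields normalization in the component (default set of four, cap at four) can be sketched as a pure helper; `normalizeFields` is a hypothetical name, and unlike the component it returns a new array instead of mutating `widget.fields`.

```javascript
// Mirrors the component's defaulting and capping of widget.fields.
function normalizeFields(fields) {
  if (!fields || fields.length === 0) {
    return ["download", "nondownload", "read", "unread"];
  }
  return fields.length > 4 ? fields.slice(0, 4) : fields;
}

console.log(normalizeFields(undefined)); // default four fields
console.log(normalizeFields(["read", "unread", "download", "nondownload", "downloadedread"])); // capped at four
```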
175
src/widgets/suwayomi/proxy.js
Normal file
@@ -0,0 +1,175 @@
import { httpProxy } from "utils/proxy/http";
import { formatApiCall } from "utils/proxy/api-helpers";
import getServiceWidget from "utils/config/service-helpers";
import createLogger from "utils/logger";
import widgets from "widgets/widgets";

const proxyName = "suwayomiProxyHandler";
const logger = createLogger(proxyName);

const countsToExtract = {
  download: {
    condition: (c) => c.isDownloaded,
    gqlCondition: "isDownloaded: true",
  },
  nondownload: {
    condition: (c) => !c.isDownloaded,
    gqlCondition: "isDownloaded: false",
  },
  read: {
    condition: (c) => c.isRead,
    gqlCondition: "isRead: true",
  },
  unread: {
    condition: (c) => !c.isRead,
    gqlCondition: "isRead: false",
  },
  downloadedread: {
    condition: (c) => c.isDownloaded && c.isRead,
    gqlCondition: "isDownloaded: true, isRead: true",
  },
  downloadedunread: {
    condition: (c) => c.isDownloaded && !c.isRead,
    gqlCondition: "isDownloaded: true, isRead: false",
  },
  nondownloadedread: {
    condition: (c) => !c.isDownloaded && c.isRead,
    gqlCondition: "isDownloaded: false, isRead: true",
  },
  nondownloadedunread: {
    condition: (c) => !c.isDownloaded && !c.isRead,
    gqlCondition: "isDownloaded: false, isRead: false",
  },
};

function makeBody(fields, category = "all") {
  if (Number.isNaN(Number(category))) {
    let query = "";
    fields.forEach((field) => {
      query += `
      ${field}: chapters(
        condition: {${countsToExtract[field].gqlCondition}}
        filter: {inLibrary: {equalTo: true}}
      ) {
        totalCount
      }`;
    });
    return JSON.stringify({
      operationName: "Counts",
      query: `
      query Counts {
        ${query}
      }`,
    });
  }

  return JSON.stringify({
    operationName: "category",
    query: `
    query category($id: Int!) {
      category(id: $id) {
        # name
        mangas {
          nodes {
            chapters {
              nodes {
                isRead
                isDownloaded
              }
            }
          }
        }
      }
    }`,
    variables: {
      id: Number(category),
    },
  });
}

function extractCounts(responseJSON, fields) {
  if (!("category" in responseJSON.data)) {
    return fields.map((field) => ({
      count: responseJSON.data[field].totalCount,
      label: `suwayomi.${field}`,
    }));
  }
  const tmp = responseJSON.data.category.mangas.nodes.reduce(
    (accumulator, manga) => {
      manga.chapters.nodes.forEach((chapter) => {
        fields.forEach((field, i) => {
          if (countsToExtract[field].condition(chapter)) {
            accumulator[i] += 1;
          }
        });
      });
      return accumulator;
    },
    [0, 0, 0, 0],
  );
  return fields.map((field, i) => ({
    count: tmp[i],
    label: `suwayomi.${field}`,
  }));
}

export default async function suwayomiProxyHandler(req, res) {
  const { group, service, endpoint } = req.query;

  if (!group || !service) {
    logger.debug("Invalid or missing service '%s' or group '%s'", service, group);
    return res.status(400).json({ error: "Invalid proxy service type" });
  }

  const widget = await getServiceWidget(group, service);

  if (!widget) {
    logger.debug("Invalid or missing widget for service '%s' in group '%s'", service, group);
    return res.status(400).json({ error: "Invalid proxy service type" });
  }

  if (!widget.fields || widget.fields.length === 0) {
    widget.fields = ["download", "nondownload", "read", "unread"];
  } else if (widget.fields.length > 4) {
    widget.fields = widget.fields.slice(0, 4);
  }

  const url = new URL(formatApiCall(widgets[widget.type].api, { endpoint, ...widget }));

  const body = makeBody(widget.fields, widget.category);

  const headers = {
    "Content-Type": "application/json",
  };

  if (widget.username && widget.password) {
    headers.Authorization = `Basic ${Buffer.from(`${widget.username}:${widget.password}`).toString("base64")}`;
  }

  const [status, contentType, data] = await httpProxy(url, {
    method: "POST",
    body,
    headers,
  });

  if (status === 401) {
    logger.error("Invalid or missing username or password for service '%s' in group '%s'", service, group);
    return res.status(status).send({ error: { message: "401: unauthorized, username or password is incorrect." } });
  }

  if (status !== 200) {
    logger.error(
      "Error getting data from Suwayomi for service '%s' in group '%s': %d. Data: %s",
      service,
      group,
      status,
      data,
    );
    return res.status(status).send({ error: { message: "Error getting data. body: %s, data: %s", body, data } });
  }

  const returnData = extractCounts(JSON.parse(data), widget.fields);

  if (contentType) res.setHeader("Content-Type", contentType);
  return res.status(status).send(returnData);
}
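The per-category tally in `extractCounts` can be checked against a small fabricated response shaped like the GraphQL query above (`mangas.nodes[].chapters.nodes[]` with `isRead`/`isDownloaded` flags). The snippet trims `countsToExtract` to two fields and uses a hypothetical `tally` helper with the same accumulation logic.

```javascript
// Same per-field predicates as countsToExtract, trimmed to two fields.
const countsToExtract = {
  read: { condition: (c) => c.isRead },
  unread: { condition: (c) => !c.isRead },
};

// Tallies chapters across all mangas in a category, as extractCounts does.
function tally(category, fields) {
  const acc = fields.map(() => 0);
  category.mangas.nodes.forEach((manga) => {
    manga.chapters.nodes.forEach((chapter) => {
      fields.forEach((field, i) => {
        if (countsToExtract[field].condition(chapter)) acc[i] += 1;
      });
    });
  });
  return fields.map((field, i) => ({ label: `suwayomi.${field}`, count: acc[i] }));
}

const category = {
  mangas: {
    nodes: [{ chapters: { nodes: [{ isRead: true }, { isRead: false }, { isRead: false }] } }],
  },
};

console.log(tally(category, ["read", "unread"])); // read: 1, unread: 2
```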
8
src/widgets/suwayomi/widget.js
Normal file
@@ -0,0 +1,8 @@
import suwayomiProxyHandler from "./proxy";

const widget = {
  api: "{url}/api/graphql",
  proxyHandler: suwayomiProxyHandler,
};

export default widget;
@@ -205,7 +205,7 @@ export default function Component({ service }) {
       <div className="flex flex-col pb-1 mx-1">
         {playing.map((session) => (
           <SessionEntry
-            key={session.Id}
+            key={session.session_key}
             session={session}
             enableUser={enableUser}
             showEpisodeNumber={showEpisodeNumber}
@@ -87,6 +87,7 @@ import plantit from "./plantit/widget";
 import plex from "./plex/widget";
 import portainer from "./portainer/widget";
 import prometheus from "./prometheus/widget";
+import prometheusmetric from "./prometheusmetric/widget";
 import prowlarr from "./prowlarr/widget";
 import proxmox from "./proxmox/widget";
 import pterodactyl from "./pterodactyl/widget";
@@ -104,6 +105,7 @@ import stash from "./stash/widget";
 import stocks from "./stocks/widget";
 import strelaysrv from "./strelaysrv/widget";
 import swagdashboard from "./swagdashboard/widget";
+import suwayomi from "./suwayomi/widget";
 import tailscale from "./tailscale/widget";
 import tandoor from "./tandoor/widget";
 import tautulli from "./tautulli/widget";
@@ -218,6 +220,7 @@ const widgets = {
   plex,
   portainer,
   prometheus,
+  prometheusmetric,
   prowlarr,
   proxmox,
   pterodactyl,
@@ -236,6 +239,7 @@ const widgets = {
   stocks,
   strelaysrv,
   swagdashboard,
+  suwayomi,
   tailscale,
   tandoor,
   tautulli,