253 Commits

Author SHA1 Message Date
2ab1892b6e Removed comments
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-26 13:58:16 +00:00
593317fd13 Parse logs of CRI containers
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-26 13:57:45 +00:00
4dfd89d78e Bump promtail
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-26 13:53:26 +00:00
e92853b736 Added nginx annotations to skooner ingress
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-26 12:39:02 +00:00
635246317f Merge branch 'master' of https://git.cluster.fun/AverageMarcus/cluster.fun
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-26 12:16:03 +00:00
2ea466ed83 Added VPA
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-26 11:56:24 +00:00
18f748f010 Add scrape to prom svc 2021-12-24 15:55:56 +00:00
7379a43178 Switched back to docker.cluster.fun
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-24 09:58:30 +00:00
9d1f2528c5 Switch harbor domain
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-24 09:26:32 +00:00
3ae4e1142f Upgraded to Kube 1.23
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-24 09:20:11 +00:00
e18f77caaa Bump matrix version
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-23 20:37:59 +00:00
5572056c9b Switch matrix chart to argo
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-23 19:20:22 +00:00
987eb5096c Bump cert-manager version
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-23 19:09:49 +00:00
211f7b7251 Migrate cert-manager chart to Argo
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-23 19:04:38 +00:00
513625074a Removed Tekton
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-23 18:47:00 +00:00
88f3132326 Set tailscale image to always pull
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-23 18:35:57 +00:00
00b51cd6a8 Set unique hostnames for tailscale
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-23 18:30:28 +00:00
786f724823 Added cronjob label to job template
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-19 18:03:37 +00:00
659771d4b9 Scrape nginx metrics on service
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-19 17:08:53 +00:00
3baa5597fa Increase allowed memory for well-known
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-12-19 16:50:48 +00:00
04af487324 Remove geoip
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-28 13:37:17 +00:00
b9ed0a571e Added geoip to logs
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-28 13:31:39 +00:00
53f5a5c062 Enable nginx ingress metrics
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-28 11:38:35 +00:00
45d8fc0328 Added nginx plugin
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-28 11:07:05 +00:00
207376a89c Added nginx log parsing
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-28 08:51:51 +00:00
fd148bdd75 Correctly drop weave-net logs 2021-11-27 21:11:52 +00:00
c676fad20a Add more promtail filtering 2021-11-27 21:10:14 +00:00
769fdff851 Updated promtail config
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-27 21:02:05 +00:00
8bfcfbe770 Updated promtail config
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-27 20:16:30 +00:00
a49bb8e58e Added loki mapping with port
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-27 20:05:02 +00:00
b489562c57 Re-enable promtail
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-27 20:04:01 +00:00
513af4f9c5 Disable promtail
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-27 14:27:04 +00:00
8ce2c08c34 Updated promtail config
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-27 14:14:58 +00:00
796f891f17 Updated Loki and Prometheus config
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-27 13:45:40 +00:00
ad33387c26 Added skooner
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-27 11:22:06 +00:00
d6ad4bca2e Set bodysize on nextcloud ingress
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-18 08:18:02 +00:00
2515940ee4 Bumped outline
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-18 07:20:36 +00:00
0dc864eb63 Added podify to non-auth proxy config
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-13 10:14:48 +00:00
f027c5075b Updated proxies
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-13 09:27:13 +00:00
089aef13d3 Added readarr ingress
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-13 09:04:09 +00:00
c749096aa0 Updated cookie secret 2021-11-12 21:18:15 +00:00
fb542ff995 Updated dashboard oauth proxy 2021-11-12 21:04:25 +00:00
a14d7bf5bf Upgrade outline
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-12 07:46:01 +00:00
02ec582bd9 Updated proxy
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-08 06:29:28 +00:00
9277f202e9 Added reloader annotation
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-07 05:36:03 +00:00
bdc418e0d8 Updated proxy
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-07 05:30:13 +00:00
10d80e3452 Added non-auth proxy
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-11-04 19:26:57 +00:00
fa07f27433 Fixed project
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-10-24 11:37:54 +01:00
97c545d3e8 Fixed namespace
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-10-24 11:36:31 +01:00
e26dec2f7a Remove iptable drop
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-10-24 10:56:57 +01:00
22717250e5 Update weave-net with new pod CIDR
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-10-24 10:28:14 +01:00
f4f6745c27 Use tailscale for auth proxy
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-10-23 22:50:49 +01:00
f9caf0a0d1 Fix annotation
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-10-16 14:39:52 +01:00
c5359f2adc Max body size
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-10-16 14:37:33 +01:00
6450a24334 Removed invalid annotations
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-10-16 12:20:35 +01:00
1b8318df3e Update API versions
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-10-16 12:16:41 +01:00
4a9589aaeb Disable prometheus
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-10-16 12:14:46 +01:00
f516ee38ae Switched to nginx
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-10-16 12:07:23 +01:00
36d87d3c12 Update cert-manager
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-10-16 08:54:48 +01:00
86b9327767 Upgrade cluster
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-10-16 07:45:40 +01:00
0accc05333 Upgraded ingress resources
Signed-off-by: Marcus Noble <github@marcusnoble.co.uk>
2021-10-16 07:39:46 +01:00
c540580782 Added photos ingress 2021-09-30 18:41:27 +01:00
524cd8837b Removed workadventure 2021-09-30 18:41:06 +01:00
0b7b010a01 Added wallabag 2021-09-30 09:11:48 +01:00
38ed896839 Finished debugging Outline 2021-09-11 17:31:41 +01:00
c761d83549 Merge branch 'master' of https://git.cluster.fun/AverageMarcus/cluster.fun 2021-09-11 17:26:53 +01:00
f6a1a5cb2a Debug outline 2021-09-11 17:09:30 +01:00
993e515eb2 Merge branch 'master' of https://git.cluster.fun/averagemarcus/cluster.fun 2021-09-11 13:38:25 +01:00
0db4e321ea Added outline 2021-09-11 11:56:42 +01:00
4bc3a9add5 Added outline 2021-09-11 10:51:21 +01:00
912dac6479 Drop tweet-svg back to 2 replicas 2021-09-04 16:45:11 +01:00
3a946fabe1 Bump blog to 4 replicas 2021-09-04 16:22:34 +01:00
444546095f Bump tweetsvg replicas 2021-09-04 16:16:37 +01:00
b80cde1825 Bumped replicas 2021-09-02 11:20:37 +01:00
87e9074a0b Ignore image changes (from Tekton deployments) 2021-09-01 08:33:08 +01:00
79fa75c080 Updated imagepullpolicy 2021-09-01 05:46:30 +01:00
b2192bb6ce Removed old applications 2021-09-01 05:45:14 +01:00
f515ffd081 Removed notea 2021-08-27 06:41:01 +01:00
e9a9250165 Updated terraform 2021-08-27 06:08:59 +01:00
8cabb103f8 Added Notea 2021-08-26 12:49:10 +01:00
025e542a58 Added text-to-dxf 2021-07-27 05:53:55 +01:00
91c2018722 Increased memory limit for opengraph 2021-07-24 11:03:34 +01:00
ee2faf4401 Bump inlets 2021-07-12 06:00:29 +01:00
aa0d9786e2 Use own inlets image 2021-07-12 05:32:02 +01:00
722fd18e64 Added Tank 2021-07-07 11:33:50 +01:00
9d7f02dc0d Update harbor chart 2021-07-07 09:23:21 +01:00
da01b67104 Update harbor 2021-07-05 17:15:22 +01:00
9cdc5f2450 More improvements to traefik log collecting 2021-07-04 10:36:41 +01:00
2b5e2eeff0 First attempt at extracting access log fields to labels 2021-07-04 10:09:02 +01:00
7fa91de04f Added workadventure 2021-06-19 08:41:24 +01:00
fd5572cec8 Updated promtail config 2021-06-18 21:10:05 +01:00
bfaa7c30e5 Correctly fixed reloader 2021-06-18 19:17:27 +01:00
83781ae047 Updated promtail filters 2021-06-18 19:05:28 +01:00
c7be02c83d Fixed reloader 2021-06-18 18:58:30 +01:00
7a1df207a7 Merge branch 'master' of https://git.cluster.fun/AverageMarcus/cluster.fun 2021-06-18 18:50:52 +01:00
ea53700e02 Added filtering to promtail 2021-06-18 18:48:35 +01:00
6ce1fa075a Filter out healthz logs 2021-06-15 05:56:09 +01:00
88f91e20b6 Added probes to blog 2021-06-15 05:21:39 +01:00
4623e16600 Use correct secret for prom 2021-06-14 17:00:34 +01:00
b858dfcdfc Cleaned up loki-chart 2021-06-14 16:48:24 +01:00
9e7d07297b Fix creds namespace 2021-06-14 16:47:45 +01:00
cf8b042c98 Added authenticated ingress for prometheus 2021-06-14 15:47:46 +01:00
bc30ffa753 Change promtail labels to arg 2021-06-14 14:59:01 +01:00
85569644f2 Switched back to monitoring 2021-06-14 14:35:30 +01:00
d96095535e Reverted to using loki-stack 2021-06-14 12:40:16 +01:00
a6823b4871 Removed ndots 2021-06-14 12:16:50 +01:00
ba4858e88e Remove debugging 2021-06-14 11:24:35 +01:00
5df02c1f87 Set grafana dnspolicy to clusterfirst 2021-06-14 11:14:47 +01:00
680d50120d Enable debug logging on grafana 2021-06-14 10:56:55 +01:00
8ba1bb72de Set prom to recreate 2021-06-14 10:34:51 +01:00
6a2e61911d Added ndots 2021-06-14 10:32:17 +01:00
9baf2ead15 Added multi-cluster monitoring 2021-06-14 10:10:19 +01:00
59477f604a Remove loki-chart 2021-06-14 10:07:31 +01:00
1850295742 Moved service to inlets namespace 2021-06-13 19:42:45 +01:00
4e0680eb57 Added local prometheus to grafana 2021-06-13 19:36:34 +01:00
34fa21e5a9 added local prometheus svc 2021-06-13 19:32:28 +01:00
5ad34267ae Added podify 2021-06-09 17:40:59 +01:00
9a00be7aff Added better memory limits 2021-05-21 11:56:07 +01:00
a5c92eacef Added CV 2021-05-21 11:48:23 +01:00
015a0669be Remove pv 2021-05-20 08:43:45 +01:00
8aa2c7e83e Added second git pv 2021-05-20 08:41:21 +01:00
f6a6bfe2cf Upgrade cluster to 1.21.1 2021-05-20 08:38:41 +01:00
1323ff91e6 Update to latest nextcloud chart 2021-05-18 22:20:42 +01:00
b85da32ab5 Bump nextcloud to 21 2021-05-18 22:13:35 +01:00
e95357bf42 Bump nextcloud to 20 2021-05-18 22:07:17 +01:00
fc7d09a293 Drop nextcloud down to 1 replica 2021-05-18 22:01:52 +01:00
f154b89b54 Bump nextcloud chart 2021-05-18 21:51:46 +01:00
25fb87ef60 Bump synapse 2021-05-17 05:48:48 +01:00
45cc1d73a7 Update element 2021-05-17 05:40:10 +01:00
8710723ce0 Merge branch 'master' of https://git.cluster.fun/AverageMarcus/cluster.fun 2021-05-17 05:31:29 +01:00
d3ccc88c20 Harbor replicas and anti-affinity 2021-05-16 13:51:07 +01:00
7d9b9c1b1f Harbor replicas and anti-affinity 2021-05-16 13:00:50 +01:00
2427fe07ba Upgrade kubernetes to 1.21 2021-05-15 15:22:33 +01:00
1f044b5ae3 Removed outline 2021-05-12 14:12:28 +01:00
8b5982af70 Switch to using outline from dockerhub 2021-05-12 14:02:51 +01:00
f389e0b715 Removed notea 2021-05-12 13:30:08 +01:00
e8c380dd94 Added notea 2021-05-12 13:19:53 +01:00
74b19f2746 Added back adguard ingress 2021-05-12 12:11:20 +01:00
225b7d8cff Remove adguard ingress 2021-05-11 19:45:20 +01:00
bff4242b57 Use correct docker image name 2021-05-11 05:46:12 +01:00
4b1d859778 Fix copy mistake 2021-05-11 05:43:42 +01:00
b59327939e Merge branch 'master' of https://git.cluster.fun/AverageMarcus/cluster.fun 2021-05-11 05:40:04 +01:00
d760a69e29 Added opengraph-image-gen 2021-05-11 05:32:41 +01:00
071a73118c Add Adguard ingress 2021-05-10 11:32:58 +00:00
7dcdabd564 Remove buzzers 2021-05-10 09:00:08 +00:00
3cdebb541b Added TLS to inlets ingress 2021-05-09 11:17:29 +01:00
bbb9aba394 Updated inlets 2021-05-09 11:05:00 +01:00
d5e07e29d8 Removed grocy 2021-05-05 14:17:21 +01:00
a9c9813870 Updated grocy 2021-05-05 13:59:00 +01:00
ffa751ad7f Added barcode-buddy 2021-05-05 13:35:50 +01:00
b739031468 Longer startup delay 2021-05-05 12:27:48 +01:00
3bef89a27d Disable startup probe 2021-05-05 12:24:18 +01:00
964a653710 Create namespace 2021-05-05 11:54:59 +01:00
3a2661106b Replace grocy with argo helm chart 2021-05-05 11:43:06 +01:00
eb7a82f74e Added https to grocy 2021-05-05 11:37:53 +01:00
b9ffeaf626 Added grocy 2021-05-05 11:32:48 +01:00
acdc684e62 Dropped replicas back to 1 2021-05-05 09:50:42 +01:00
eddfbf4fb7 Bump inlets replicas 2021-05-05 08:37:09 +01:00
f67d067cf5 Updated inlets image 2021-05-05 08:35:05 +01:00
39ac57b5cb Removed CCTV 2021-05-03 08:15:32 +01:00
caa7a68e6f Fix service 2021-05-01 18:51:32 +01:00
04608e0cec Added auth to dashboard 2021-05-01 18:00:31 +01:00
2aa1628ebc Added reloader 2021-05-01 17:34:07 +01:00
a1c447ff73 Bump version of nextcloud 2021-04-10 15:37:48 +01:00
a81423ab42 Add redis to Nextcloud 2021-04-10 09:23:19 +01:00
ee1a18f169 Switch back to auth proxy 2021-04-07 10:21:25 +01:00
6693266ba5 Remove auth from photos 2021-04-06 18:50:35 +01:00
91f2fb943c Enabled automated sync 2021-04-05 10:31:04 +01:00
6dea278487 Updated analytics dashboard json 2021-04-05 10:29:53 +01:00
785e22050d Migrated remaining apps to Argo 2021-04-05 10:27:21 +01:00
99eb03aa5f Added inlet for photos 2021-04-05 08:16:14 +01:00
1ecc6bf920 Added ArgoCD proxy 2021-04-04 18:51:59 +01:00
0295ca8349 Added autosync 2021-04-03 11:48:29 +01:00
41fab7f1d4 Added harbor chart 2021-04-03 11:39:12 +01:00
5b3d1a0fee Autosync 2021-04-03 11:18:12 +01:00
404cdb0349 Comment out sync policy 2021-04-03 11:13:19 +01:00
a757e95b3d Fix typo 2021-04-03 11:11:06 +01:00
28d06d68d3 Removed namespace 2021-04-03 11:08:05 +01:00
7f23b96ebc Added cert chart 2021-04-03 11:07:10 +01:00
cfef345f93 Added more apps 2021-04-03 10:59:38 +01:00
b360920537 Added more apps 2021-04-03 10:26:31 +01:00
4ac30f8242 Added more apps 2021-04-03 10:20:57 +01:00
f036a70542 Added more apps 2021-04-03 10:15:05 +01:00
d39cb1320b Enable autosync 2021-04-03 10:13:36 +01:00
da143dce0f Added auto-proxy 2021-04-03 10:01:48 +01:00
1f54d2706a Added auto sync 2021-04-03 10:00:30 +01:00
9f91c5ef35 Fix ignore 2021-04-03 09:58:15 +01:00
468fd9f6a6 Ignore secret value changes 2021-04-03 09:48:00 +01:00
5b69611fed Auto create namespace 2021-04-03 09:26:52 +01:00
cc38ef42e0 Update anniversary 2021-04-03 09:23:45 +01:00
1665ef1e67 Begin argo refactor 2021-04-03 09:16:09 +01:00
bbc369afb4 Removed photoprism 2021-04-01 15:27:28 +01:00
422ee13940 Added feed-fetcher 2021-04-01 15:26:40 +01:00
a7e0b2a913 Added ingress 2021-04-01 15:26:32 +01:00
4ebe0bde06 Merge branch 'master' of https://git.cluster.fun/AverageMarcus/cluster.fun 2021-02-27 15:07:49 +00:00
030386cc6a Replaced terraform with kubectl calls 2021-02-27 15:07:41 +00:00
d1e34ddba0 Disabled auto-upgrade 2021-02-27 15:07:28 +00:00
1161564118 Updated nextcloud chart 2021-02-27 15:06:59 +00:00
6acdf29d1a Updated analytics dashboard 2021-02-23 08:35:42 +00:00
77d23f395a Added tweetsvg 2021-02-18 20:31:36 +00:00
9de410bb6e Terraform upgrade 2021-02-10 10:26:47 +00:00
b7c90557df Upgrade to Kubernetes 1.20 2021-02-05 21:42:52 +00:00
2cf5ce0ace Removed Linx 2021-02-04 21:14:47 +00:00
21c16256c7 Bumped harbor version 2021-02-04 21:14:30 +00:00
d6fb80ded4 Update analytics dashboard 2021-02-01 15:08:47 +00:00
0c334e0827 Update matrix 2021-01-30 07:28:42 +00:00
94b62b4c75 Update loki and grafana 2021-01-29 22:34:40 +00:00
06b4f07c21 Added VS Code 2020-12-18 08:55:30 +00:00
cef5f2ddc1 Always pull git-sync image 2020-12-09 14:03:19 +00:00
825447b712 Added git-sync 2020-12-09 12:47:35 +00:00
5c06e4c8d7 Added svg-to-dxf 2020-12-09 12:47:28 +00:00
34a00954db Increased photoprism storage 2020-11-29 16:19:58 +00:00
54af3af2c1 Added photoprism 2020-11-28 23:39:48 +00:00
7405481b72 Remove old pvc 2020-11-28 23:38:50 +00:00
fa51de4fb6 Updates 2020-11-07 13:29:37 +00:00
d29c9ec82c Added new RSS app 2020-10-19 06:05:48 +01:00
5f8800f311 Reverted w-2-r 2020-10-15 14:40:59 +01:00
eef0a6c22d Bump inlets version 2020-10-14 11:05:40 +01:00
d9d71a5dc7 Removed JQ 2020-10-14 10:02:52 +01:00
ff99e577cd Added JQ 2020-10-14 09:40:45 +01:00
f26d02ca7f Added base64 project 2020-10-13 17:36:48 +01:00
94e18c12ea Use a single auth proxy 2020-10-10 16:46:27 +01:00
84a9c19d93 Added anniversary 2020-10-03 13:20:33 +01:00
8f85a65cbe Added VPN check app 2020-09-22 22:35:40 +01:00
22ae249a1f Added download tunnels 2020-09-20 21:18:17 +01:00
50f86cc39f Reduce logging 2020-09-18 22:29:12 +01:00
295bb89828 Switch Loki to storing in bucket 2020-09-18 20:12:11 +01:00
3ab7377253 Added TIL 2020-09-14 18:49:45 +01:00
7d2c192b95 Improved multi-arch builds 2020-09-11 21:24:58 +01:00
a7a29c0201 Added multi-arch support 2020-09-11 20:56:46 +01:00
c40c5b5a33 Fixed probe 2020-09-09 09:42:50 +01:00
588348ac31 Added liveness probe to stringer 2020-09-09 09:11:54 +01:00
05e04afeff Added Go playground 2020-09-01 15:30:27 +01:00
cf2a889e4d Removed SCP archives 2020-08-25 15:25:01 +01:00
b838af199d Added scp-archives 2020-08-24 05:58:09 +01:00
9f65bf256a Added bucket for storing SCP archives 2020-08-23 11:16:02 +01:00
f5a7bb5abb Bump nextcloud version 2020-08-13 21:03:28 +01:00
5567ba142a Bumped versions 2020-08-13 20:47:20 +01:00
43aa708e09 Updated gitea 2020-07-15 10:18:10 +01:00
52339ccbed Update nodered 2020-07-15 10:16:46 +01:00
b08f0892be Bump version of riot 2020-07-10 20:02:58 +01:00
b60c244b8b Update 'manifests/matrix_chart.yaml' 2020-07-10 16:19:41 +00:00
fd26f7b3de Updated paradoxfox 2020-07-08 11:50:11 +01:00
e00db9e633 Added Paradoxfox.space 2020-07-04 19:42:10 +01:00
b35b34bb7a Added outline 2020-06-27 17:47:35 +01:00
85bd64e87e Remove bookstack 2020-06-27 17:47:17 +01:00
a80346f8e7 Added bookstack 2020-06-21 15:11:28 +01:00
53d8bd48bf Added bucket for octoprint 2020-06-20 14:54:47 +01:00
9c8f29e346 Added printer auth endpoint 2020-06-16 20:38:06 +01:00
ad3fab4cfd Removed pyload 2020-06-16 20:34:17 +01:00
cf0015d1e2 Added service for rpc 2020-06-16 20:34:09 +01:00
6ce5744672 Added missing resource types to kube-janitor 2020-06-10 12:24:40 +01:00
3d47bc34da Added home assistant tunnel 2020-06-08 18:35:15 +01:00
144 changed files with 5991 additions and 5493 deletions

View File

@@ -1,13 +1,8 @@
 apiVersion: v1
-kind: Namespace
-metadata:
-  name: dashboard
----
-apiVersion: v1
 kind: Secret
 metadata:
   name: docker-config
-  namespace: dashboard
+  namespace: anniversary
   annotations:
     kube-1password: i6ngbk5zf4k52xgwdwnfup5bby
     kube-1password/vault: Kubernetes
@@ -19,8 +14,8 @@ data:
 apiVersion: v1
 kind: Service
 metadata:
-  name: dashboard
-  namespace: dashboard
+  name: anniversary
+  namespace: anniversary
 spec:
   type: ClusterIP
   ports:
@@ -28,58 +23,59 @@ spec:
     targetPort: web
     name: web
   selector:
-    app: dashboard
+    app: anniversary
 ---
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: dashboard
-  namespace: dashboard
+  name: anniversary
+  namespace: anniversary
 spec:
   replicas: 1
   selector:
     matchLabels:
-      app: dashboard
+      app: anniversary
   template:
     metadata:
       labels:
-        app: dashboard
+        app: anniversary
     spec:
       imagePullSecrets:
       - name: docker-config
       containers:
       - name: web
-        image: docker.cluster.fun/private/dashboard:latest
+        image: docker.cluster.fun/private/11-year-anniversary:latest
         imagePullPolicy: Always
         ports:
         - containerPort: 80
           name: web
         resources:
           limits:
-            memory: 50Mi
+            memory: 5Mi
           requests:
-            memory: 50Mi
+            memory: 5Mi
 ---
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
-  name: dashboard
-  namespace: dashboard
+  name: anniversary
+  namespace: anniversary
   annotations:
     cert-manager.io/cluster-issuer: letsencrypt
-    traefik.ingress.kubernetes.io/frontend-entry-points: http,https
-    traefik.ingress.kubernetes.io/redirect-entry-point: https
-    traefik.ingress.kubernetes.io/redirect-permanent: "true"
+    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
 spec:
   tls:
   - hosts:
-    - dash.cluster.fun
-    secretName: dashboard-ingress
+    - 11-year-anniversary.marcusnoble.co.uk
+    secretName: anniversary-ingress
   rules:
-  - host: dash.cluster.fun
+  - host: 11-year-anniversary.marcusnoble.co.uk
     http:
       paths:
       - path: /
+        pathType: ImplementationSpecific
         backend:
-          serviceName: dashboard
-          servicePort: 80
+          service:
+            name: anniversary
+            port:
+              number: 80

View File

@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: anniversary
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: anniversary
updatePolicy:
updateMode: "Auto"

View File

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: anniversary
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: anniversary
name: cluster-fun (scaleway)
source:
path: manifests/11-year-anniversary
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

View File

@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: auth-proxy
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: auth-proxy
name: cluster-fun (scaleway)
source:
path: manifests/auth-proxy
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data

View File

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: base64
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: base64
name: cluster-fun (scaleway)
source:
path: manifests/base64
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

View File

@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: blackhole
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: kube-system
name: cluster-fun (scaleway)
source:
path: manifests/blackhole
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data

manifests/_apps/blog.yaml (new file, 28 lines)

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: blog
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: blog
name: cluster-fun (scaleway)
source:
path: manifests/blog
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

View File

@@ -0,0 +1,51 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: cert-manager
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: cert-manager
name: cluster-fun (scaleway)
source:
path: manifests/certmanager_chart
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: cert-manager-cert-manager
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: cert-manager
name: cluster-fun (scaleway)
source:
repoURL: 'https://charts.jetstack.io'
targetRevision: 1.6.1
chart: cert-manager
helm:
version: v3
values: |-
installCRDs: "true"
resources:
requests:
memory: 32Mi
limits:
memory: 64Mi
syncPolicy:
automated: {}

View File

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: cors-proxy
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: cors-proxy
name: cluster-fun (scaleway)
source:
path: manifests/cors-proxy
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

manifests/_apps/cv.yaml (new file, 28 lines)

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: cv
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: cv
name: cluster-fun (scaleway)
source:
path: manifests/cv
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

View File

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: dashboard
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: dashboard
name: cluster-fun (scaleway)
source:
path: manifests/dashboard
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

View File

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: feed-fetcher
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: feed-fetcher
name: cluster-fun (scaleway)
source:
path: manifests/feed-fetcher
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

View File

@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: git-sync
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: git-sync
name: cluster-fun (scaleway)
source:
path: manifests/git-sync
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data

View File

@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: gitea
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: gitea
name: cluster-fun (scaleway)
source:
path: manifests/gitea
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data

View File

@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: goplayground
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: goplayground
name: cluster-fun (scaleway)
source:
path: manifests/goplayground
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data

View File

@@ -0,0 +1,22 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: harbor
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: harbor
name: cluster-fun (scaleway)
source:
path: manifests/harbor_chart
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data

View File

@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: kube-janitor
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: kube-janitor
name: cluster-fun (scaleway)
source:
path: manifests/kube-janitor
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data

View File

@@ -0,0 +1,177 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: matrix
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: chat
name: cluster-fun (scaleway)
source:
path: manifests/matrix_chart
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
syncOptions:
- CreateNamespace=true
automated: {}
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: chat-matrix
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: chat
name: cluster-fun (scaleway)
source:
repoURL: 'https://dacruz21.github.io/helm-charts'
targetRevision: 2.7.0
chart: matrix
helm:
version: v3
values: |-
matrix:
serverName: "matrix.cluster.fun"
telemetry: false
hostname: "matrix.cluster.fun"
presence: "true"
blockNonAdminInvites: false
enableSearch: "true"
adminEmail: "matrix@marcusnoble.co.uk"
uploads:
maxSize: 500M
maxPixels: 64M
federation:
enabled: false
allowPublicRooms: false
blacklist:
- '127.0.0.0/8'
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'
registration:
enabled: false
allowGuests: false
urlPreviews:
enabled: true
rules:
maxSize: 10M
ip:
blacklist:
- '127.0.0.0/8'
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'
volumes:
media:
capacity: 4Gi
signingKey:
capacity: 1Gi
postgresql:
enabled: true
persistence:
size: 4Gi
synapse:
image:
repository: "matrixdotorg/synapse"
tag: v1.43.0
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 80
replicaCount: 1
resources: {}
riot:
enabled: true
integrations:
enabled: true
ui: "https://scalar.vector.im/"
api: "https://scalar.vector.im/api"
widgets:
- "https://scalar.vector.im/_matrix/integrations/v1"
- "https://scalar.vector.im/api"
- "https://scalar-staging.vector.im/_matrix/integrations/v1"
- "https://scalar-staging.vector.im/api"
- "https://scalar-staging.riot.im/scalar/api"
# Experimental features in riot-web, see https://github.com/vector-im/riot-web/blob/develop/docs/labs.md
labs:
- feature_pinning
- feature_custom_status
- feature_state_counters
- feature_many_integration_managers
- feature_mjolnir
- feature_dm_verification
- feature_bridge_state
- feature_presence_in_room_list
- feature_custom_themes
- feature_new_spinner
# Servers to show in the Explore menu (the current server is always shown)
roomDirectoryServers: []
# Prefix before permalinks generated when users share links to rooms, users, or messages. If running an unfederated Synapse, set the below to the URL of your Riot instance.
permalinkPrefix: "https://chat.cluster.fun"
image:
repository: "vectorim/element-web"
tag: v1.9.8
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 80
replicaCount: 2
resources: {}
# Settings for Coturn TURN relay, used for routing voice calls
coturn:
enabled: false
mail:
enabled: false
relay:
enabled: false
bridges:
irc:
enabled: false
whatsapp:
enabled: false
discord:
enabled: false
networkPolicies:
enabled: false
ingress:
enabled: false
syncPolicy:
automated: {}
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data

View File

@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: monitoring
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: monitoring
name: cluster-fun (scaleway)
source:
path: manifests/monitoring
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data

View File

@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: nextcloud
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: nextcloud
name: cluster-fun (scaleway)
source:
path: manifests/nextcloud_chart
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
syncOptions:
- CreateNamespace=true
automated: {}
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data

View File

@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: nginx-lb
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: kube-system
name: cluster-fun (scaleway)
source:
path: manifests/nginx-lb
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data

View File

@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: nodered
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: node-red
name: cluster-fun (scaleway)
source:
path: manifests/nodered
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data

View File

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: opengraph
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: opengraph
name: cluster-fun (scaleway)
source:
path: manifests/opengraph
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

View File

@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: outline
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: outline
name: cluster-fun (scaleway)
source:
path: manifests/outline
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data

View File

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: paradoxfox
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: paradoxfox
name: cluster-fun (scaleway)
source:
path: manifests/paradoxfox
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

manifests/_apps/qr.yaml (new file, 28 lines)

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: qr
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: qr
name: cluster-fun (scaleway)
source:
path: manifests/qr
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

View File

@@ -0,0 +1,22 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: reloader
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: kube-system
name: cluster-fun (scaleway)
source:
repoURL: 'https://stakater.github.io/stakater-charts'
targetRevision: v0.0.89
chart: reloader
syncPolicy:
automated: {}
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data

manifests/_apps/rss.yaml (new file, 28 lines)

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: rss
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: rss
name: cluster-fun (scaleway)
source:
path: manifests/rss
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

View File

@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: skooner
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: skooner
name: cluster-fun (scaleway)
source:
path: manifests/skooner
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data

View File

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: svg-to-dxf
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: svg-to-dxf
name: cluster-fun (scaleway)
source:
path: manifests/svg-to-dxf
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

View File

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: talks
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: talks
name: cluster-fun (scaleway)
source:
path: manifests/talks
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

manifests/_apps/tank.yaml (new file, 28 lines)

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: tank
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: tank
name: cluster-fun (scaleway)
source:
path: manifests/tank
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

View File

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: text-to-dxf
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: text-to-dxf
name: cluster-fun (scaleway)
source:
path: manifests/text-to-dxf
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

manifests/_apps/til.yaml (new file, 28 lines)

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: til
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: til
name: cluster-fun (scaleway)
source:
path: manifests/til
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

View File

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: tweetsvg
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: tweetsvg
name: cluster-fun (scaleway)
source:
path: manifests/tweetsvg
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

View File

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: twitter-profile-pic
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: twitter-profile-pic
name: cluster-fun (scaleway)
source:
path: manifests/twitter-profile-pic
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
- group: apps
kind: Deployment
jqPathExpressions:
- .spec.template.spec.containers[]?.image

manifests/_apps/vpa.yaml (new file, 27 lines)

@@ -0,0 +1,27 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: vpa
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: kube-system
name: cluster-fun (scaleway)
source:
repoURL: 'https://charts.fairwinds.com/stable'
targetRevision: 0.5.0
chart: vpa
helm:
version: v3
values: |-
recommender:
extraArgs:
prometheus-address: "http://prometheus-server.monitoring.svc:80"
storage: prometheus
admissionController:
enabled: true
syncPolicy:
automated: {}

View File

@@ -0,0 +1,91 @@
# apiVersion: argoproj.io/v1alpha1
# kind: Application
# metadata:
# name: wallabag
# namespace: argocd
# finalizers:
# - resources-finalizer.argocd.argoproj.io
# spec:
# project: cluster.fun
# destination:
# namespace: wallabag
# name: cluster-fun (scaleway)
# source:
# path: manifests/wallabag
# repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
# targetRevision: HEAD
# syncPolicy:
# syncOptions:
# - CreateNamespace=true
# automated: {}
# ignoreDifferences:
# - kind: Secret
# jsonPointers:
# - /data
# ---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: wallabag-chart
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: wallabag
name: cluster-fun (scaleway)
source:
repoURL: 'https://k8s-at-home.com/charts/'
targetRevision: 4.1.1
chart: wallabag
helm:
version: v3
values: |-
env:
TZ: UTC
MYSQL_ROOT_PASSWORD: wallabag-rootpass
SYMFONY__ENV__DOMAIN_NAME: https://wallabag.cluster.fun
SYMFONY__ENV__FOSUSER_REGISTRATION: false
SYMFONY__ENV__DATABASE_DRIVER: pdo_mysql
SYMFONY__ENV__DATABASE_DRIVER_CLASS: ~
SYMFONY__ENV__DATABASE_HOST: wallabag-chart-mariadb.wallabag.svc
SYMFONY__ENV__DATABASE_PORT: 3306
SYMFONY__ENV__DATABASE_NAME: wallabag
SYMFONY__ENV__DATABASE_USER: wallabag
SYMFONY__ENV__DATABASE_PASSWORD: wallabag-pass
SYMFONY__ENV__DATABASE_PATH: ~
SYMFONY__ENV__DATABASE_TABLE_PREFIX: wallabag_
SYMFONY__ENV__DATABASE_SOCKET: ~
SYMFONY__ENV__DATABASE_CHARSET: utf8mb4
SYMFONY__ENV__LOCALE: en
ingress:
main:
enabled: true
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
tls:
- hosts:
- wallabag.cluster.fun
secretName: wallabag-ingress
hosts:
- host: wallabag.cluster.fun
paths:
- path: /
pathType: ImplementationSpecific
mariadb:
enabled: true
architecture: standalone
auth:
database: wallabag
username: wallabag
password: wallabag-pass
rootPassword: wallabag-rootpass
primary:
persistence:
enabled: true
redis:
enabled: false
syncPolicy:
automated: {}

View File

@@ -0,0 +1,18 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: weave-net
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: cluster.fun
destination:
namespace: kube-system
name: cluster-fun (scaleway)
source:
path: manifests/weave-net
repoURL: "https://git.cluster.fun/AverageMarcus/cluster.fun.git"
targetRevision: HEAD
syncPolicy:
automated: {}

View File

@@ -0,0 +1,132 @@
apiVersion: v1
kind: Namespace
metadata:
name: auth-proxy
---
apiVersion: v1
kind: Secret
metadata:
name: auth-proxy
namespace: auth-proxy
annotations:
kube-1password: mr6spkkx7n3memkbute6ojaarm
kube-1password/vault: Kubernetes
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
name: tailscale-auth
namespace: auth-proxy
annotations:
kube-1password: 2cqycmsgv5r7vcyvjpblcl2l4y
kube-1password/vault: Kubernetes
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-proxy
namespace: auth-proxy
labels:
app: auth-proxy
spec:
replicas: 1
selector:
matchLabels:
app: auth-proxy
template:
metadata:
labels:
app: auth-proxy
spec:
dnsPolicy: None
dnsConfig:
nameservers:
- 100.100.100.100
containers:
- name: oauth-proxy
image: quay.io/oauth2-proxy/oauth2-proxy:v7.2.0
args:
- --cookie-secure=false
- --provider=oidc
- --provider-display-name=Auth0
- --upstream=http://talos.averagemarcus.github.beta.tailscale.net
- --http-address=0.0.0.0:8080
- --email-domain=*
- --pass-basic-auth=false
- --pass-access-token=false
- --oidc-issuer-url=https://marcusnoble.eu.auth0.com/
- --cookie-secret=KDGD6rrK6cBmryyZ4wcJ9xAUNW9AQNFT
- --cookie-expire=336h0m0s
env:
- name: HOST_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: OAUTH2_PROXY_CLIENT_ID
valueFrom:
secretKeyRef:
key: username
name: auth-proxy
- name: OAUTH2_PROXY_CLIENT_SECRET
valueFrom:
secretKeyRef:
key: password
name: auth-proxy
ports:
- containerPort: 8080
protocol: TCP
resources:
limits:
memory: 50Mi
requests:
memory: 50Mi
- name: tailscale
image: ghcr.io/tailscale/tailscale:latest
imagePullPolicy: Always
env:
- name: AUTH_KEY
valueFrom:
secretKeyRef:
name: tailscale-auth
key: password
securityContext:
capabilities:
add:
- NET_ADMIN
command:
- sh
- -c
- |
export PATH=$PATH:/tailscale/bin
if [[ ! -d /dev/net ]]; then mkdir -p /dev/net; fi
if [[ ! -c /dev/net/tun ]]; then mknod /dev/net/tun c 10 200; fi
echo "Starting tailscaled"
tailscaled --socket=/tmp/tailscaled.sock &
PID=$!
echo "Running tailscale up"
tailscale --socket=/tmp/tailscaled.sock up \
--accept-dns=true \
--authkey=${AUTH_KEY} \
--hostname=auth-proxy-oauth2
echo "Re-enabling incoming traffic from the cluster"
wait ${PID}
---
apiVersion: v1
kind: Service
metadata:
name: auth-proxy
namespace: auth-proxy
labels:
app: auth-proxy
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: auth-proxy
type: ClusterIP

View File

@@ -0,0 +1,124 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: auth-proxy
namespace: auth-proxy
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- downloads.cluster.fun
- argo.cluster.fun
- code.cluster.fun
- jackett.cluster.fun
- printer.cluster.fun
- radarr.cluster.fun
- readarr.cluster.fun
- sonarr.cluster.fun
- transmission.cluster.fun
- tekton.cluster.fun
secretName: auth-proxy-ingress
rules:
- host: downloads.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: auth-proxy
port:
name: http
- host: argo.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: auth-proxy
port:
name: http
- host: code.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: auth-proxy
port:
name: http
- host: jackett.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: auth-proxy
port:
name: http
- host: printer.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: auth-proxy
port:
name: http
- host: radarr.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: auth-proxy
port:
name: http
- host: readarr.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: auth-proxy
port:
name: http
- host: sonarr.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: auth-proxy
port:
name: http
- host: transmission.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: auth-proxy
port:
name: http
- host: tekton.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: auth-proxy
port:
name: http

View File

@@ -0,0 +1,278 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: host-mappings
namespace: auth-proxy
labels:
app: proxy
data:
mapping.json: |
{
"tekton-el.auth-proxy.svc": "tekton-el.cluster.local",
"home.auth-proxy.svc": "home.cluster.local",
"home.cluster.fun": "home.cluster.local",
"prometheus.auth-proxy.svc": "prometheus.cluster.local",
"loki.auth-proxy.svc": "loki.cluster.local",
"loki.auth-proxy.svc:80": "loki.cluster.local"
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: internal-proxy
namespace: auth-proxy
labels:
app: internal-proxy
annotations:
configmap.reloader.stakater.com/reload: "host-mappings"
spec:
replicas: 1
selector:
matchLabels:
app: internal-proxy
template:
metadata:
labels:
app: internal-proxy
spec:
dnsPolicy: None
dnsConfig:
nameservers:
- 100.100.100.100
containers:
- name: proxy
image: docker.cluster.fun/averagemarcus/proxy:latest
imagePullPolicy: Always
env:
- name: PROXY_DESTINATION
value: talos.averagemarcus.github.beta.tailscale.net
- name: PORT
value: "8080"
ports:
- containerPort: 8080
protocol: TCP
volumeMounts:
- name: host-mappings
mountPath: /config/
- name: tailscale
image: ghcr.io/tailscale/tailscale:latest
imagePullPolicy: Always
env:
- name: AUTH_KEY
valueFrom:
secretKeyRef:
name: tailscale-auth
key: password
securityContext:
capabilities:
add:
- NET_ADMIN
command:
- sh
- -c
- |
export PATH=$PATH:/tailscale/bin
if [[ ! -d /dev/net ]]; then mkdir -p /dev/net; fi
if [[ ! -c /dev/net/tun ]]; then mknod /dev/net/tun c 10 200; fi
echo "Starting tailscaled"
tailscaled --socket=/tmp/tailscaled.sock &
PID=$!
echo "Running tailscale up"
tailscale --socket=/tmp/tailscaled.sock up \
--accept-dns=true \
--authkey=${AUTH_KEY} \
--hostname=auth-proxy-internal-proxy
wait ${PID}
volumes:
- name: host-mappings
configMap:
name: host-mappings
---
apiVersion: v1
kind: Service
metadata:
name: tekton-el
namespace: auth-proxy
labels:
app: internal-proxy
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: internal-proxy
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
name: loki
namespace: auth-proxy
labels:
app: internal-proxy
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: internal-proxy
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
name: prometheus
namespace: auth-proxy
labels:
app: internal-proxy
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: internal-proxy
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: non-auth-proxy
namespace: auth-proxy
labels:
app: non-auth-proxy
spec:
replicas: 1
selector:
matchLabels:
app: non-auth-proxy
template:
metadata:
labels:
app: non-auth-proxy
spec:
dnsPolicy: None
dnsConfig:
nameservers:
- 100.100.100.100
containers:
- name: oauth-proxy
image: quay.io/oauth2-proxy/oauth2-proxy:v7.2.0
args:
- --cookie-secure=false
- --provider=oidc
- --provider-display-name=Auth0
- --upstream=http://talos.averagemarcus.github.beta.tailscale.net
- --http-address=0.0.0.0:8080
- --email-domain=*
- --pass-basic-auth=false
- --pass-access-token=false
- --oidc-issuer-url=https://marcusnoble.eu.auth0.com/
- --cookie-secret=KDGD6rrK6cBmryyZ4wcJ9xAUNW9AQNFT
- --cookie-expire=336h0m0s
- --trusted-ip=0.0.0.0/0
env:
- name: HOST_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: OAUTH2_PROXY_CLIENT_ID
valueFrom:
secretKeyRef:
key: username
name: auth-proxy
- name: OAUTH2_PROXY_CLIENT_SECRET
valueFrom:
secretKeyRef:
key: password
name: auth-proxy
ports:
- containerPort: 8080
protocol: TCP
resources:
limits:
memory: 50Mi
requests:
memory: 50Mi
- name: tailscale
image: ghcr.io/tailscale/tailscale:latest
imagePullPolicy: Always
env:
- name: AUTH_KEY
valueFrom:
secretKeyRef:
name: tailscale-auth
key: password
securityContext:
capabilities:
add:
- NET_ADMIN
command:
- sh
- -c
- |
export PATH=$PATH:/tailscale/bin
if [[ ! -d /dev/net ]]; then mkdir -p /dev/net; fi
if [[ ! -c /dev/net/tun ]]; then mknod /dev/net/tun c 10 200; fi
echo "Starting tailscaled"
tailscaled --socket=/tmp/tailscaled.sock &
PID=$!
echo "Running tailscale up"
tailscale --socket=/tmp/tailscaled.sock up \
--accept-dns=true \
--authkey=${AUTH_KEY} \
--hostname=non-auth-proxy
echo "Re-enabling incoming traffic from the cluster"
wait ${PID}
---
apiVersion: v1
kind: Service
metadata:
name: non-auth-proxy
namespace: auth-proxy
labels:
app: non-auth-proxy
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: non-auth-proxy
type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: non-auth-proxy
namespace: auth-proxy
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- home.cluster.fun
secretName: non-auth-proxy-ingress
rules:
- host: home.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: non-auth-proxy
port:
name: http

View File

@@ -0,0 +1,69 @@
apiVersion: v1
kind: Service
metadata:
name: base64
namespace: base64
spec:
type: ClusterIP
ports:
- port: 80
targetPort: web
name: web
selector:
app: base64
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: base64
namespace: base64
spec:
replicas: 1
selector:
matchLabels:
app: base64
template:
metadata:
labels:
app: base64
spec:
imagePullSecrets:
- name: docker-config
containers:
- name: web
image: docker.cluster.fun/averagemarcus/base64:latest
imagePullPolicy: Always
ports:
- containerPort: 80
name: web
resources:
limits:
memory: 5Mi
requests:
memory: 5Mi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: base64
namespace: base64
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- base64.cluster.fun
secretName: base64-ingress
rules:
- host: base64.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: base64
port:
number: 80

manifests/base64/vpa.yaml

@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: base64
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: base64
updatePolicy:
updateMode: "Auto"


@@ -37,12 +37,11 @@ spec:
 resources:
 limits:
 memory: 10Mi
 requests:
 memory: 10Mi
 ---
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
 name: black-hole
@@ -52,6 +51,9 @@ spec:
 - http:
 paths:
 - path: /
+pathType: ImplementationSpecific
 backend:
-serviceName: black-hole
-servicePort: 80
+service:
+name: black-hole
+port:
+number: 80


@@ -1,9 +1,4 @@
 apiVersion: v1
-kind: Namespace
-metadata:
-name: blog
----
-apiVersion: v1
 kind: Service
 metadata:
 name: blog
@@ -23,7 +18,7 @@ metadata:
 name: blog
 namespace: blog
 spec:
-replicas: 2
+replicas: 4
 selector:
 matchLabels:
 app: blog
@@ -44,18 +39,27 @@ spec:
 memory: 200Mi
 requests:
 memory: 200Mi
+livenessProbe:
+httpGet:
+path: /healthz
+port: web
+initialDelaySeconds: 10
+readinessProbe:
+httpGet:
+path: /healthz
+port: web
+initialDelaySeconds: 10
 ---
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
 name: blog
 namespace: blog
 annotations:
 cert-manager.io/cluster-issuer: letsencrypt
-traefik.ingress.kubernetes.io/frontend-entry-points: http,https
-traefik.ingress.kubernetes.io/redirect-entry-point: https
-traefik.ingress.kubernetes.io/redirect-permanent: "true"
+nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
 spec:
+ingressClassName: nginx
 tls:
 - hosts:
 - marcusnoble.co.uk
@@ -65,22 +69,24 @@ spec:
 http:
 paths:
 - path: /
+pathType: ImplementationSpecific
 backend:
-serviceName: blog
-servicePort: 80
+service:
+name: blog
+port:
+number: 80
 ---
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
 name: blog-www
 namespace: blog
 annotations:
 cert-manager.io/cluster-issuer: letsencrypt
-traefik.ingress.kubernetes.io/frontend-entry-points: http,https
-traefik.ingress.kubernetes.io/redirect-entry-point: https
-traefik.ingress.kubernetes.io/redirect-permanent: "true"
+nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
 spec:
+ingressClassName: nginx
 tls:
 - hosts:
 - www.marcusnoble.co.uk
@@ -90,22 +96,24 @@ spec:
 http:
 paths:
 - path: /
+pathType: ImplementationSpecific
 backend:
-serviceName: blog
-servicePort: 80
+service:
+name: blog
+port:
+number: 80
 ---
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
 name: blog-blog
 namespace: blog
 annotations:
 cert-manager.io/cluster-issuer: letsencrypt
-traefik.ingress.kubernetes.io/frontend-entry-points: http,https
-traefik.ingress.kubernetes.io/redirect-entry-point: https
-traefik.ingress.kubernetes.io/redirect-permanent: "true"
+nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
 spec:
+ingressClassName: nginx
 tls:
 - hosts:
 - blog.marcusnoble.co.uk
@@ -115,7 +123,10 @@ spec:
 http:
 paths:
 - path: /
+pathType: ImplementationSpecific
 backend:
-serviceName: blog
-servicePort: 80
+service:
+name: blog
+port:
+number: 80

manifests/blog/vpa.yaml

@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: blog
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: blog
updatePolicy:
updateMode: "Auto"


@@ -1,70 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: buzzers
---
apiVersion: v1
kind: Service
metadata:
name: buzzers
namespace: buzzers
spec:
type: ClusterIP
ports:
- port: 80
targetPort: web
name: web
selector:
app: buzzers
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: buzzers
namespace: buzzers
spec:
replicas: 1
selector:
matchLabels:
app: buzzers
template:
metadata:
labels:
app: buzzers
spec:
containers:
- name: web
image: docker.cluster.fun/averagemarcus/buzzers:latest
imagePullPolicy: Always
ports:
- containerPort: 80
name: web
resources:
limits:
memory: 283Mi
requests:
memory: 283Mi
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: buzzers
namespace: buzzers
annotations:
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
tls:
- hosts:
- buzzers.cluster.fun
secretName: buzzers-ingress
rules:
- host: buzzers.cluster.fun
http:
paths:
- path: /
backend:
serviceName: buzzers
servicePort: 80


@@ -1,114 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: cctv
---
apiVersion: v1
kind: Secret
metadata:
name: cctv-auth
namespace: cctv
annotations:
kube-1password: mr6spkkx7n3memkbute6ojaarm
kube-1password/vault: Kubernetes
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cctv-auth
namespace: cctv
labels:
app: cctv-auth
spec:
replicas: 1
selector:
matchLabels:
app: cctv-auth
template:
metadata:
labels:
app: cctv-auth
spec:
containers:
- args:
- --cookie-secure=false
- --provider=oidc
- --provider-display-name=Auth0
- --upstream=http://inlets.inlets.svc.cluster.local
- --http-address=$(HOST_IP):8080
- --redirect-url=https://cctv.cluster.fun/oauth2/callback
- --email-domain=*
- --pass-basic-auth=false
- --pass-access-token=false
- --oidc-issuer-url=https://marcusnoble.eu.auth0.com/
- --cookie-secret=KDGD6rrK6cBmryyZ4wcJ9xAUNW9AQN
env:
- name: HOST_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: OAUTH2_PROXY_CLIENT_ID
valueFrom:
secretKeyRef:
key: username
name: cctv-auth
- name: OAUTH2_PROXY_CLIENT_SECRET
valueFrom:
secretKeyRef:
key: password
name: cctv-auth
image: quay.io/oauth2-proxy/oauth2-proxy:v5.1.1
name: oauth-proxy
ports:
- containerPort: 8080
protocol: TCP
resources:
limits:
memory: 50Mi
requests:
memory: 50Mi
---
apiVersion: v1
kind: Service
metadata:
name: cctv-auth
namespace: cctv
labels:
app: cctv-auth
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: cctv-auth
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cctv-auth
namespace: cctv
labels:
app: cctv-auth
annotations:
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
tls:
- hosts:
- cctv.cluster.fun
secretName: cctv-ingress
rules:
- host: cctv.cluster.fun
http:
paths:
- path: /
backend:
serviceName: cctv-auth
servicePort: 80


@@ -1,47 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: cert-manager
labels:
certmanager.k8s.io/disable-validation: "true"
---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
name: cert-manager
namespace: cert-manager
spec:
chart:
repository: https://charts.jetstack.io
name: cert-manager
version: v0.15.0
maxHistory: 5
values:
installCRDs: "true"
resources:
requests:
memory: 32Mi
limits:
memory: 64Mi
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: letsencrypt@marcusnoble.co.uk
privateKeySecretRef:
name: letsencrypt
solvers:
- selector: {}
http01:
ingress:
class: traefik


@@ -0,0 +1,23 @@
apiVersion: v1
kind: Namespace
metadata:
name: cert-manager
labels:
certmanager.k8s.io/disable-validation: "true"
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: letsencrypt@marcusnoble.co.uk
privateKeySecretRef:
name: letsencrypt
solvers:
- http01:
ingress:
class: nginx


@@ -1,9 +1,4 @@
 apiVersion: v1
-kind: Namespace
-metadata:
-name: cors-proxy
----
-apiVersion: v1
 kind: Service
 metadata:
 name: cors-proxy
@@ -40,17 +35,16 @@ spec:
 - containerPort: 8000
 name: web
 ---
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
 name: cors-proxy
 namespace: cors-proxy
 annotations:
 cert-manager.io/cluster-issuer: letsencrypt
-traefik.ingress.kubernetes.io/frontend-entry-points: http,https
-traefik.ingress.kubernetes.io/redirect-entry-point: https
-traefik.ingress.kubernetes.io/redirect-permanent: "true"
+nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
 spec:
+ingressClassName: nginx
 tls:
 - hosts:
 - cors-proxy.cluster.fun
@@ -60,22 +54,24 @@ spec:
 http:
 paths:
 - path: /
+pathType: ImplementationSpecific
 backend:
-serviceName: cors-proxy
-servicePort: 80
+service:
+name: cors-proxy
+port:
+number: 80
 ---
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
 name: cors-proxy-mn
 namespace: cors-proxy
 annotations:
 cert-manager.io/cluster-issuer: letsencrypt
-traefik.ingress.kubernetes.io/frontend-entry-points: http,https
-traefik.ingress.kubernetes.io/redirect-entry-point: https
-traefik.ingress.kubernetes.io/redirect-permanent: "true"
+nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
 spec:
+ingressClassName: nginx
 tls:
 - hosts:
 - cors-proxy.marcusnoble.co.uk
@@ -85,6 +81,9 @@ spec:
 http:
 paths:
 - path: /
+pathType: ImplementationSpecific
 backend:
-serviceName: cors-proxy
-servicePort: 80
+service:
+name: cors-proxy
+port:
+number: 80


@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: cors-proxy
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: cors-proxy
updatePolicy:
updateMode: "Auto"

manifests/cv/cv.yaml

@@ -0,0 +1,82 @@
apiVersion: v1
kind: Secret
metadata:
name: docker-config
namespace: cv
annotations:
kube-1password: i6ngbk5zf4k52xgwdwnfup5bby
kube-1password/vault: Kubernetes
kube-1password/secret-text-key: .dockerconfigjson
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: e30=
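# e30= is base64 for "{}", a placeholder presumably replaced by the kube-1password operator referenced in the annotations above.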
---
apiVersion: v1
kind: Service
metadata:
name: cv
namespace: cv
spec:
type: ClusterIP
ports:
- port: 80
targetPort: web
name: web
selector:
app: cv
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cv
namespace: cv
spec:
replicas: 1
selector:
matchLabels:
app: cv
template:
metadata:
labels:
app: cv
spec:
imagePullSecrets:
- name: docker-config
containers:
- name: web
image: docker.cluster.fun/private/cv:latest
imagePullPolicy: Always
ports:
- containerPort: 80
name: web
resources:
limits:
memory: 10Mi
requests:
memory: 10Mi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: cv
namespace: cv
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- cv.marcusnoble.co.uk
secretName: cv-ingress
rules:
- host: cv.marcusnoble.co.uk
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: cv
port:
number: 80

manifests/cv/vpa.yaml

@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: cv
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: cv
updatePolicy:
updateMode: "Auto"


@@ -0,0 +1,131 @@
apiVersion: v1
kind: Secret
metadata:
name: docker-config
namespace: dashboard
annotations:
kube-1password: i6ngbk5zf4k52xgwdwnfup5bby
kube-1password/vault: Kubernetes
kube-1password/secret-text-key: .dockerconfigjson
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: e30=
---
apiVersion: v1
kind: Secret
metadata:
name: dashboard-auth
namespace: dashboard
annotations:
kube-1password: mr6spkkx7n3memkbute6ojaarm
kube-1password/vault: Kubernetes
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
name: dashboard
namespace: dashboard
spec:
type: ClusterIP
ports:
- port: 80
targetPort: auth
name: web
selector:
app: dashboard
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dashboard
namespace: dashboard
spec:
replicas: 1
selector:
matchLabels:
app: dashboard
template:
metadata:
labels:
app: dashboard
spec:
imagePullSecrets:
- name: docker-config
containers:
- args:
- --cookie-secure=false
- --provider=oidc
- --provider-display-name=Auth0
- --upstream=http://localhost:80
- --http-address=$(HOST_IP):8000
- --redirect-url=https://dash.cluster.fun/oauth2/callback
- --email-domain=marcusnoble.co.uk
- --pass-basic-auth=false
- --pass-access-token=false
- --oidc-issuer-url=https://marcusnoble.eu.auth0.com/
- --cookie-secret=KDGD6rrK6cBmryyZ4wcJ9xAUNW9AQNFT
env:
- name: HOST_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: OAUTH2_PROXY_CLIENT_ID
valueFrom:
secretKeyRef:
key: username
name: dashboard-auth
- name: OAUTH2_PROXY_CLIENT_SECRET
valueFrom:
secretKeyRef:
key: password
name: dashboard-auth
image: quay.io/oauth2-proxy/oauth2-proxy:v7.2.0
name: oauth-proxy
ports:
- containerPort: 8000
protocol: TCP
name: auth
resources:
limits:
memory: 50Mi
requests:
memory: 50Mi
- name: web
image: docker.cluster.fun/private/dashboard:latest
imagePullPolicy: Always
ports:
- containerPort: 80
name: web
resources:
limits:
memory: 50Mi
requests:
memory: 50Mi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dashboard
namespace: dashboard
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- dash.cluster.fun
secretName: dashboard-ingress
rules:
- host: dash.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: dashboard
port:
number: 80


@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: dashboard
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: dashboard
updatePolicy:
updateMode: "Auto"


@@ -1,115 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: downloads
---
apiVersion: v1
kind: Secret
metadata:
name: downloads-auth
namespace: downloads
annotations:
kube-1password: mr6spkkx7n3memkbute6ojaarm
kube-1password/vault: Kubernetes
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: downloads-auth
namespace: downloads
labels:
app: downloads-auth
spec:
replicas: 1
selector:
matchLabels:
app: downloads-auth
template:
metadata:
labels:
app: downloads-auth
spec:
containers:
- args:
- --cookie-secure=false
- --provider=oidc
- --provider-display-name=Auth0
- --upstream=http://inlets.inlets.svc.cluster.local
- --http-address=$(HOST_IP):8080
- --redirect-url=https://downloads.cluster.fun/oauth2/callback
- --email-domain=*
- --pass-basic-auth=false
- --pass-access-token=false
- --oidc-issuer-url=https://marcusnoble.eu.auth0.com/
- --cookie-secret=KDGD6rrK6cBmryyZ4wcJ9xAUNW9AQN
env:
- name: HOST_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: OAUTH2_PROXY_CLIENT_ID
valueFrom:
secretKeyRef:
key: username
name: downloads-auth
- name: OAUTH2_PROXY_CLIENT_SECRET
valueFrom:
secretKeyRef:
key: password
name: downloads-auth
image: quay.io/oauth2-proxy/oauth2-proxy:v5.1.1
name: oauth-proxy
ports:
- containerPort: 8080
protocol: TCP
resources:
limits:
memory: 250Mi
requests:
memory: 250Mi
---
apiVersion: v1
kind: Service
metadata:
name: downloads-auth
namespace: downloads
labels:
app: downloads-auth
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: downloads-auth
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: downloads-auth
namespace: downloads
labels:
app: downloads-auth
annotations:
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
tls:
- hosts:
- downloads.cluster.fun
secretName: downloads-ingress
rules:
- host: downloads.cluster.fun
http:
paths:
- path: /
backend:
serviceName: downloads-auth
servicePort: 80


@@ -0,0 +1,63 @@
apiVersion: v1
kind: Service
metadata:
name: feed-fetcher
namespace: feed-fetcher
spec:
type: ClusterIP
ports:
- port: 80
targetPort: web
name: web
selector:
app: feed-fetcher
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: feed-fetcher
namespace: feed-fetcher
spec:
replicas: 2
selector:
matchLabels:
app: feed-fetcher
template:
metadata:
labels:
app: feed-fetcher
spec:
containers:
- name: web
image: docker.cluster.fun/averagemarcus/feed-fetcher:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
name: web
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: feed-fetcher
namespace: feed-fetcher
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- feed-fetcher.cluster.fun
secretName: feed-fetcher-ingress
rules:
- host: feed-fetcher.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: feed-fetcher
port:
number: 80


@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: feed-fetcher
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: feed-fetcher
updatePolicy:
updateMode: "Auto"


@@ -0,0 +1,92 @@
apiVersion: v1
kind: Secret
metadata:
name: git-sync-github
namespace: git-sync
annotations:
kube-1password: cfo2ufhgem57clbscxetxgevue
kube-1password/vault: Kubernetes
kube-1password/password-key: token
type: Opaque
data:
---
apiVersion: v1
kind: Secret
metadata:
name: git-sync-gitea
namespace: git-sync
annotations:
kube-1password: b7kpdlcvt7y63bozu3i4j4lojm
kube-1password/vault: Kubernetes
kube-1password/password-key: token
type: Opaque
data:
---
apiVersion: v1
kind: Secret
metadata:
name: git-sync-gitlab
namespace: git-sync
annotations:
kube-1password: t47v3xdgadiifgoi4wmqibrlty
kube-1password/vault: Kubernetes
kube-1password/password-key: token
type: Opaque
data:
---
apiVersion: v1
kind: Secret
metadata:
name: git-sync-bitbucket
namespace: git-sync
annotations:
kube-1password: adrki45krr2tq34sug7dhdk5iy
kube-1password/vault: Kubernetes
kube-1password/password-key: token
type: Opaque
data:
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: git-sync
namespace: git-sync
spec:
schedule: "0 */1 * * *"
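# Runs at minute 0 of every hour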
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 1
jobTemplate:
metadata:
labels:
cronjob: git-sync
spec:
backoffLimit: 1
template:
spec:
containers:
- name: sync
image: docker.cluster.fun/averagemarcus/git-sync:latest
imagePullPolicy: Always
env:
- name: GITHUB_TOKEN
valueFrom:
secretKeyRef:
name: git-sync-github
key: token
- name: GITEA_TOKEN
valueFrom:
secretKeyRef:
name: git-sync-gitea
key: token
- name: GITLAB_TOKEN
valueFrom:
secretKeyRef:
name: git-sync-gitlab
key: token
- name: BITBUCKET_TOKEN
valueFrom:
secretKeyRef:
name: git-sync-bitbucket
key: token
restartPolicy: Never


@@ -1,9 +1,4 @@
 apiVersion: v1
-kind: Namespace
-metadata:
-name: gitea
----
-apiVersion: v1
 kind: Secret
 metadata:
 name: gitea-secret-key
@@ -47,7 +42,7 @@ spec:
 spec:
 containers:
 - name: git
-image: gitea/gitea:1.11
+image: gitea/gitea:1.12.3
 env:
 - name: APP_NAME
 value: "Git"
@@ -94,17 +89,16 @@ spec:
 requests:
 storage: 20Gi
 ---
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
 name: git
 namespace: gitea
 annotations:
 cert-manager.io/cluster-issuer: letsencrypt
-traefik.ingress.kubernetes.io/frontend-entry-points: http,https
-traefik.ingress.kubernetes.io/redirect-entry-point: https
-traefik.ingress.kubernetes.io/redirect-permanent: "true"
+nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
 spec:
+ingressClassName: nginx
 tls:
 - hosts:
 - git.cluster.fun
@@ -114,6 +108,9 @@ spec:
 http:
 paths:
 - path: /
+pathType: ImplementationSpecific
 backend:
-serviceName: git
-servicePort: 80
+service:
+name: git
+port:
+number: 80


@@ -0,0 +1,68 @@
apiVersion: v1
kind: Service
metadata:
name: goplayground
namespace: goplayground
spec:
type: ClusterIP
ports:
- port: 80
targetPort: web
name: web
selector:
app: goplayground
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: goplayground
namespace: goplayground
spec:
replicas: 1
selector:
matchLabels:
app: goplayground
template:
metadata:
labels:
app: goplayground
spec:
containers:
- name: web
image: x1unix/go-playground:1.6.2
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000
name: web
resources:
limits:
memory: 20Mi
requests:
memory: 20Mi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: goplayground
namespace: goplayground
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- go.cluster.fun
secretName: goplayground-ingress
rules:
- host: go.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: goplayground
port:
number: 80


@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: goplayground
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: goplayground
updatePolicy:
updateMode: "Auto"


@@ -1,57 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: harbor
---
apiVersion: v1
kind: Secret
metadata:
name: harbor-values
namespace: harbor
annotations:
kube-1password: igey7vjjiqmj25v64eck7cyj34
kube-1password/vault: Kubernetes
kube-1password/secret-text-key: values.yaml
type: Opaque
---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
name: harbor
namespace: harbor
spec:
chart:
repository: https://helm.goharbor.io
name: harbor
version: 1.3.2
maxHistory: 4
skipCRDs: false
valuesFrom:
- secretKeyRef:
name: harbor-values
namespace: harbor
key: values.yaml
optional: false
values:
portal:
resources:
requests:
memory: 64Mi
core:
resources:
requests:
memory: 64Mi
jobservice:
resources:
requests:
memory: 64Mi
registry:
registry:
resources:
requests:
memory: 64Mi
controller:
resources:
requests:
memory: 64Mi


@@ -0,0 +1,133 @@
apiVersion: v1
kind: Namespace
metadata:
name: harbor
---
apiVersion: v1
kind: Secret
metadata:
name: harbor-values
namespace: harbor
annotations:
kube-1password: igey7vjjiqmj25v64eck7cyj34
kube-1password/vault: Kubernetes
kube-1password/secret-text-key: values.yaml
type: Opaque
---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
name: harbor
namespace: harbor
spec:
chart:
repository: https://helm.goharbor.io
name: harbor
version: 1.7.0
maxHistory: 4
skipCRDs: false
valuesFrom:
- secretKeyRef:
name: harbor-values
namespace: harbor
key: values.yaml
optional: false
values:
fullnameOverride: harbor-harbor-harbor
externalURL: https://docker.cluster.fun
updateStrategy:
type: Recreate
expose:
type: ingress
tls:
enabled: true
certSource: secret
secret:
secretName: harbor-harbor-ingress
ingress:
hosts:
core: docker.cluster.fun
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
portal:
replicas: 2
priorityClassName: system-cluster-critical
resources:
requests:
memory: 64Mi
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: component
operator: In
values:
- portal
- key: app
operator: In
values:
- harbor
topologyKey: kubernetes.io/hostname
core:
replicas: 2
priorityClassName: system-cluster-critical
resources:
requests:
memory: 64Mi
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: component
operator: In
values:
- core
- key: app
operator: In
values:
- harbor
topologyKey: kubernetes.io/hostname
jobservice:
replicas: 1
resources:
requests:
memory: 64Mi
jobLoggers:
- stdout
registry:
replicas: 2
priorityClassName: system-cluster-critical
registry:
resources:
requests:
memory: 64Mi
controller:
resources:
requests:
memory: 64Mi
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: component
operator: In
values:
- registry
- key: app
operator: In
values:
- harbor
topologyKey: kubernetes.io/hostname
chartmuseum:
enabled: false
notary:
enabled: false
trivy:
enabled: false
metrics:
enabled: true


@@ -1,103 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: inlets
---
apiVersion: v1
kind: Secret
metadata:
name: inlets
namespace: inlets
annotations:
kube-1password: podju6t2s2osc3vbkimyce25ti
kube-1password/vault: Kubernetes
kube-1password/password-key: token
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
name: inlets
namespace: inlets
labels:
app: inlets
spec:
type: ClusterIP
ports:
- port: 80
protocol: TCP
targetPort: 8000
selector:
app: inlets
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: inlets
namespace: inlets
labels:
app: inlets
spec:
replicas: 1
selector:
matchLabels:
app: inlets
template:
metadata:
labels:
app: inlets
spec:
containers:
- name: inlets
image: inlets/inlets:2.7.0
imagePullPolicy: Always
command: ["inlets"]
args:
- "server"
- "--token-from=/var/inlets/token"
volumeMounts:
- name: inlets-token-volume
mountPath: /var/inlets/
volumes:
- name: inlets-token-volume
secret:
secretName: inlets
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: inlets
namespace: inlets
spec:
rules:
- host: inlets.cluster.fun
http:
paths:
- path: /
backend:
serviceName: inlets
servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: pyload
namespace: inlets
annotations:
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
tls:
- hosts:
- pyload.cluster.fun
secretName: pyload-ingress
rules:
- host: pyload.cluster.fun
http:
paths:
- path: /
backend:
serviceName: inlets
servicePort: 80


@@ -1,9 +1,4 @@
 apiVersion: v1
-kind: Namespace
-metadata:
-name: kube-janitor
----
-apiVersion: v1
 kind: ServiceAccount
 metadata:
 name: kube-janitor
@@ -69,6 +64,8 @@ metadata:
 version: v20.4.1
 name: kube-janitor
 namespace: kube-janitor
+annotations:
+configmap.reloader.stakater.com/reload: "kube-janitor"
 spec:
 replicas: 1
 selector:
@@ -88,7 +85,7 @@ spec:
 - --interval=15
 - --rules-file=/config/rules.yaml
 - --include-namespaces=tekton-pipelines
-- --include-resources=pods
+- --include-resources=pods,pipelineruns,taskruns
 resources:
 limits:
 memory: 100Mi


@@ -1,114 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: linx-server
---
apiVersion: v1
kind: ConfigMap
metadata:
name: linx-server
namespace: linx-server
data:
linx-server.conf: |-
sitename = share
maxsize = 524288000
maxexpiry = 0
selifpath = f
nologs = false
force-random-filename = false
s3-endpoint = https://s3.fr-par.scw.cloud
s3-region = fr-par
s3-bucket = cluster.fun-linx
---
apiVersion: v1
kind: Secret
metadata:
name: linx-server-s3
namespace: linx-server
annotations:
kube-1password: d5dgclm3qrxd4fntivv26ec3ee
kube-1password/vault: Kubernetes
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
name: linx-server
namespace: linx-server
spec:
type: ClusterIP
ports:
- port: 80
targetPort: web
name: web
selector:
app: linx-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: linx-server
namespace: linx-server
spec:
replicas: 2
selector:
matchLabels:
app: linx-server
template:
metadata:
labels:
app: linx-server
spec:
containers:
- name: web
image: andreimarcu/linx-server:version-2.3.5
imagePullPolicy: Always
args:
- -config
- /config/linx-server.conf
ports:
- containerPort: 8080
name: web
env:
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: linx-server-s3
key: username
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: linx-server-s3
key: password
volumeMounts:
- name: config
mountPath: /config
volumes:
- name: config
configMap:
name: linx-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: linx-server
namespace: linx-server
annotations:
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
tls:
- hosts:
- share.cluster.fun
secretName: linx-server-ingress
rules:
- host: share.cluster.fun
http:
paths:
- path: /
backend:
serviceName: linx-server
servicePort: 80


@@ -1,175 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: logging
---
apiVersion: v1
kind: Secret
metadata:
name: grafana-credentials
namespace: logging
annotations:
kube-1password: wpynfxkdipeeacyfxkvtdsuj54
kube-1password/vault: Kubernetes
type: Opaque
---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
name: loki
namespace: logging
spec:
chart:
repository: https://grafana.github.io/loki/charts
name: loki-stack
version: 0.36.2
maxHistory: 4
skipCRDs: false
values:
fluent-bit:
enabled: "true"
promtail:
enabled: "true"
loki:
persistence:
enabled: "true"
size: 10Gi
---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
name: grafana
namespace: logging
spec:
chart:
repository: https://kubernetes-charts.storage.googleapis.com
name: grafana
version: 5.0.22
maxHistory: 4
skipCRDs: false
values:
image:
tag: 7.0.0
admin:
existingSecret: "grafana-credentials"
userKey: username
passwordKey: password
persistence:
enabled: "false"
datasources:
datasources.yaml:
apiVersion: 1
datasources:
- name: Loki
type: loki
url: http://logging-loki.logging:3100
access: proxy
jsonData:
maxLines: 1000
---
apiVersion: v1
kind: Secret
metadata:
name: grafana-auth
namespace: logging
annotations:
kube-1password: mr6spkkx7n3memkbute6ojaarm
kube-1password/vault: Kubernetes
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: grafana-auth
namespace: logging
labels:
app: grafana-auth
spec:
replicas: 1
selector:
matchLabels:
app: grafana-auth
template:
metadata:
labels:
app: grafana-auth
spec:
containers:
- args:
- --cookie-secure=false
- --provider=oidc
- --provider-display-name=Auth0
- --upstream=http://logging-grafana.logging.svc.cluster.local
- --http-address=$(HOST_IP):8080
- --redirect-url=https://grafana.cluster.fun/oauth2/callback
- --email-domain=marcusnoble.co.uk
- --pass-basic-auth=false
- --pass-access-token=false
- --oidc-issuer-url=https://marcusnoble.eu.auth0.com/
- --cookie-secret=KDGD6rrK6cBmryyZ4wcJ9xAUNW9AQN
env:
- name: HOST_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: OAUTH2_PROXY_CLIENT_ID
valueFrom:
secretKeyRef:
key: username
name: grafana-auth
- name: OAUTH2_PROXY_CLIENT_SECRET
valueFrom:
secretKeyRef:
key: password
name: grafana-auth
image: quay.io/oauth2-proxy/oauth2-proxy:v5.1.1
name: oauth-proxy
ports:
- containerPort: 8080
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: grafana-auth
namespace: logging
labels:
app: grafana-auth
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: grafana-auth
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: grafana-auth
namespace: logging
labels:
app: grafana-auth
annotations:
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
tls:
- hosts:
- grafana.cluster.fun
secretName: grafana-ingress
rules:
- host: grafana.cluster.fun
http:
paths:
- path: /
backend:
serviceName: grafana-auth
servicePort: 80


@@ -1,255 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: chat
---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
name: matrix
namespace: chat
spec:
chart:
repository: https://dacruz21.github.io/helm-charts
name: matrix
version: 1.1.2
maxHistory: 4
values:
matrix:
serverName: "matrix.cluster.fun"
telemetry: false
hostname: "matrix.cluster.fun"
presence: true
blockNonAdminInvites: false
search: true
adminEmail: "matrix@marcusnoble.co.uk"
uploads:
maxSize: 100M
maxPixels: 32M
federation:
enabled: false
allowPublicRooms: false
blacklist:
- '127.0.0.0/8'
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'
registration:
enabled: false
allowGuests: false
urlPreviews:
enabled: true
rules:
maxSize: 4M
ip:
blacklist:
- '127.0.0.0/8'
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'
volumes:
media:
capacity: 4Gi
signingKey:
capacity: 1Gi
postgresql:
enabled: true
persistence:
size: 4Gi
synapse:
image:
repository: "matrixdotorg/synapse"
tag: v1.12.4
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 80
replicaCount: 1
resources: {}
riot:
enabled: true
integrations:
enabled: true
ui: "https://scalar.vector.im/"
api: "https://scalar.vector.im/api"
widgets:
- "https://scalar.vector.im/_matrix/integrations/v1"
- "https://scalar.vector.im/api"
- "https://scalar-staging.vector.im/_matrix/integrations/v1"
- "https://scalar-staging.vector.im/api"
- "https://scalar-staging.riot.im/scalar/api"
# Experimental features in riot-web, see https://github.com/vector-im/riot-web/blob/develop/docs/labs.md
labs:
- feature_pinning
- feature_custom_status
- feature_state_counters
- feature_many_integration_managers
- feature_mjolnir
- feature_dm_verification
- feature_bridge_state
- feature_presence_in_room_list
- feature_custom_themes
# Servers to show in the Explore menu (the current server is always shown)
roomDirectoryServers: []
# Prefix before permalinks generated when users share links to rooms, users, or messages. If running an unfederated Synapse, set the below to the URL of your Riot instance.
permalinkPrefix: "https://chat.cluster.fun"
image:
repository: "vectorim/riot-web"
tag: v1.6.0
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 80
replicaCount: 1
resources: {}
# Settings for Coturn TURN relay, used for routing voice calls
coturn:
enabled: false
mail:
enabled: false
relay:
enabled: false
bridges:
irc:
enabled: false
whatsapp:
enabled: false
discord:
enabled: false
networkPolicies:
enabled: false
ingress:
enabled: false
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: matrix
namespace: chat
annotations:
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
tls:
- hosts:
- matrix.cluster.fun
secretName: matrix-ingress
rules:
- host: matrix.cluster.fun
http:
paths:
- path: /.well-known/matrix
backend:
serviceName: well-known
servicePort: 80
- path: /
backend:
serviceName: chat-matrix-synapse
servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: riot
namespace: chat
annotations:
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
tls:
- hosts:
- chat.cluster.fun
secretName: riot-ingress
rules:
- host: chat.cluster.fun
http:
paths:
- path: /
backend:
serviceName: chat-matrix-riot
servicePort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: well-known
namespace: chat
spec:
replicas: 1
selector:
matchLabels:
app: well-known
template:
metadata:
labels:
app: well-known
spec:
containers:
- name: web
image: nginx
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
name: web
volumeMounts:
- name: well-known
mountPath: /usr/share/nginx/html/.well-known/matrix
volumes:
- name: well-known
configMap:
name: well-known
---
apiVersion: v1
kind: Service
metadata:
name: well-known
namespace: chat
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
name: web
selector:
app: well-known
---
apiVersion: v1
kind: ConfigMap
metadata:
name: well-known
namespace: chat
data:
server: |-
{
"m.server": "matrix.cluster.fun:443"
}


@@ -0,0 +1,126 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: matrix
namespace: chat
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
ingressClassName: nginx
tls:
- hosts:
- matrix.cluster.fun
secretName: matrix-ingress
rules:
- host: matrix.cluster.fun
http:
paths:
- path: /.well-known/matrix
pathType: ImplementationSpecific
backend:
service:
name: well-known
port:
number: 80
- path: /
pathType: ImplementationSpecific
backend:
service:
name: chat-matrix-synapse
port:
number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: riot
namespace: chat
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
ingressClassName: nginx
tls:
- hosts:
- chat.cluster.fun
secretName: riot-ingress
rules:
- host: chat.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: chat-matrix-riot
port:
number: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: well-known
namespace: chat
annotations:
configmap.reloader.stakater.com/reload: "well-known"
spec:
replicas: 1
selector:
matchLabels:
app: well-known
template:
metadata:
labels:
app: well-known
spec:
containers:
- name: web
image: nginx
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
name: web
volumeMounts:
- name: well-known
mountPath: /usr/share/nginx/html/.well-known/matrix
resources:
limits:
memory: 10Mi
requests:
memory: 10Mi
volumes:
- name: well-known
configMap:
name: well-known
---
apiVersion: v1
kind: Service
metadata:
name: well-known
namespace: chat
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
name: web
selector:
app: well-known
---
apiVersion: v1
kind: ConfigMap
metadata:
name: well-known
namespace: chat
data:
server: |-
{
"m.server": "matrix.cluster.fun:443"
}


@@ -0,0 +1,97 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: grafana
namespace: auth-proxy
labels:
app: grafana
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- grafana.cluster.fun
secretName: grafana-ingress
rules:
- host: grafana.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: auth-proxy
port:
number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: prometheus
namespace: auth-proxy
labels:
app: prometheus
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- prometheus.cluster.fun
secretName: prometheus-ingress
rules:
- host: prometheus.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: auth-proxy
port:
number: 80
---
apiVersion: v1
kind: Secret
metadata:
name: prometheus-credentials
namespace: monitoring
annotations:
kube-1password: m7c2n5gqybiyxj6ylydju2nljm
kube-1password/vault: Kubernetes
kube-1password/password-key: auth
type: Opaque
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: prometheus-cloud
namespace: monitoring
labels:
app: prometheus-cloud
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: prometheus-credentials
nginx.ingress.kubernetes.io/auth-secret-type: auth-file
spec:
ingressClassName: nginx
tls:
- hosts:
- prometheus-cloud.cluster.fun
secretName: prometheus-cloud-ingress
rules:
- host: prometheus-cloud.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: prometheus-server
port:
number: 80


@@ -0,0 +1,255 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-state-metrics
namespace: monitoring
labels:
app.kubernetes.io/name: kube-state-metrics
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
name: kube-state-metrics
rules:
- apiGroups: ["certificates.k8s.io"]
resources:
- certificatesigningrequests
verbs: ["list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["list", "watch"]
- apiGroups: ["batch"]
resources:
- cronjobs
verbs: ["list", "watch"]
- apiGroups: ["extensions", "apps"]
resources:
- daemonsets
verbs: ["list", "watch"]
- apiGroups: ["extensions", "apps"]
resources:
- deployments
verbs: ["list", "watch"]
- apiGroups: [""]
resources:
- endpoints
verbs: ["list", "watch"]
- apiGroups: ["autoscaling"]
resources:
- horizontalpodautoscalers
verbs: ["list", "watch"]
- apiGroups: ["extensions", "networking.k8s.io"]
resources:
- ingresses
verbs: ["list", "watch"]
- apiGroups: ["batch"]
resources:
- jobs
verbs: ["list", "watch"]
- apiGroups: [""]
resources:
- limitranges
verbs: ["list", "watch"]
- apiGroups: ["admissionregistration.k8s.io"]
resources:
- mutatingwebhookconfigurations
verbs: ["list", "watch"]
- apiGroups: [""]
resources:
- namespaces
verbs: ["list", "watch"]
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs: ["list", "watch"]
- apiGroups: [""]
resources:
- nodes
verbs: ["list", "watch"]
- apiGroups: [""]
resources:
- persistentvolumeclaims
verbs: ["list", "watch"]
- apiGroups: [""]
resources:
- persistentvolumes
verbs: ["list", "watch"]
- apiGroups: ["policy"]
resources:
- poddisruptionbudgets
verbs: ["list", "watch"]
- apiGroups: [""]
resources:
- pods
verbs: ["list", "watch"]
- apiGroups: ["extensions", "apps"]
resources:
- replicasets
verbs: ["list", "watch"]
- apiGroups: [""]
resources:
- replicationcontrollers
verbs: ["list", "watch"]
- apiGroups: [""]
resources:
- resourcequotas
verbs: ["list", "watch"]
- apiGroups: [""]
resources:
- secrets
verbs: ["list", "watch"]
- apiGroups: [""]
resources:
- services
verbs: ["list", "watch"]
- apiGroups: ["apps"]
resources:
- statefulsets
verbs: ["list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources:
- storageclasses
verbs: ["list", "watch"]
- apiGroups: ["admissionregistration.k8s.io"]
resources:
- validatingwebhookconfigurations
verbs: ["list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources:
- volumeattachments
verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
name: kube-state-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kube-state-metrics
subjects:
- kind: ServiceAccount
name: kube-state-metrics
namespace: monitoring
---
apiVersion: v1
kind: Service
metadata:
name: kube-state-metrics
namespace: monitoring
labels:
app.kubernetes.io/name: kube-state-metrics
annotations:
prometheus.io/scrape: 'true'
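# Picked up by the kubernetes-service-endpoints scrape job in the Prometheus config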
spec:
type: "ClusterIP"
ports:
- name: "http"
protocol: TCP
port: 8080
targetPort: 8080
selector:
app.kubernetes.io/name: kube-state-metrics
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-state-metrics
namespace: monitoring
labels:
app.kubernetes.io/name: kube-state-metrics
spec:
selector:
matchLabels:
app.kubernetes.io/name: kube-state-metrics
replicas: 1
template:
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
spec:
serviceAccountName: kube-state-metrics
securityContext:
fsGroup: 65534
runAsGroup: 65534
runAsUser: 65534
containers:
- name: kube-state-metrics
args:
- --resources=certificatesigningrequests
- --resources=configmaps
- --resources=cronjobs
- --resources=daemonsets
- --resources=deployments
- --resources=endpoints
- --resources=horizontalpodautoscalers
- --resources=ingresses
- --resources=jobs
- --resources=limitranges
- --resources=mutatingwebhookconfigurations
- --resources=namespaces
- --resources=networkpolicies
- --resources=nodes
- --resources=persistentvolumeclaims
- --resources=persistentvolumes
- --resources=poddisruptionbudgets
- --resources=pods
- --resources=replicasets
- --resources=replicationcontrollers
- --resources=resourcequotas
- --resources=secrets
- --resources=services
- --resources=statefulsets
- --resources=storageclasses
- --resources=validatingwebhookconfigurations
- --resources=volumeattachments
imagePullPolicy: IfNotPresent
image: "k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.1.0"
ports:
- containerPort: 8080
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 5
---


@@ -0,0 +1,87 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus-node-exporter
namespace: monitoring
labels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: node-exporter
---
apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: node-exporter
name: prometheus-node-exporter
namespace: monitoring
spec:
clusterIP: None
ports:
- name: metrics
port: 9100
protocol: TCP
targetPort: 9100
selector:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: node-exporter
type: "ClusterIP"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: node-exporter
name: prometheus-node-exporter
namespace: monitoring
spec:
selector:
matchLabels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: node-exporter
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: node-exporter
spec:
serviceAccountName: prometheus-node-exporter
containers:
- name: prometheus-node-exporter
image: "prom/node-exporter:v1.1.2"
imagePullPolicy: "IfNotPresent"
args:
- --path.procfs=/host/proc
- --path.sysfs=/host/sys
- --no-collector.wifi
- --no-collector.hwmon
- --no-collector.netclass
- --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
- --web.listen-address=:9100
ports:
- name: metrics
containerPort: 9100
hostPort: 9100
volumeMounts:
- name: proc
mountPath: /host/proc
readOnly: true
- name: sys
mountPath: /host/sys
readOnly: true
hostNetwork: true
hostPID: true
volumes:
- name: proc
hostPath:
path: /proc
- name: sys
hostPath:
path: /sys
---


@@ -0,0 +1,491 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus-server
namespace: monitoring
labels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: server
---
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-server
namespace: monitoring
labels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: server
data:
alerting_rules.yml: |
{}
alerts: |
{}
prometheus.yml: |
global:
evaluation_interval: 1m
scrape_interval: 1m
scrape_timeout: 10s
rule_files:
- /etc/config/recording_rules.yml
- /etc/config/alerting_rules.yml
- /etc/config/rules
- /etc/config/alerts
scrape_configs:
- job_name: prometheus
static_configs:
- targets:
- localhost:9090
- bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
job_name: kubernetes-apiservers
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- action: keep
regex: default;kubernetes;https
source_labels:
- __meta_kubernetes_namespace
- __meta_kubernetes_service_name
- __meta_kubernetes_endpoint_port_name
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
- bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
job_name: kubernetes-nodes
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- replacement: kubernetes.default.svc:443
target_label: __address__
- regex: (.+)
replacement: /api/v1/nodes/$1/proxy/metrics
source_labels:
- __meta_kubernetes_node_name
target_label: __metrics_path__
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
- bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
job_name: kubernetes-nodes-cadvisor
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- replacement: kubernetes.default.svc:443
target_label: __address__
- regex: (.+)
replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
source_labels:
- __meta_kubernetes_node_name
target_label: __metrics_path__
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
- job_name: kubernetes-service-endpoints
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_scrape
- action: replace
regex: (https?)
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_scheme
target_label: __scheme__
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_service_annotation_prometheus_io_port
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- action: replace
source_labels:
- __meta_kubernetes_service_name
target_label: kubernetes_name
- action: replace
source_labels:
- __meta_kubernetes_pod_node_name
target_label: kubernetes_node
- job_name: kubernetes-service-endpoints-slow
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_scrape_slow
- action: replace
regex: (https?)
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_scheme
target_label: __scheme__
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_service_annotation_prometheus_io_port
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- action: replace
source_labels:
- __meta_kubernetes_service_name
target_label: kubernetes_name
- action: replace
source_labels:
- __meta_kubernetes_pod_node_name
target_label: kubernetes_node
scrape_interval: 5m
scrape_timeout: 30s
- honor_labels: true
job_name: prometheus-pushgateway
kubernetes_sd_configs:
- role: service
relabel_configs:
- action: keep
regex: pushgateway
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_probe
- job_name: kubernetes-services
kubernetes_sd_configs:
- role: service
metrics_path: /probe
params:
module:
- http_2xx
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_probe
- source_labels:
- __address__
target_label: __param_target
- replacement: blackbox
target_label: __address__
- source_labels:
- __param_target
target_label: instance
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- source_labels:
- __meta_kubernetes_service_name
target_label: kubernetes_name
- job_name: kubernetes-pods
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_scrape
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_pod_annotation_prometheus_io_port
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: kubernetes_pod_name
- action: drop
regex: Pending|Succeeded|Failed
source_labels:
- __meta_kubernetes_pod_phase
- job_name: kubernetes-pods-slow
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_scrape_slow
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_pod_annotation_prometheus_io_port
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: kubernetes_pod_name
- action: drop
regex: Pending|Succeeded|Failed
source_labels:
- __meta_kubernetes_pod_phase
scrape_interval: 5m
scrape_timeout: 30s
- job_name: 'prometheus-blackbox-exporter-ping'
metrics_path: /probe
params:
module: [icmp_ping]
static_configs:
- targets: []
relabel_configs:
- source_labels: [__address__]
target_label: __param_target
- source_labels: [__param_target]
target_label: instance
- target_label: __address__
replacement: blackbox-exporter:9115
- job_name: 'prometheus-blackbox-exporter-http'
metrics_path: /probe
params:
module: [http_2xx]
static_configs:
- targets: []
relabel_configs:
- source_labels: [__address__]
target_label: __param_target
- source_labels: [__param_target]
target_label: instance
- target_label: __address__
replacement: blackbox-exporter:9115
- job_name: 'node-exporter'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_endpoints_name]
regex: 'node-exporter'
action: keep
recording_rules.yml: |
{}
rules: |
{}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: prometheus-server
namespace: monitoring
labels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: server
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "8Gi"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: server
name: prometheus-server
rules:
- apiGroups:
- ""
resources:
- nodes
- nodes/proxy
- nodes/metrics
- services
- endpoints
- pods
- ingresses
- configmaps
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses/status
- ingresses
verbs:
- get
- list
- watch
- nonResourceURLs:
- "/metrics"
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: server
name: prometheus-server
subjects:
- kind: ServiceAccount
name: prometheus-server
namespace: monitoring
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus-server
---
apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: server
name: prometheus-server
namespace: monitoring
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 9090
selector:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: server
sessionAffinity: None
type: "ClusterIP"
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: server
name: prometheus-server
namespace: monitoring
spec:
strategy:
type: Recreate
selector:
matchLabels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: server
replicas: 1
template:
metadata:
labels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/component: server
spec:
serviceAccountName: prometheus-server
containers:
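      # Sidecar that watches the mounted ConfigMap and triggers a Prometheus config reload via /-/reload when it changes.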
- name: prometheus-server-configmap-reload
image: "jimmidyson/configmap-reload:v0.5.0"
imagePullPolicy: "IfNotPresent"
args:
- --volume-dir=/etc/config
- --webhook-url=http://127.0.0.1:9090/-/reload
volumeMounts:
- name: config-volume
mountPath: /etc/config
readOnly: true
- name: prometheus-server
image: "prom/prometheus:v2.27.1"
imagePullPolicy: "IfNotPresent"
args:
- --storage.tsdb.retention.time=15d
- --config.file=/etc/config/prometheus.yml
- --storage.tsdb.path=/data
- --web.console.libraries=/etc/prometheus/console_libraries
- --web.console.templates=/etc/prometheus/consoles
- --web.enable-lifecycle
ports:
- containerPort: 9090
readinessProbe:
httpGet:
path: /-/ready
port: 9090
initialDelaySeconds: 30
periodSeconds: 5
timeoutSeconds: 30
failureThreshold: 3
successThreshold: 1
livenessProbe:
httpGet:
path: /-/healthy
port: 9090
initialDelaySeconds: 30
periodSeconds: 15
timeoutSeconds: 30
failureThreshold: 3
successThreshold: 1
volumeMounts:
- name: config-volume
mountPath: /etc/config
- name: storage-volume
mountPath: /data
subPath: ""
securityContext:
fsGroup: 65534
runAsGroup: 65534
runAsNonRoot: true
runAsUser: 65534
terminationGracePeriodSeconds: 300
volumes:
- name: config-volume
configMap:
name: prometheus-server
- name: storage-volume
persistentVolumeClaim:
claimName: prometheus-server
---


@@ -0,0 +1,313 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: promtail
namespace: monitoring
labels:
app.kubernetes.io/name: promtail
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: promtail
namespace: monitoring
labels:
app.kubernetes.io/name: promtail
spec:
allowPrivilegeEscalation: false
fsGroup:
rule: RunAsAny
hostIPC: false
hostNetwork: false
hostPID: false
privileged: false
readOnlyRootFilesystem: true
requiredDropCapabilities:
- ALL
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- secret
- configMap
- hostPath
- projected
- downwardAPI
- emptyDir
---
apiVersion: v1
kind: ConfigMap
metadata:
name: promtail
namespace: monitoring
labels:
app.kubernetes.io/name: promtail
data:
promtail.yaml: |
client:
backoff_config:
max_period: 5m
max_retries: 10
min_period: 500ms
batchsize: 1048576
batchwait: 1s
external_labels: {}
timeout: 10s
positions:
filename: /run/promtail/positions.yaml
server:
http_listen_port: 3101
clients:
- url: http://loki.auth-proxy.svc:80/loki/api/v1/push
external_labels:
kubernetes_cluster: scaleway
target_config:
sync_period: 10s
scrape_configs:
- job_name: kubernetes-pods
pipeline_stages:
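      # Parse both Docker (JSON) and CRI log line formats, then drop noisy sources (weave-net, konnectivity, health checks, kube probes, promtail itself) before shipping to Loki.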
- docker: {}
- cri: {}
- match:
selector: '{app="weave-net"}'
action: drop
- match:
selector: '{filename=~".*konnectivity.*"}'
action: drop
- match:
selector: '{name=~".*"} |~ ".*/healthz.*"'
action: drop
- match:
selector: '{name=~".*"} |~ ".*/api/health.*"'
action: drop
- match:
selector: '{name=~".*"} |~ ".*kube-probe/.*"'
action: drop
- match:
selector: '{app="internal-proxy"}'
action: drop
- match:
selector: '{app="promtail"}'
action: drop
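      # ingress-nginx access logs: parse the JSON log format defined in the controller ConfigMap, drop /healthz requests and promote host/method/status to labels.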
- match:
selector: '{app="ingress-nginx"}'
stages:
- json:
expressions:
request_host: host
request_path: path
request_method: method
response_status: status
- drop:
source: "request_path"
value: "/healthz"
- labels:
request_host:
request_method:
response_status:
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels:
- __meta_kubernetes_pod_controller_name
regex: ([0-9a-z-.]+?)(-[0-9a-f]{8,10})?
action: replace
target_label: __tmp_controller_name
- source_labels:
- __meta_kubernetes_pod_label_app_kubernetes_io_name
- __meta_kubernetes_pod_label_app
- __tmp_controller_name
- __meta_kubernetes_pod_name
regex: ^;*([^;]+)(;.*)?$
action: replace
target_label: app
- source_labels:
- __meta_kubernetes_pod_label_app_kubernetes_io_component
- __meta_kubernetes_pod_label_component
regex: ^;*([^;]+)(;.*)?$
action: replace
target_label: component
- action: replace
source_labels:
- __meta_kubernetes_pod_node_name
target_label: node_name
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: namespace
- action: replace
replacement: $1
separator: /
source_labels:
- namespace
- app
target_label: job
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: pod
- action: replace
source_labels:
- __meta_kubernetes_pod_container_name
target_label: container
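      # Build the file glob promtail tails for each container: /var/log/pods/*<pod-uid>/<container>/*.log (the next rule handles static pods, whose on-disk directory uses the kubernetes.io/config.hash annotation instead of the pod UID).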
- action: replace
replacement: /var/log/pods/*$1/*.log
separator: /
source_labels:
- __meta_kubernetes_pod_uid
- __meta_kubernetes_pod_container_name
target_label: __path__
- action: replace
replacement: /var/log/pods/*$1/*.log
regex: true/(.*)
separator: /
source_labels:
- __meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash
- __meta_kubernetes_pod_annotation_kubernetes_io_config_hash
- __meta_kubernetes_pod_container_name
target_label: __path__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: promtail-clusterrole
labels:
app.kubernetes.io/name: promtail
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- nodes
- nodes/proxy
- services
- endpoints
- pods
verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: promtail-clusterrolebinding
labels:
app.kubernetes.io/name: promtail
subjects:
- kind: ServiceAccount
name: promtail
namespace: monitoring
roleRef:
kind: ClusterRole
name: promtail-clusterrole
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: promtail
namespace: monitoring
labels:
app.kubernetes.io/name: promtail
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: [promtail]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: promtail
namespace: monitoring
labels:
app.kubernetes.io/name: promtail
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: promtail
subjects:
- kind: ServiceAccount
name: promtail
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: promtail
namespace: monitoring
labels:
app.kubernetes.io/name: promtail
annotations:
configmap.reloader.stakater.com/reload: "promtail"
spec:
selector:
matchLabels:
app.kubernetes.io/name: promtail
template:
metadata:
labels:
app.kubernetes.io/name: promtail
annotations:
prometheus.io/port: http-metrics
prometheus.io/scrape: "true"
spec:
serviceAccountName: promtail
containers:
- name: promtail
image: "grafana/promtail:2.4.1"
imagePullPolicy: IfNotPresent
args:
- "-config.file=/etc/promtail/promtail.yaml"
volumeMounts:
- name: config
mountPath: /etc/promtail
- name: run
mountPath: /run/promtail
- mountPath: /var/lib/docker/containers
name: docker
readOnly: true
- mountPath: /var/log/pods
name: pods
readOnly: true
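        # Logs are read directly from the node's filesystem; read positions are checkpointed under /run/promtail so a restarted pod resumes where it left off.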
env:
- name: HOSTNAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
ports:
- containerPort: 3101
name: http-metrics
securityContext:
readOnlyRootFilesystem: true
runAsGroup: 0
runAsUser: 0
readinessProbe:
failureThreshold: 5
httpGet:
path: /ready
port: http-metrics
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
volumes:
- name: config
configMap:
name: promtail
- name: run
hostPath:
path: /run/promtail
- hostPath:
path: /var/lib/docker/containers
name: docker
- hostPath:
path: /var/log/pods
name: pods
---


@@ -1,10 +1,4 @@
 apiVersion: v1
-kind: Namespace
-metadata:
-  name: nextcloud
----
-apiVersion: v1
 kind: Secret
 metadata:
   name: nextcloud-values
@@ -23,9 +17,9 @@ metadata:
   namespace: nextcloud
 spec:
   chart:
-    repository: https://kubernetes-charts.storage.googleapis.com
+    repository: https://nextcloud.github.io/helm/
     name: nextcloud
-    version: 1.10.0
+    version: 2.6.3
   maxHistory: 5
   valuesFrom:
   - secretKeyRef:
@@ -35,14 +29,15 @@ spec:
       optional: false
   values:
     image:
-      tag: 18-apache
+      tag: 21.0.1-apache
+      pullPolicy: IfNotPresent
+    replicaCount: 1
     ingress:
       enabled: true
       annotations:
         cert-manager.io/cluster-issuer: letsencrypt
-        traefik.ingress.kubernetes.io/frontend-entry-points: http,https
-        traefik.ingress.kubernetes.io/redirect-entry-point: https
-        traefik.ingress.kubernetes.io/redirect-permanent: "true"
+        nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
+        nginx.ingress.kubernetes.io/proxy-body-size: "0"
       tls:
       - hosts:
         - nextcloud.cluster.fun
@@ -53,6 +48,8 @@ spec:
       enabled: true
       storageClass: scw-bssd-retain
       size: 5Gi
+    redis:
+      enabled: true
     cronjob:
       enabled: true
   resources:


@@ -0,0 +1,183 @@
kind: Service
apiVersion: v1
metadata:
name: nginx-ingress-service
namespace: kube-system
spec:
selector:
app.kubernetes.io/name: ingress-nginx
ports:
- protocol: TCP
port: 80
name: http
- protocol: TCP
port: 443
name: https
type: LoadBalancer
---
kind: Service
apiVersion: v1
metadata:
name: nginx-ingress-service-metrics
namespace: kube-system
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "10254"
spec:
selector:
app.kubernetes.io/name: ingress-nginx
ports:
- protocol: TCP
port: 10254
targetPort: 10254
name: metrics
---
apiVersion: v1
kind: ConfigMap
metadata:
annotations:
meta.helm.sh/release-name: kapsule-ingress
meta.helm.sh/release-namespace: kube-system
labels:
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
k8s.scw.cloud/ingress: nginx
k8s.scw.cloud/object: ConfigMap
k8s.scw.cloud/system: ingress
name: ingress-nginx-configuration
namespace: kube-system
data:
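  # JSON access-log format; promtail's ingress-nginx pipeline parses these fields.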
log-format-upstream: '{"time": "$time_iso8601", "request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status": $status, "host": "$host", "request_proto": "$server_protocol", "path": "$uri", "request_query": "$args", "request_length": $request_length, "duration": $request_time,"method": "$request_method", "http_referrer": "$http_referer", "http_user_agent": "$http_user_agent", "redirect_location": "$redirect_location" }'
plugins: "redirect_location"
location-snippet: |
set $redirect_location '';
---
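# Lua plugin (enabled via the 'plugins: redirect_location' entry above) that copies the Location response header into $redirect_location so redirects appear in the access log.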
apiVersion: v1
kind: ConfigMap
metadata:
annotations:
meta.helm.sh/release-name: kapsule-ingress
meta.helm.sh/release-namespace: kube-system
labels:
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
k8s.scw.cloud/ingress: nginx
k8s.scw.cloud/object: ConfigMap
k8s.scw.cloud/system: ingress
name: ingress-nginx-plugin-redirect-location
namespace: kube-system
data:
main.lua: |
local ngx = ngx
local _M = {}
function _M.header_filter()
ngx.var.redirect_location = ngx.resp.get_headers()["Location"]
end
return _M
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
annotations:
meta.helm.sh/release-name: kapsule-ingress
meta.helm.sh/release-namespace: kube-system
configmap.reloader.stakater.com/reload: "ingress-nginx-configuration"
labels:
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
k8s.scw.cloud/ingress: nginx
k8s.scw.cloud/object: DaemonSet
k8s.scw.cloud/system: ingress
name: nginx-ingress
namespace: kube-system
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
containers:
- args:
- /nginx-ingress-controller
- --election-id=ingress-controller-leader-nginx
- --controller-class=k8s.io/ingress-nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/ingress-nginx-tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/ingress-nginx-udp-services
- --annotations-prefix=nginx.ingress.kubernetes.io
- --watch-ingress-without-class
- --enable-metrics
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: k8s.gcr.io/ingress-nginx/controller:v1.0.0-beta.1
imagePullPolicy: IfNotPresent
name: nginx-ingress-controller
ports:
- containerPort: 80
hostPort: 80
name: http
protocol: TCP
- containerPort: 443
hostPort: 443
name: https
protocol: TCP
- containerPort: 10254
name: http-metrics
protocol: TCP
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 101
volumeMounts:
- name: plugins
mountPath: /etc/nginx/lua/plugins/redirect_location
volumes:
- name: plugins
configMap:
name: ingress-nginx-plugin-redirect-location


@@ -1,9 +1,4 @@
 apiVersion: v1
-kind: Namespace
-metadata:
-  name: node-red
----
-apiVersion: v1
 kind: Secret
 metadata:
   name: node-red
@@ -73,7 +68,7 @@ spec:
           mountPath: /data
       containers:
       - name: web
-        image: nodered/node-red:latest-12
+        image: nodered/node-red:1.1.3-12
         imagePullPolicy: Always
         ports:
         - containerPort: 1880
@@ -89,16 +84,14 @@ spec:
         persistentVolumeClaim:
           claimName: node-red
 ---
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: node-red
   namespace: node-red
   annotations:
     cert-manager.io/cluster-issuer: letsencrypt
-    traefik.ingress.kubernetes.io/frontend-entry-points: http,https
-    traefik.ingress.kubernetes.io/redirect-entry-point: https
-    traefik.ingress.kubernetes.io/redirect-permanent: "true"
+    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
 spec:
   tls:
   - hosts:
@@ -109,6 +102,9 @@ spec:
     http:
       paths:
       - path: /
+        pathType: ImplementationSpecific
         backend:
-          serviceName: node-red
-          servicePort: 80
+          service:
+            name: node-red
+            port:
+              number: 80


@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: node-red
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: node-red
updatePolicy:
updateMode: "Auto"


@@ -0,0 +1,68 @@
apiVersion: v1
kind: Service
metadata:
name: opengraph
namespace: opengraph
spec:
type: ClusterIP
ports:
- port: 80
targetPort: web
name: web
selector:
app: opengraph
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: opengraph
namespace: opengraph
spec:
replicas: 2
selector:
matchLabels:
app: opengraph
template:
metadata:
labels:
app: opengraph
spec:
containers:
- name: web
image: docker.cluster.fun/averagemarcus/opengraph-image-gen:latest
imagePullPolicy: Always
ports:
- containerPort: 3000
name: web
resources:
limits:
memory: 200Mi
requests:
memory: 200Mi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: opengraph
namespace: opengraph
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- opengraph.cluster.fun
secretName: opengraph-ingress
rules:
- host: opengraph.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: opengraph
port:
number: 80


@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: opengraph
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: opengraph
updatePolicy:
updateMode: "Auto"


@@ -0,0 +1,132 @@
apiVersion: v1
kind: Secret
metadata:
name: outline
namespace: outline
annotations:
kube-1password: maouivotrbgydslnsukbjrwgja
kube-1password/vault: Kubernetes
kube-1password/secret-text-key: .env
type: Opaque
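# Secret data is expected to be populated in-cluster from the referenced 1Password item (kube-1password annotations); no values are committed here.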
---
apiVersion: v1
kind: Service
metadata:
name: outline
namespace: outline
spec:
type: ClusterIP
ports:
- port: 80
targetPort: web
name: web
selector:
app: outline
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: outline
namespace: outline
spec:
selector:
matchLabels:
app: outline
serviceName: outline
replicas: 1
template:
metadata:
labels:
app: outline
spec:
containers:
- name: postgres
image: postgres:9-alpine
imagePullPolicy: IfNotPresent
ports:
- containerPort: 5432
name: db
env:
- name: POSTGRES_USER
value: user
- name: POSTGRES_PASSWORD
value: pass
- name: POSTGRES_DB
value: outline
- name: PGDATA
value: /var/lib/postgresql/data/outline
volumeMounts:
- name: data
mountPath: /var/lib/postgresql/data
- name: redis
image: redis:6
imagePullPolicy: IfNotPresent
ports:
- containerPort: 6379
name: redis
- name: outline
image: outlinewiki/outline:0.60.3
imagePullPolicy: IfNotPresent
# command:
# - sh
# - -c
# - |
# sleep 10
# yarn db:migrate --env=production-ssl-disabled
# echo "Done"
# sleep 300
# exit 1
env:
- name: PGSSLMODE
value: disable
- name: ALLOWED_DOMAINS
value: marcusnoble.co.uk
- name: OIDC_SCOPES
value: "openid profile email"
ports:
- containerPort: 3000
name: web
volumeMounts:
- mountPath: /opt/outline/.env
subPath: .env
name: outline-env
readOnly: true
volumes:
- name: outline-env
secret:
secretName: outline
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: outline
namespace: outline
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- outline.cluster.fun
secretName: outline-ingress
rules:
- host: outline.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: outline
port:
number: 80


@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: outline
spec:
targetRef:
apiVersion: "apps/v1"
kind: StatefulSet
name: outline
updatePolicy:
updateMode: "Auto"


@@ -0,0 +1,125 @@
apiVersion: v1
kind: Secret
metadata:
name: docker-config
namespace: paradoxfox
annotations:
kube-1password: i6ngbk5zf4k52xgwdwnfup5bby
kube-1password/vault: Kubernetes
kube-1password/secret-text-key: .dockerconfigjson
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: e30=
---
apiVersion: v1
kind: Secret
metadata:
name: etsy-token
namespace: paradoxfox
annotations:
kube-1password: akkchysgrvhawconx63plt3xgy
kube-1password/vault: Kubernetes
kube-1password/secret-text-key: password
stringData:
password: ""
---
apiVersion: v1
kind: Service
metadata:
name: paradoxfox
namespace: paradoxfox
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 443
name: web
selector:
app: paradoxfox
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: paradoxfox
namespace: paradoxfox
spec:
replicas: 1
selector:
matchLabels:
app: paradoxfox
template:
metadata:
labels:
app: paradoxfox
spec:
imagePullSecrets:
- name: docker-config
containers:
- name: web
image: docker.cluster.fun/private/paradoxfox:latest
imagePullPolicy: Always
ports:
- containerPort: 443
name: web
env:
- name: ETSY_TOKEN
valueFrom:
secretKeyRef:
name: etsy-token
key: password
resources:
limits:
memory: 200Mi
requests:
memory: 200Mi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: paradoxfox
namespace: paradoxfox
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
tls:
- hosts:
- paradoxfox.space
secretName: paradoxfox-ingress
rules:
- host: paradoxfox.space
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: paradoxfox
port:
number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: paradoxfox-www
namespace: paradoxfox
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
tls:
- hosts:
- www.paradoxfox.space
secretName: paradoxfox-www-ingress
rules:
- host: www.paradoxfox.space
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: paradoxfox
port:
number: 80


@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: paradoxfox
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: paradoxfox
updatePolicy:
updateMode: "Auto"


@@ -1,9 +1,4 @@
 apiVersion: v1
-kind: Namespace
-metadata:
-  name: qr
----
-apiVersion: v1
 kind: Service
 metadata:
   name: qr
@@ -41,21 +36,20 @@ spec:
           name: web
         resources:
           limits:
-            memory: 100Mi
+            memory: 20Mi
           requests:
-            memory: 100Mi
+            memory: 20Mi
 ---
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: qr
   namespace: qr
   annotations:
     cert-manager.io/cluster-issuer: letsencrypt
-    traefik.ingress.kubernetes.io/frontend-entry-points: http,https
-    traefik.ingress.kubernetes.io/redirect-entry-point: https
-    traefik.ingress.kubernetes.io/redirect-permanent: "true"
+    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
 spec:
+  ingressClassName: nginx
   tls:
   - hosts:
     - qr.cluster.fun
@@ -65,7 +59,10 @@ spec:
     http:
      paths:
      - path: /
+       pathType: ImplementationSpecific
       backend:
-         serviceName: qr
-         servicePort: 80
+         service:
+           name: qr
+           port:
+             number: 80

manifests/qr/vpa.yaml Normal file

@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: qr
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: qr
updatePolicy:
updateMode: "Auto"


@@ -1,105 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: rss
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: rss
namespace: rss
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: rss
namespace: rss
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
name: web
selector:
app: rss
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rss
namespace: rss
labels:
app: rss
spec:
replicas: 1
selector:
matchLabels:
app: rss
template:
metadata:
labels:
app: rss
spec:
securityContext:
fsGroup: 1000
dnsConfig:
options:
- name: ndots
value: "2"
containers:
- name: web
image: mdswanson/stringer
env:
- name: SECRET_TOKEN
value: inward-popcorn-decamp-epsilon
- name: PORT
value: "8080"
- name: DATABASE_URL
value: sqlite3:/data/stringer.db
ports:
- containerPort: 8080
name: web
resources:
limits:
memory: 308Mi
requests:
memory: 308Mi
volumeMounts:
- mountPath: /data
name: storage
volumes:
- name: storage
persistentVolumeClaim:
claimName: rss
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: rss
namespace: rss
annotations:
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
tls:
- hosts:
- rss.cluster.fun
secretName: rss-ingress
rules:
- host: rss.cluster.fun
http:
paths:
- path: /
backend:
serviceName: rss
servicePort: 80
---


@@ -1,61 +1,57 @@
+kind: PersistentVolumeClaim
 apiVersion: v1
-kind: Namespace
 metadata:
-  name: website-to-remarkable
+  name: rss-db
+  namespace: rss
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi
 ---
 apiVersion: v1
 kind: Secret
 metadata:
-  name: website-to-remarkable-auth
-  namespace: website-to-remarkable
+  name: rss-auth
+  namespace: rss
   annotations:
     kube-1password: mr6spkkx7n3memkbute6ojaarm
     kube-1password/vault: Kubernetes
 type: Opaque
 ---
 apiVersion: v1
-kind: Secret
-metadata:
-  name: website-to-remarkable
-  namespace: website-to-remarkable
-  annotations:
-    kube-1password: smp3qkv74qt72ttzkltyhiktja
-    kube-1password/vault: Kubernetes
-type: Opaque
----
-apiVersion: v1
 kind: Service
 metadata:
-  name: website-to-remarkable
-  namespace: website-to-remarkable
+  name: rss-new
+  namespace: rss
 spec:
   type: ClusterIP
   ports:
   - port: 80
-    targetPort: 8080
-    name: web
-  - port: 8000
     targetPort: 8000
-    name: noauth
+    name: web
   selector:
-    app: website-to-remarkable
+    app: rss
 ---
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: website-to-remarkable
-  namespace: website-to-remarkable
+  name: rss
+  namespace: rss
   labels:
-    app: website-to-remarkable
+    app: rss
 spec:
   replicas: 1
+  strategy:
+    type: Recreate
   selector:
     matchLabels:
-      app: website-to-remarkable
+      app: rss
   template:
     metadata:
       labels:
-        app: website-to-remarkable
+        app: rss
     spec:
       dnsConfig:
         options:
@@ -66,9 +62,9 @@ spec:
         - --cookie-secure=false
         - --provider=oidc
        - --provider-display-name=Auth0
-        - --upstream=http://localhost:8000
-        - --http-address=$(HOST_IP):8080
-        - --redirect-url=https://website-to-remarkable.cluster.fun/oauth2/callback
+        - --upstream=http://localhost:8080
+        - --http-address=$(HOST_IP):8000
+        - --redirect-url=https://rss.cluster.fun/oauth2/callback
         - --email-domain=marcusnoble.co.uk
         - --pass-basic-auth=false
         - --pass-access-token=false
@@ -84,57 +80,74 @@ spec:
           valueFrom:
             secretKeyRef:
               key: username
-              name: website-to-remarkable-auth
+              name: rss-auth
         - name: OAUTH2_PROXY_CLIENT_SECRET
           valueFrom:
             secretKeyRef:
               key: password
-              name: website-to-remarkable-auth
+              name: rss-auth
         image: quay.io/oauth2-proxy/oauth2-proxy:v5.1.1
         name: oauth-proxy
         ports:
-        - containerPort: 8080
+        - containerPort: 8000
           protocol: TCP
         resources:
           limits:
-            memory: 125Mi
+            memory: 50Mi
           requests:
-            memory: 125Mi
+            memory: 50Mi
       - name: web
-        image: docker.cluster.fun/averagemarcus/website-to-remarkable:latest
+        image: docker.cluster.fun/averagemarcus/gopherss:latest
+        imagePullPolicy: Always
         env:
-        - name: REMARKABLE_TOKEN
-          valueFrom:
-            secretKeyRef:
-              name: website-to-remarkable
-              key: password
+        - name: PORT
+          value: "8080"
+        - name: DB_PATH
+          value: /data/feeds.db
         ports:
-        - containerPort: 8000
+        - containerPort: 8080
          name: web
-        resources:
-          limits:
-            memory: 308Mi
-          requests:
-            memory: 308Mi
+        volumeMounts:
+        - mountPath: /data
+          name: storage
+        resources:
+          limits:
+            memory: 100Mi
+          requests:
+            memory: 100Mi
+      volumes:
+      - name: storage
+        persistentVolumeClaim:
+          claimName: rss-db
 ---
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
-  name: website-to-remarkable
-  namespace: website-to-remarkable
+  name: rss
+  namespace: rss
   annotations:
     cert-manager.io/cluster-issuer: letsencrypt
-    traefik.ingress.kubernetes.io/frontend-entry-points: http,https
-    traefik.ingress.kubernetes.io/redirect-entry-point: https
-    traefik.ingress.kubernetes.io/redirect-permanent: "true"
+    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
 spec:
+  ingressClassName: nginx
   tls:
   - hosts:
-    - website-to-remarkable.cluster.fun
-    secretName: website-to-remarkable-ingress
+    - rss.cluster.fun
+    secretName: rss-ingress
   rules:
-  - host: website-to-remarkable.cluster.fun
+  - host: rss.cluster.fun
     http:
       paths:
      - path: /
+       pathType: ImplementationSpecific
       backend:
-         serviceName: website-to-remarkable
-         servicePort: 80
+         service:
+           name: rss-new
+           port:
+             number: 80
 ---

manifests/rss/vpa.yaml Normal file

@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: rss
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: rss
updatePolicy:
updateMode: "Auto"


@@ -0,0 +1,106 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: skooner-user
labels:
app.kubernetes.io/name: skooner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: skooner-user
labels:
app.kubernetes.io/name: skooner
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: skooner-user
namespace: skooner
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: skooner
labels:
app.kubernetes.io/name: skooner
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: skooner
template:
metadata:
labels:
app.kubernetes.io/name: skooner
spec:
containers:
- name: skooner
image: ghcr.io/skooner-k8s/skooner:stable
imagePullPolicy: Always
ports:
- containerPort: 4654
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 4654
initialDelaySeconds: 30
timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
name: skooner
labels:
app.kubernetes.io/name: skooner
spec:
ports:
- port: 80
targetPort: 4654
name: web
selector:
app.kubernetes.io/name: skooner
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: skooner
labels:
app.kubernetes.io/name: skooner
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
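    # Pass WebSocket upgrade headers through and extend proxy timeouts so long-lived connections aren't dropped.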
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
ingressClassName: nginx
tls:
- hosts:
- skooner.cluster.fun
secretName: skooner-ingress
rules:
- host: skooner.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: skooner
port:
name: web


@@ -0,0 +1,64 @@
apiVersion: v1
kind: Service
metadata:
name: svg-to-dxf
namespace: svg-to-dxf
spec:
type: ClusterIP
ports:
- port: 80
targetPort: web
name: web
selector:
app: svg-to-dxf
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: svg-to-dxf
namespace: svg-to-dxf
spec:
replicas: 1
selector:
matchLabels:
app: svg-to-dxf
template:
metadata:
labels:
app: svg-to-dxf
spec:
containers:
- name: web
image: docker.cluster.fun/averagemarcus/svg-to-dxf:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
name: web
resources:
requests:
memory: 100Mi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: svg-to-dxf
namespace: svg-to-dxf
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
tls:
- hosts:
- svg-to-dxf.cluster.fun
secretName: svg-to-dxf-ingress
rules:
- host: svg-to-dxf.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: svg-to-dxf
port:
number: 80


@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: svg-to-dxf
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: svg-to-dxf
updatePolicy:
updateMode: "Auto"


@@ -0,0 +1,68 @@
apiVersion: v1
kind: Service
metadata:
name: talks
namespace: talks
spec:
type: ClusterIP
ports:
- port: 80
targetPort: web
name: web
selector:
app: talks
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: talks
namespace: talks
spec:
replicas: 2
selector:
matchLabels:
app: talks
template:
metadata:
labels:
app: talks
spec:
containers:
- name: web
image: docker.cluster.fun/averagemarcus/talks:latest
imagePullPolicy: Always
ports:
- containerPort: 80
name: web
resources:
limits:
memory: 50Mi
requests:
memory: 50Mi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: talks
namespace: talks
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- talks.marcusnoble.co.uk
secretName: talks-ingress
rules:
- host: talks.marcusnoble.co.uk
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: talks
port:
number: 80

manifests/talks/vpa.yaml Normal file

@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: talks
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: talks
updatePolicy:
updateMode: "Auto"

manifests/tank/tank.yaml Normal file

@@ -0,0 +1,58 @@
apiVersion: v1
kind: Secret
metadata:
name: tank
namespace: tank
annotations:
kube-1password: g6xle67quzowvvekf6zukjbbm4
kube-1password/vault: Kubernetes
kube-1password/secret-text-parse: "true"
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
name: tank
namespace: tank
spec:
type: ClusterIP
ports:
- port: 80
targetPort: web
selector:
app: tank
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: tank
namespace: tank
labels:
app: tank
spec:
replicas: 1
selector:
matchLabels:
app: tank
template:
metadata:
labels:
app: tank
spec:
containers:
- name: web
image: docker.cluster.fun/averagemarcus/tank:latest
imagePullPolicy: Always
envFrom:
- secretRef:
name: tank
ports:
- containerPort: 3000
name: web
resources:
limits:
memory: 10Mi
requests:
memory: 10Mi
---

manifests/tank/vpa.yaml Normal file

@@ -0,0 +1,11 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: tank
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: tank
updatePolicy:
updateMode: "Auto"


@@ -0,0 +1,64 @@
apiVersion: v1
kind: Service
metadata:
name: text-to-dxf
namespace: text-to-dxf
spec:
type: ClusterIP
ports:
- port: 80
targetPort: web
name: web
selector:
app: text-to-dxf
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: text-to-dxf
namespace: text-to-dxf
spec:
replicas: 1
selector:
matchLabels:
app: text-to-dxf
template:
metadata:
labels:
app: text-to-dxf
spec:
containers:
- name: web
image: docker.cluster.fun/averagemarcus/text-to-dxf:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
name: web
resources:
requests:
memory: 100Mi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: text-to-dxf
namespace: text-to-dxf
annotations:
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
tls:
- hosts:
- text-to-dxf.cluster.fun
secretName: text-to-dxf-ingress
rules:
- host: text-to-dxf.cluster.fun
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: text-to-dxf
port:
number: 80

Some files were not shown because too many files have changed in this diff.