Compare commits
21 Commits
v3.2.0...152-altern
| SHA1 |
|---|
| fb2642de2c |
| ffc7f29688 |
| 2c30bff45d |
| dc7a0ae6b7 |
| d3cf53c609 |
| 0555277c50 |
| aa4b168c44 |
| 650f1cad92 |
| 8eebd424c8 |
| a1a3aaca18 |
| d779830df6 |
| 4375cd3ebc |
| b0c7c13e5e |
| bb4db5d342 |
| 64df0e0b32 |
| 72c9634832 |
| a4cfc53581 |
| d4feefd639 |
| 434efc79d8 |
| 54771b2d78 |
| fceb36c723 |
.gitattributes (vendored): 3 changed lines
@@ -1 +1,2 @@
*.tsx linguist-detectable=false
*.html linguist-detectable=false
.gitignore (vendored): 4 changed lines
@@ -1,3 +1,7 @@
.pre-commit-config.yaml
.direnv/
result/
result
dist
.pnpm-debug.log
node_modules
README.md: 79 changed lines
@@ -3,12 +3,6 @@
A not so terrible web ui for yt-dlp.
Created for the sole purpose of *fetching* videos from my server/NAS.

Intended to be used with Docker and in standalone mode. 😎👍

Developed to be as lightweight as possible (because my server is basically an Intel Atom SBC).

The bottleneck remains yt-dlp startup time.

**Docker images are available on [Docker Hub](https://hub.docker.com/r/marcobaobao/yt-dlp-webui) or [ghcr.io](https://github.com/marcopeocchi/yt-dlp-web-ui/pkgs/container/yt-dlp-web-ui)**.

```sh
@@ -19,45 +13,9 @@ docker pull marcobaobao/yt-dlp-webui
docker pull ghcr.io/marcopeocchi/yt-dlp-web-ui:latest
```

## Video showcase
[app.webm](https://github.com/marcopeocchi/yt-dlp-web-ui/assets/35533749/91545bc4-233d-4dde-8504-27422cb26964)




### Integrated File browser
Stream or download your content, easily.



## Changelog
```
05/03/22: Korean translation by kimpig

03/03/22: cut down image size by switching to an Alpine Linux based container

01/03/22: Chinese translation by deluxghost

03/02/22: i18n enabled! I need help with the translations :/

27/01/22: Multidownload implemented!

26/01/22: Multiple downloads are being implemented. Maybe by the next release they will be there.
Refactoring and JSDoc.

04/01/22: Background jobs are now retrieved!! It's still rudimentary, but it leverages yt-dlp's resume feature.

05/05/22: Material UI update.

03/06/22: The most requested feature finally implemented: Format Selection!!

08/06/22: ARM builds.

28/06/22: Reworked the resume download feature. Now it's practically instantaneous. It no longer stops and restarts each process; references to each process are kept in memory.

12/01/23: Switched from TypeScript to Golang on the backend. It was a great effort, but it was worth it.
```

## Settings

The currently available settings are:
@@ -71,23 +29,13 @@ The currently available settings are:
- Pass custom yt-dlp arguments safely
- Download queue (limit concurrent downloads)




## Format selection

This feature is disabled by default, as the app is intended to retrieve the best quality automatically.

To enable it, just go to the settings page and enable the **Enable video/audio formats selection** flag!

## Troubleshooting
- **It says that it isn't connected / the IP in the header is not defined.**
  - You must set the server IP address in the settings section (gear icon).
- **The download doesn't start.**
  - As above, the server address is not specified, or the yt-dlp process simply takes a while to start. (Forking yt-dlp isn't fast, especially if the server is running on a lower-end/low-power NAS/server/desktop.)

## [Docker](https://github.com/marcopeocchi/yt-dlp-web-ui/pkgs/container/yt-dlp-web-ui) installation
## Docker run
## [Docker](https://github.com/marcopeocchi/yt-dlp-web-ui/pkgs/container/yt-dlp-web-ui) run
```sh
docker pull marcobaobao/yt-dlp-webui
docker run -d -p 3033:3033 -v <your dir>:/downloads marcobaobao/yt-dlp-webui
@@ -177,7 +125,7 @@ Usage yt-dlp-webui:
-port int
    Port where server will listen at (default 3033)
-qs int
    Download queue size (default 8)
    Download queue size (defaults to the number of logical CPUs. A minimum of 2 is recommended.)
-user string
    Username required for auth
-pass string
@@ -187,6 +135,7 @@ Usage yt-dlp-webui:
### Config file
By running `yt-dlp-webui` in standalone mode you also have the ability to specify a config file.
The config file **will overwrite what has been passed as a CLI argument**.
With Docker, the mounted `/conf` volume must contain a file named `config.yml`.

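For example, a standalone launch pointing at a config file might look like the minimal sketch below (the `-conf` flag name is taken from the Nix module notes later in this changeset; the binary name and paths are illustrative):

```sh
# Illustrative only: CLI flags are overridden by whatever the config file sets.
./yt-dlp-webui -conf ./config.yml
```
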
```yaml
# Simple configuration file for yt-dlp webui
@@ -284,17 +233,17 @@ Want to build your own frontend? We got you covered 🤠
`yt-dlp-webui` now exposes a nice **JSON-RPC 1.0** interface through WebSockets and HTTP POST.
It is **planned** to also expose a **gRPC** server.

Just as an overview, these are the available methods:
- Service.Exec
- Service.Progress
- Service.Formats
- Service.Pending
- Service.Running
- Service.Kill
- Service.KillAll
- Service.Clear

For more information open an issue on GitHub and I will provide more info ASAP.
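As a rough, hedged illustration (the endpoint path, request envelope, and `X-Authentication` header below are assumptions inferred from the frontend code in this changeset, not a documented contract), a call over HTTP POST could look like:

```sh
# Hypothetical example: list running downloads via JSON-RPC over HTTP POST.
# The /rpc path and the auth header are assumptions, not documented API.
curl -s -X POST "http://<server>:3033/rpc" \
  -H "Content-Type: application/json" \
  -H "X-Authentication: <token>" \
  -d '{"method": "Service.Running", "params": [], "id": 1}'
```
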

## Nix
This repo adds support for Nix(OS) in various ways through a `flake-parts` flake.
For more info, please refer to the [official documentation](https://nixos.org/learn/).

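A hedged sketch of trying the flake outputs added in this changeset (`packages.default` and `devShells.default`) straight from GitHub; the exact invocations are assumptions, not documented here:

```sh
# Sketch only: run the packaged server, or enter the dev shell from nix/devShell.nix.
nix run github:marcopeocchi/yt-dlp-web-ui
nix develop github:marcopeocchi/yt-dlp-web-ui
```
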
## What yt-dlp-webui is not
`yt-dlp-webui` isn't your ordinary website for downloading stuff from the internet, so don't ask for links to a hosted instance. It's a self-hosted platform for a Linux NAS.

## Troubleshooting
- **It says that it isn't connected / the IP in the header is not defined.**
  - You must set the server IP address in the settings section (gear icon).
- **The download doesn't start.**
  - As above, the server address is not specified, or the yt-dlp process simply takes a while to start. (Forking yt-dlp isn't fast, especially if the server is running on a lower-end/low-power NAS/server/desktop.)

env.nix: 4 changed lines (file deleted)
@@ -1,4 +0,0 @@
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  nativeBuildInputs = with pkgs.buildPackages; [ yt-dlp nodejs_22 yarn-berry go ];
}
149
flake.lock
generated
Normal file
149
flake.lock
generated
Normal file
@@ -0,0 +1,149 @@
|
||||
{
|
||||
"nodes": {
|
||||
"flake-compat": {
|
||||
"flake": false,
|
||||
"locked": {
|
||||
"lastModified": 1696426674,
|
||||
"narHash": "sha256-kvjfFW7WAETZlt09AgDn1MrtKzP7t90Vf7vypd3OL1U=",
|
||||
"owner": "edolstra",
|
||||
"repo": "flake-compat",
|
||||
"rev": "0f9255e01c2351cc7d116c072cb317785dd33b33",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "edolstra",
|
||||
"repo": "flake-compat",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"flake-parts": {
|
||||
"inputs": {
|
||||
"nixpkgs-lib": "nixpkgs-lib"
|
||||
},
|
||||
"locked": {
|
||||
"lastModified": 1722555600,
|
||||
"narHash": "sha256-XOQkdLafnb/p9ij77byFQjDf5m5QYl9b2REiVClC+x4=",
|
||||
"owner": "hercules-ci",
|
||||
"repo": "flake-parts",
|
||||
"rev": "8471fe90ad337a8074e957b69ca4d0089218391d",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "hercules-ci",
|
||||
"repo": "flake-parts",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"gitignore": {
|
||||
"inputs": {
|
||||
"nixpkgs": [
|
||||
"pre-commit-hooks-nix",
|
||||
"nixpkgs"
|
||||
]
|
||||
},
|
||||
"locked": {
|
||||
"lastModified": 1709087332,
|
||||
"narHash": "sha256-HG2cCnktfHsKV0s4XW83gU3F57gaTljL9KNSuG6bnQs=",
|
||||
"owner": "hercules-ci",
|
||||
"repo": "gitignore.nix",
|
||||
"rev": "637db329424fd7e46cf4185293b9cc8c88c95394",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "hercules-ci",
|
||||
"repo": "gitignore.nix",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"nixpkgs": {
|
||||
"locked": {
|
||||
"lastModified": 1723637854,
|
||||
"narHash": "sha256-med8+5DSWa2UnOqtdICndjDAEjxr5D7zaIiK4pn0Q7c=",
|
||||
"owner": "NixOS",
|
||||
"repo": "nixpkgs",
|
||||
"rev": "c3aa7b8938b17aebd2deecf7be0636000d62a2b9",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "NixOS",
|
||||
"ref": "nixos-unstable",
|
||||
"repo": "nixpkgs",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"nixpkgs-lib": {
|
||||
"locked": {
|
||||
"lastModified": 1722555339,
|
||||
"narHash": "sha256-uFf2QeW7eAHlYXuDktm9c25OxOyCoUOQmh5SZ9amE5Q=",
|
||||
"type": "tarball",
|
||||
"url": "https://github.com/NixOS/nixpkgs/archive/a5d394176e64ab29c852d03346c1fc9b0b7d33eb.tar.gz"
|
||||
},
|
||||
"original": {
|
||||
"type": "tarball",
|
||||
"url": "https://github.com/NixOS/nixpkgs/archive/a5d394176e64ab29c852d03346c1fc9b0b7d33eb.tar.gz"
|
||||
}
|
||||
},
|
||||
"nixpkgs-stable": {
|
||||
"locked": {
|
||||
"lastModified": 1720386169,
|
||||
"narHash": "sha256-NGKVY4PjzwAa4upkGtAMz1npHGoRzWotlSnVlqI40mo=",
|
||||
"owner": "NixOS",
|
||||
"repo": "nixpkgs",
|
||||
"rev": "194846768975b7ad2c4988bdb82572c00222c0d7",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "NixOS",
|
||||
"ref": "nixos-24.05",
|
||||
"repo": "nixpkgs",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"nixpkgs_2": {
|
||||
"locked": {
|
||||
"lastModified": 1719082008,
|
||||
"narHash": "sha256-jHJSUH619zBQ6WdC21fFAlDxHErKVDJ5fpN0Hgx4sjs=",
|
||||
"owner": "NixOS",
|
||||
"repo": "nixpkgs",
|
||||
"rev": "9693852a2070b398ee123a329e68f0dab5526681",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "NixOS",
|
||||
"ref": "nixpkgs-unstable",
|
||||
"repo": "nixpkgs",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"pre-commit-hooks-nix": {
|
||||
"inputs": {
|
||||
"flake-compat": "flake-compat",
|
||||
"gitignore": "gitignore",
|
||||
"nixpkgs": "nixpkgs_2",
|
||||
"nixpkgs-stable": "nixpkgs-stable"
|
||||
},
|
||||
"locked": {
|
||||
"lastModified": 1723803910,
|
||||
"narHash": "sha256-yezvUuFiEnCFbGuwj/bQcqg7RykIEqudOy/RBrId0pc=",
|
||||
"owner": "cachix",
|
||||
"repo": "pre-commit-hooks.nix",
|
||||
"rev": "bfef0ada09e2c8ac55bbcd0831bd0c9d42e651ba",
|
||||
"type": "github"
|
||||
},
|
||||
"original": {
|
||||
"owner": "cachix",
|
||||
"repo": "pre-commit-hooks.nix",
|
||||
"type": "github"
|
||||
}
|
||||
},
|
||||
"root": {
|
||||
"inputs": {
|
||||
"flake-parts": "flake-parts",
|
||||
"nixpkgs": "nixpkgs",
|
||||
"pre-commit-hooks-nix": "pre-commit-hooks-nix"
|
||||
}
|
||||
}
|
||||
},
|
||||
"root": "root",
|
||||
"version": 7
|
||||
}
|
||||
51
flake.nix
Normal file
51
flake.nix
Normal file
@@ -0,0 +1,51 @@
|
||||
{
|
||||
description = "A terrible web ui for yt-dlp. Designed to be self-hosted.";
|
||||
|
||||
inputs = {
|
||||
flake-parts.url = "github:hercules-ci/flake-parts";
|
||||
nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
|
||||
pre-commit-hooks-nix.url = "github:cachix/pre-commit-hooks.nix";
|
||||
};
|
||||
|
||||
outputs = inputs@{ self, flake-parts, ... }:
|
||||
flake-parts.lib.mkFlake { inherit inputs; } {
|
||||
imports = [
|
||||
inputs.pre-commit-hooks-nix.flakeModule
|
||||
];
|
||||
systems = [
|
||||
"x86_64-linux"
|
||||
];
|
||||
perSystem = { config, self', pkgs, ... }: {
|
||||
|
||||
packages = {
|
||||
yt-dlp-web-ui-frontend = pkgs.callPackage ./nix/frontend.nix { };
|
||||
default = pkgs.callPackage ./nix/server.nix {
|
||||
inherit (self'.packages) yt-dlp-web-ui-frontend;
|
||||
};
|
||||
};
|
||||
|
||||
checks = import ./nix/tests { inherit self pkgs; };
|
||||
|
||||
pre-commit = {
|
||||
check.enable = true;
|
||||
settings = {
|
||||
hooks = {
|
||||
${self'.formatter.pname}.enable = true;
|
||||
deadnix.enable = true;
|
||||
nil.enable = true;
|
||||
statix.enable = true;
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
devShells.default = pkgs.callPackage ./nix/devShell.nix {
|
||||
inputsFrom = [ config.pre-commit.devShell ];
|
||||
};
|
||||
|
||||
formatter = pkgs.nixpkgs-fmt;
|
||||
};
|
||||
flake = {
|
||||
nixosModules.default = import ./nix/module.nix self.packages;
|
||||
};
|
||||
};
|
||||
}
|
||||
@@ -63,6 +63,7 @@ languages:
|
||||
livestreamDownloadInfo: |
|
||||
This will monitor yet to start livestream. Each process will be executed with --wait-for-video 10.
|
||||
If an already started livestream is provided it will be still downloaded but its progress will not be tracked.
|
||||
Once started the livestream will be migrated to the downloads page.
|
||||
livestreamExperimentalWarning: This feature is still experimental. Something might break!
|
||||
german:
|
||||
urlInput: Video URL
|
||||
@@ -123,6 +124,7 @@ languages:
|
||||
livestreamDownloadInfo: |
|
||||
This will monitor yet to start livestream. Each process will be executed with --wait-for-video 10.
|
||||
If an already started livestream is provided it will be still downloaded but its progress will not be tracked.
|
||||
Once started the livestream will be migrated to the downloads page.
|
||||
livestreamExperimentalWarning: This feature is still experimental. Something might break!
|
||||
french:
|
||||
urlInput: URL vidéo de YouTube ou d'un autre service pris en charge
|
||||
@@ -185,6 +187,7 @@ languages:
|
||||
livestreamDownloadInfo: |
|
||||
This will monitor yet to start livestream. Each process will be executed with --wait-for-video 10.
|
||||
If an already started livestream is provided it will be still downloaded but its progress will not be tracked.
|
||||
Once started the livestream will be migrated to the downloads page.
|
||||
livestreamExperimentalWarning: This feature is still experimental. Something might break!
|
||||
italian:
|
||||
urlInput: URL Video (uno per linea)
|
||||
@@ -244,6 +247,7 @@ languages:
|
||||
livestreamDownloadInfo: |
|
||||
This will monitor yet to start livestream. Each process will be executed with --wait-for-video 10.
|
||||
If an already started livestream is provided it will be still downloaded but its progress will not be tracked.
|
||||
Once started the livestream will be migrated to the downloads page.
|
||||
livestreamExperimentalWarning: This feature is still experimental. Something might break!
|
||||
chinese:
|
||||
urlInput: 视频 URL
|
||||
@@ -294,17 +298,18 @@ languages:
|
||||
templatesEditorContentLabel: 模板内容
|
||||
logsTitle: '日志'
|
||||
awaitingLogs: '正在等待日志…'
|
||||
bulkDownload: 'Download files in a zip archive'
|
||||
livestreamURLInput: Livestream URL
|
||||
livestreamStatusWaiting: Waiting/Wait start
|
||||
livestreamStatusDownloading: Downloading
|
||||
livestreamStatusCompleted: Completed
|
||||
livestreamStatusErrored: Errored
|
||||
livestreamStatusUnknown: Unknown
|
||||
bulkDownload: '下载 zip 压缩包中的文件'
|
||||
livestreamURLInput: 直播 URL
|
||||
livestreamStatusWaiting: 等待直播开始
|
||||
livestreamStatusDownloading: 下载中
|
||||
livestreamStatusCompleted: 已完成
|
||||
livestreamStatusErrored: 发生错误
|
||||
livestreamStatusUnknown: 未知
|
||||
livestreamDownloadInfo: |
|
||||
This will monitor yet to start livestream. Each process will be executed with --wait-for-video 10.
|
||||
If an already started livestream is provided it will be still downloaded but its progress will not be tracked.
|
||||
livestreamExperimentalWarning: This feature is still experimental. Something might break!
|
||||
本功能将会监控即将开始的直播流,每个进程都会传入参数:--wait-for-video 10 (重试间隔10秒)
|
||||
如果直播已经开始,那么依然可以下载,但是不会记录下载进度。
|
||||
直播开始后,将会转移到下载页面
|
||||
livestreamExperimentalWarning: 实验性功能,可能存在未知Bug,请谨慎使用
|
||||
spanish:
|
||||
urlInput: URL de YouTube u otro servicio compatible
|
||||
statusTitle: Estado
|
||||
@@ -362,6 +367,7 @@ languages:
|
||||
livestreamDownloadInfo: |
|
||||
This will monitor yet to start livestream. Each process will be executed with --wait-for-video 10.
|
||||
If an already started livestream is provided it will be still downloaded but its progress will not be tracked.
|
||||
Once started the livestream will be migrated to the downloads page.
|
||||
livestreamExperimentalWarning: This feature is still experimental. Something might break!
|
||||
russian:
|
||||
urlInput: URL-адрес YouTube или любого другого поддерживаемого сервиса
|
||||
@@ -420,6 +426,7 @@ languages:
|
||||
livestreamDownloadInfo: |
|
||||
This will monitor yet to start livestream. Each process will be executed with --wait-for-video 10.
|
||||
If an already started livestream is provided it will be still downloaded but its progress will not be tracked.
|
||||
Once started the livestream will be migrated to the downloads page.
|
||||
livestreamExperimentalWarning: This feature is still experimental. Something might break!
|
||||
korean:
|
||||
urlInput: YouTube나 다른 지원되는 사이트의 URL
|
||||
@@ -478,6 +485,7 @@ languages:
|
||||
livestreamDownloadInfo: |
|
||||
This will monitor yet to start livestream. Each process will be executed with --wait-for-video 10.
|
||||
If an already started livestream is provided it will be still downloaded but its progress will not be tracked.
|
||||
Once started the livestream will be migrated to the downloads page.
|
||||
livestreamExperimentalWarning: This feature is still experimental. Something might break!
|
||||
japanese:
|
||||
urlInput: YouTubeまたはサポート済み動画のURL
|
||||
@@ -537,6 +545,7 @@ languages:
|
||||
livestreamDownloadInfo: |
|
||||
This will monitor yet to start livestream. Each process will be executed with --wait-for-video 10.
|
||||
If an already started livestream is provided it will be still downloaded but its progress will not be tracked.
|
||||
Once started the livestream will be migrated to the downloads page.
|
||||
livestreamExperimentalWarning: This feature is still experimental. Something might break!
|
||||
catalan:
|
||||
urlInput: URL de YouTube o d'un altre servei compatible
|
||||
@@ -595,6 +604,7 @@ languages:
|
||||
livestreamDownloadInfo: |
|
||||
This will monitor yet to start livestream. Each process will be executed with --wait-for-video 10.
|
||||
If an already started livestream is provided it will be still downloaded but its progress will not be tracked.
|
||||
Once started the livestream will be migrated to the downloads page.
|
||||
livestreamExperimentalWarning: This feature is still experimental. Something might break!
|
||||
ukrainian:
|
||||
urlInput: URL-адреса YouTube або будь-якого іншого підтримуваного сервісу
|
||||
@@ -653,6 +663,7 @@ languages:
|
||||
livestreamDownloadInfo: |
|
||||
This will monitor yet to start livestream. Each process will be executed with --wait-for-video 10.
|
||||
If an already started livestream is provided it will be still downloaded but its progress will not be tracked.
|
||||
Once started the livestream will be migrated to the downloads page.
|
||||
livestreamExperimentalWarning: This feature is still experimental. Something might break!
|
||||
polish:
|
||||
urlInput: Adres URL YouTube lub innej obsługiwanej usługi
|
||||
@@ -711,6 +722,7 @@ languages:
|
||||
livestreamDownloadInfo: |
|
||||
This will monitor yet to start livestream. Each process will be executed with --wait-for-video 10.
|
||||
If an already started livestream is provided it will be still downloaded but its progress will not be tracked.
|
||||
Once started the livestream will be migrated to the downloads page.
|
||||
livestreamExperimentalWarning: This feature is still experimental. Something might break!
|
||||
swedish:
|
||||
urlInput: Videolänk (en per rad)
|
||||
@@ -775,4 +787,5 @@ languages:
|
||||
livestreamDownloadInfo: |
|
||||
This will monitor yet to start livestream. Each process will be executed with --wait-for-video 10.
|
||||
If an already started livestream is provided it will be still downloaded but its progress will not be tracked.
|
||||
Once started the livestream will be migrated to the downloads page.
|
||||
livestreamExperimentalWarning: This feature is still experimental. Something might break!
|
||||
|
||||
@@ -1,16 +1,15 @@
|
||||
import { atom, selector } from 'recoil'
|
||||
import { CustomTemplate } from '../types'
|
||||
import { ffetch } from '../lib/httpClient'
|
||||
import { serverURL } from './settings'
|
||||
import { pipe } from 'fp-ts/lib/function'
|
||||
import { getOrElse } from 'fp-ts/lib/Either'
|
||||
import { pipe } from 'fp-ts/lib/function'
|
||||
import { atom, selector } from 'recoil'
|
||||
import { ffetch } from '../lib/httpClient'
|
||||
import { CustomTemplate } from '../types'
|
||||
import { serverSideCookiesState, serverURL } from './settings'
|
||||
|
||||
export const cookiesTemplateState = atom({
|
||||
export const cookiesTemplateState = selector({
|
||||
key: 'cookiesTemplateState',
|
||||
default: localStorage.getItem('cookiesTemplate') ?? '',
|
||||
effects: [
|
||||
({ onSet }) => onSet(e => localStorage.setItem('cookiesTemplate', e))
|
||||
]
|
||||
get: ({ get }) => get(serverSideCookiesState)
|
||||
? '--cookies=cookies.txt'
|
||||
: ''
|
||||
})
|
||||
|
||||
export const customArgsState = atom({
|
||||
|
||||
@@ -1,4 +1,7 @@
|
||||
import { pipe } from 'fp-ts/lib/function'
|
||||
import { matchW } from 'fp-ts/lib/TaskEither'
|
||||
import { atom, selector } from 'recoil'
|
||||
import { ffetch } from '../lib/httpClient'
|
||||
import { prefersDarkMode } from '../utils'
|
||||
|
||||
export const languages = [
|
||||
@@ -187,13 +190,15 @@ export const rpcHTTPEndpoint = selector({
|
||||
}
|
||||
})
|
||||
|
||||
export const cookiesState = atom({
|
||||
key: 'cookiesState',
|
||||
default: localStorage.getItem('yt-dlp-cookies') ?? '',
|
||||
effects: [
|
||||
({ onSet }) =>
|
||||
onSet(c => localStorage.setItem('yt-dlp-cookies', c))
|
||||
]
|
||||
export const serverSideCookiesState = selector<string>({
|
||||
key: 'serverSideCookiesState',
|
||||
get: async ({ get }) => await pipe(
|
||||
ffetch<Readonly<{ cookies: string }>>(`${get(serverURL)}/api/v1/cookies`),
|
||||
matchW(
|
||||
() => '',
|
||||
(r) => r.cookies
|
||||
)
|
||||
)()
|
||||
})
|
||||
|
||||
const themeSelector = selector<ThemeNarrowed>({
|
||||
|
||||
@@ -1,5 +1,10 @@
|
||||
import { pipe } from 'fp-ts/lib/function'
|
||||
import { of } from 'fp-ts/lib/Task'
|
||||
import { getOrElse } from 'fp-ts/lib/TaskEither'
|
||||
import { atom, selector } from 'recoil'
|
||||
import { ffetch } from '../lib/httpClient'
|
||||
import { rpcClientState } from './rpc'
|
||||
import { serverURL } from './settings'
|
||||
|
||||
export const connectedState = atom({
|
||||
key: 'connectedState',
|
||||
@@ -22,4 +27,15 @@ export const availableDownloadPathsState = selector({
|
||||
.catch(() => ({ result: [] }))
|
||||
return res.result
|
||||
}
|
||||
})
|
||||
|
||||
export const ytdlpVersionState = selector<string>({
|
||||
key: 'ytdlpVersionState',
|
||||
get: async ({ get }) => await pipe(
|
||||
ffetch<string>(`${get(serverURL)}/api/v1/version`),
|
||||
getOrElse(() => pipe(
|
||||
'unknown version',
|
||||
of
|
||||
)),
|
||||
)()
|
||||
})
|
||||
@@ -1,22 +1,20 @@
|
||||
import { TextField } from '@mui/material'
|
||||
import { Button, TextField } from '@mui/material'
|
||||
import * as A from 'fp-ts/Array'
|
||||
import * as E from 'fp-ts/Either'
|
||||
import * as O from 'fp-ts/Option'
|
||||
import { matchW } from 'fp-ts/lib/TaskEither'
|
||||
import { pipe } from 'fp-ts/lib/function'
|
||||
import { useMemo } from 'react'
|
||||
import { useRecoilState, useRecoilValue } from 'recoil'
|
||||
import { useRecoilValue } from 'recoil'
|
||||
import { Subject, debounceTime, distinctUntilChanged } from 'rxjs'
|
||||
import { cookiesTemplateState } from '../atoms/downloadTemplate'
|
||||
import { cookiesState, serverURL } from '../atoms/settings'
|
||||
import { serverSideCookiesState, serverURL } from '../atoms/settings'
|
||||
import { useSubscription } from '../hooks/observable'
|
||||
import { useToast } from '../hooks/toast'
|
||||
import { ffetch } from '../lib/httpClient'
|
||||
|
||||
const validateCookie = (cookie: string) => pipe(
|
||||
cookie,
|
||||
cookie => cookie.replace(/\s\s+/g, ' '),
|
||||
cookie => cookie.replaceAll('\t', ' '),
|
||||
cookie => cookie.split(' '),
|
||||
cookie => cookie.split('\t'),
|
||||
E.of,
|
||||
E.flatMap(
|
||||
E.fromPredicate(
|
||||
@@ -68,13 +66,19 @@ const validateCookie = (cookie: string) => pipe(
|
||||
),
|
||||
)
|
||||
|
||||
const noopValidator = (s: string): E.Either<string, string[]> => pipe(
|
||||
s,
|
||||
s => s.split('\t'),
|
||||
E.of
|
||||
)
|
||||
|
||||
const isCommentOrNewLine = (s: string) => s === '' || s.startsWith('\n') || s.startsWith('#')
|
||||
|
||||
const CookiesTextField: React.FC = () => {
|
||||
const serverAddr = useRecoilValue(serverURL)
|
||||
const [, setCookies] = useRecoilState(cookiesTemplateState)
|
||||
const [savedCookies, setSavedCookies] = useRecoilState(cookiesState)
|
||||
const savedCookies = useRecoilValue(serverSideCookiesState)
|
||||
|
||||
const { pushMessage } = useToast()
|
||||
const flag = '--cookies=cookies.txt'
|
||||
|
||||
const cookies$ = useMemo(() => new Subject<string>(), [])
|
||||
|
||||
@@ -86,28 +90,41 @@ const CookiesTextField: React.FC = () => {
|
||||
})
|
||||
})()
|
||||
|
||||
const deleteCookies = () => pipe(
|
||||
ffetch(`${serverAddr}/api/v1/cookies`, {
|
||||
method: 'DELETE',
|
||||
}),
|
||||
matchW(
|
||||
(l) => pushMessage(l, 'error'),
|
||||
(_) => {
|
||||
pushMessage('Deleted cookies', 'success')
|
||||
pushMessage(`Reload the page to apply the changes`, 'info')
|
||||
}
|
||||
)
|
||||
)()
|
||||
|
||||
const validateNetscapeCookies = (cookies: string) => pipe(
|
||||
cookies,
|
||||
cookies => cookies.split('\n'),
|
||||
cookies => cookies.filter(f => !f.startsWith('\n')), // empty lines
|
||||
cookies => cookies.filter(f => !f.startsWith('# ')), // comments
|
||||
cookies => cookies.filter(Boolean), // empty lines
|
||||
A.map(validateCookie),
|
||||
A.mapWithIndex((i, either) => pipe(
|
||||
A.map(c => isCommentOrNewLine(c) ? noopValidator(c) : validateCookie(c)), // validate line
|
||||
A.mapWithIndex((i, either) => pipe( // detect errors and return the either
|
||||
either,
|
||||
E.matchW(
|
||||
(l) => pushMessage(`Error in line ${i + 1}: ${l}`, 'warning'),
|
||||
() => E.isRight(either)
|
||||
E.match(
|
||||
(l) => {
|
||||
pushMessage(`Error in line ${i + 1}: ${l}`, 'warning')
|
||||
return either
|
||||
},
|
||||
(_) => either
|
||||
),
|
||||
)),
|
||||
A.filter(Boolean),
|
||||
A.match(
|
||||
() => false,
|
||||
(c) => {
|
||||
pushMessage(`Valid ${c.length} Netscape cookies`, 'info')
|
||||
return true
|
||||
}
|
||||
)
|
||||
A.filter(c => E.isRight(c)), // filter out the lines that didn't pass validation
|
||||
A.map(E.getOrElse(() => new Array<string>())), // cast the array of eithers to an array of tokens
|
||||
A.filter(f => f.length > 0), // filter the empty tokens
|
||||
A.map(f => f.join('\t')), // join the tokens in a TAB separated string
|
||||
A.reduce('', (c, n) => `${c}${n}\n`), // reduce all to a single string separated by \n
|
||||
parsed => parsed.length > 0 // if nothing has passed the validation return none
|
||||
? O.some(parsed)
|
||||
: O.none
|
||||
)
|
||||
|
||||
useSubscription(
|
||||
@@ -117,22 +134,17 @@ const CookiesTextField: React.FC = () => {
|
||||
),
|
||||
(cookies) => pipe(
|
||||
cookies,
|
||||
cookies => {
|
||||
setSavedCookies(cookies)
|
||||
return cookies
|
||||
},
|
||||
validateNetscapeCookies,
|
||||
O.fromPredicate(f => f === true),
|
||||
O.match(
|
||||
() => setCookies(''),
|
||||
async () => {
|
||||
() => pushMessage('No valid cookies', 'warning'),
|
||||
async (some) => {
|
||||
pipe(
|
||||
await submitCookies(cookies),
|
||||
await submitCookies(some.trimEnd()),
|
||||
E.match(
|
||||
(l) => pushMessage(`${l}`, 'error'),
|
||||
() => {
|
||||
pushMessage(`Saved Netscape cookies`, 'success')
|
||||
setCookies(flag)
|
||||
pushMessage(`Saved ${some.split('\n').length} Netscape cookies`, 'success')
|
||||
pushMessage('Reload the page to apply the changes', 'info')
|
||||
}
|
||||
)
|
||||
)
|
||||
@@ -142,15 +154,18 @@ const CookiesTextField: React.FC = () => {
|
||||
)
|
||||
|
||||
return (
|
||||
<TextField
|
||||
label="Netscape Cookies"
|
||||
multiline
|
||||
maxRows={20}
|
||||
minRows={4}
|
||||
fullWidth
|
||||
defaultValue={savedCookies}
|
||||
onChange={(e) => cookies$.next(e.currentTarget.value)}
|
||||
/>
|
||||
<>
|
||||
<TextField
|
||||
label="Netscape Cookies"
|
||||
multiline
|
||||
maxRows={20}
|
||||
minRows={4}
|
||||
fullWidth
|
||||
defaultValue={savedCookies}
|
||||
onChange={(e) => cookies$.next(e.currentTarget.value)}
|
||||
/>
|
||||
<Button onClick={deleteCookies}>Delete cookies</Button>
|
||||
</>
|
||||
)
|
||||
}
|
||||
|
||||
|
||||
@@ -14,7 +14,7 @@ const DownloadsGridView: React.FC = () => {
|
||||
const { client } = useRPC()
|
||||
const { pushMessage } = useToast()
|
||||
|
||||
const stop = (r: RPCResult) => r.progress.process_status === ProcessStatus.Completed
|
||||
const stop = (r: RPCResult) => r.progress.process_status === ProcessStatus.COMPLETED
|
||||
? client.clear(r.id)
|
||||
: client.kill(r.id)
|
||||
|
||||
|
||||
@@ -133,7 +133,7 @@ const DownloadsTableView: React.FC = () => {
|
||||
window.open(`${serverAddr}/archive/d/${encoded}?token=${localStorage.getItem('token')}`)
|
||||
}
|
||||
|
||||
const stop = (r: RPCResult) => r.progress.process_status === ProcessStatus.Completed
|
||||
const stop = (r: RPCResult) => r.progress.process_status === ProcessStatus.COMPLETED
|
||||
? client.clear(r.id)
|
||||
: client.kill(r.id)
|
||||
|
||||
|
||||
@@ -37,7 +37,9 @@ const Footer: React.FC = () => {
|
||||
<div style={{ display: 'flex', gap: 4, alignItems: 'center' }}>
|
||||
{/* TODO: make it dynamic */}
|
||||
<Chip label="RPC v3.2.0" variant="outlined" size="small" />
|
||||
<VersionIndicator />
|
||||
<Suspense>
|
||||
<VersionIndicator />
|
||||
</Suspense>
|
||||
</div>
|
||||
<div style={{ display: 'flex', gap: 4, 'alignItems': 'center' }}>
|
||||
<div style={{
|
||||
|
||||
@@ -101,6 +101,7 @@ export default function FormatsGrid({
|
||||
>
|
||||
{format.format_note} - {format.vcodec === 'none' ? format.acodec : format.vcodec}
|
||||
{(format.filesize_approx > 0) ? " (~" + Math.round(format.filesize_approx / 1024 / 1024) + " MiB)" : ""}
|
||||
{format.language}
|
||||
</Button>
|
||||
</Grid>
|
||||
))
|
||||
|
||||
@@ -1,32 +1,9 @@
|
||||
import { Chip, CircularProgress } from '@mui/material'
|
||||
import { useEffect, useState } from 'react'
|
||||
import { useRecoilValue } from 'recoil'
|
||||
import { serverURL } from '../atoms/settings'
|
||||
import { useToast } from '../hooks/toast'
|
||||
import { ytdlpVersionState } from '../atoms/status'
|
||||
|
||||
const VersionIndicator: React.FC = () => {
|
||||
const serverAddr = useRecoilValue(serverURL)
|
||||
|
||||
const [version, setVersion] = useState('')
|
||||
const { pushMessage } = useToast()
|
||||
|
||||
const fetchVersion = async () => {
|
||||
const res = await fetch(`${serverAddr}/api/v1/version`, {
|
||||
headers: {
|
||||
'X-Authentication': localStorage.getItem('token') ?? ''
|
||||
}
|
||||
})
|
||||
|
||||
if (!res.ok) {
|
||||
return pushMessage(await res.text(), 'error')
|
||||
}
|
||||
|
||||
setVersion(await res.json())
|
||||
}
|
||||
|
||||
useEffect(() => {
|
||||
fetchVersion()
|
||||
}, [])
|
||||
const version = useRecoilValue(ytdlpVersionState)
|
||||
|
||||
return (
|
||||
version
|
||||
|
||||
@@ -82,7 +82,9 @@ export class RPCClient {
|
||||
: ''
|
||||
|
||||
const sanitizedArgs = this.argsSanitizer(
|
||||
req.args.replace('-o', '').replace(rename, '')
|
||||
req.args
|
||||
.replace('-o', '')
|
||||
.replace(rename, '')
|
||||
)
|
||||
|
||||
if (req.playlist) {
|
||||
@@ -177,14 +179,14 @@ export class RPCClient {
|
||||
}
|
||||
|
||||
public killLivestream(url: string) {
|
||||
return this.sendHTTP<LiveStreamProgress>({
|
||||
return this.sendHTTP({
|
||||
method: 'Service.KillLivestream',
|
||||
params: [url]
|
||||
})
|
||||
}
|
||||
|
||||
public killAllLivestream() {
|
||||
return this.sendHTTP<LiveStreamProgress>({
|
||||
return this.sendHTTP({
|
||||
method: 'Service.KillAllLivestream',
|
||||
params: []
|
||||
})
|
||||
|
||||
@@ -39,10 +39,11 @@ type DownloadInfo = {
|
||||
}
|
||||
|
||||
export enum ProcessStatus {
|
||||
Pending = 0,
|
||||
Downloading,
|
||||
Completed,
|
||||
Errored,
|
||||
PENDING = 0,
|
||||
DOWNLOADING,
|
||||
COMPLETED,
|
||||
ERRORED,
|
||||
LIVESTREAM,
|
||||
}
|
||||
|
||||
type DownloadProgress = {
|
||||
@@ -81,6 +82,7 @@ export type DLFormat = {
|
||||
vcodec: string
|
||||
acodec: string
|
||||
filesize_approx: number
|
||||
language: string
|
||||
}
|
||||
|
||||
export type DirectoryEntry = {
|
||||
@@ -110,7 +112,7 @@ export enum LiveStreamStatus {
|
||||
}
|
||||
|
||||
export type LiveStreamProgress = Record<string, {
|
||||
Status: LiveStreamStatus
|
||||
WaitTime: string
|
||||
LiveDate: string
|
||||
status: LiveStreamStatus
|
||||
waitTime: string
|
||||
liveDate: string
|
||||
}>
|
||||
@@ -56,14 +56,16 @@ export function isRPCResponse(object: any): object is RPCResponse<any> {
|
||||
|
||||
export function mapProcessStatus(status: ProcessStatus) {
|
||||
switch (status) {
|
||||
case ProcessStatus.Pending:
|
||||
case ProcessStatus.PENDING:
|
||||
return 'Pending'
|
||||
case ProcessStatus.Downloading:
|
||||
case ProcessStatus.DOWNLOADING:
|
||||
return 'Downloading'
|
||||
case ProcessStatus.Completed:
|
||||
case ProcessStatus.COMPLETED:
|
||||
return 'Completed'
|
||||
case ProcessStatus.Errored:
|
||||
case ProcessStatus.ERRORED:
|
||||
return 'Error'
|
||||
case ProcessStatus.LIVESTREAM:
|
||||
return 'Livestream'
|
||||
default:
|
||||
return 'Pending'
|
||||
}
|
||||
|
||||
@@ -101,17 +101,17 @@ const LiveStreamMonitorView: React.FC = () => {
|
||||
>
|
||||
<TableCell>{k}</TableCell>
|
||||
<TableCell align='right'>
|
||||
{mapStatusToChip(progress[k].Status)}
|
||||
{mapStatusToChip(progress[k].status)}
|
||||
</TableCell>
|
||||
<TableCell align='right'>
|
||||
{progress[k].Status === LiveStreamStatus.WAITING
|
||||
? formatMicro(Number(progress[k].WaitTime))
|
||||
{progress[k].status === LiveStreamStatus.WAITING
|
||||
? formatMicro(Number(progress[k].waitTime))
|
||||
: "-"
|
||||
}
|
||||
</TableCell>
|
||||
<TableCell align='right'>
|
||||
{progress[k].Status === LiveStreamStatus.WAITING
|
||||
? new Date(progress[k].LiveDate).toLocaleString()
|
||||
{progress[k].status === LiveStreamStatus.WAITING
|
||||
? new Date(progress[k].liveDate).toLocaleString()
|
||||
: "-"
|
||||
}
|
||||
</TableCell>
|
||||
|
||||
@@ -18,7 +18,7 @@ import {
|
||||
Typography,
|
||||
capitalize
|
||||
} from '@mui/material'
|
||||
import { useEffect, useMemo, useState } from 'react'
|
||||
import { Suspense, useEffect, useMemo, useState } from 'react'
|
||||
import { useRecoilState } from 'recoil'
|
||||
import {
|
||||
Subject,
|
||||
@@ -347,7 +347,9 @@ export default function Settings() {
|
||||
<Typography variant="h6" color="primary" sx={{ mb: 2 }}>
|
||||
Cookies
|
||||
</Typography>
|
||||
<CookiesTextField />
|
||||
<Suspense>
|
||||
<CookiesTextField />
|
||||
</Suspense>
|
||||
</Grid>
|
||||
<Grid>
|
||||
<Stack direction="row">
|
||||
|
||||
16
go.mod
16
go.mod
@@ -1,6 +1,6 @@
|
||||
module github.com/marcopeocchi/yt-dlp-web-ui
|
||||
|
||||
go 1.22
|
||||
go 1.23
|
||||
|
||||
require (
|
||||
github.com/asaskevich/EventBus v0.0.0-20200907212545-49d423059eef
|
||||
@@ -11,11 +11,11 @@ require (
|
||||
github.com/google/uuid v1.6.0
|
||||
github.com/gorilla/websocket v1.5.3
|
||||
github.com/reactivex/rxgo/v2 v2.5.0
|
||||
golang.org/x/oauth2 v0.21.0
|
||||
golang.org/x/sync v0.7.0
|
||||
golang.org/x/sys v0.22.0
|
||||
golang.org/x/oauth2 v0.23.0
|
||||
golang.org/x/sync v0.8.0
|
||||
golang.org/x/sys v0.25.0
|
||||
gopkg.in/yaml.v3 v3.0.1
|
||||
modernc.org/sqlite v1.31.1
|
||||
modernc.org/sqlite v1.32.0
|
||||
)
|
||||
|
||||
require (
|
||||
@@ -32,9 +32,9 @@ require (
|
||||
github.com/stretchr/objx v0.5.2 // indirect
|
||||
github.com/stretchr/testify v1.9.0 // indirect
|
||||
github.com/teivah/onecontext v1.3.0 // indirect
|
||||
golang.org/x/crypto v0.25.0 // indirect
|
||||
modernc.org/gc/v3 v3.0.0-20240722195230-4a140ff9c08e // indirect
|
||||
modernc.org/libc v1.55.7 // indirect
|
||||
golang.org/x/crypto v0.26.0 // indirect
|
||||
modernc.org/gc/v3 v3.0.0-20240801135723-a856999a2e4a // indirect
|
||||
modernc.org/libc v1.60.1 // indirect
|
||||
modernc.org/mathutil v1.6.0 // indirect
|
||||
modernc.org/memory v1.8.0 // indirect
|
||||
modernc.org/strutil v1.2.0 // indirect
|
||||
|
||||
55
go.sum
55
go.sum
@@ -17,8 +17,6 @@ github.com/go-chi/chi/v5 v5.1.0 h1:acVI1TYaD+hhedDJ3r54HyA6sExp3HfXq7QWEEY/xMw=
|
||||
github.com/go-chi/chi/v5 v5.1.0/go.mod h1:DslCQbL2OYiznFReuXYUmQ2hGd1aDpCnlMNITLSKoi8=
|
||||
github.com/go-chi/cors v1.2.1 h1:xEC8UT3Rlp2QuWNEr4Fs/c2EAGVKBwy/1vHx3bppil4=
|
||||
github.com/go-chi/cors v1.2.1/go.mod h1:sSbTewc+6wYHBBCW7ytsFSn836hqM7JxpglAy2Vzc58=
|
||||
github.com/go-jose/go-jose/v4 v4.0.3 h1:o8aphO8Hv6RPmH+GfzVuyf7YXSBibp+8YyHdOoDESGo=
|
||||
github.com/go-jose/go-jose/v4 v4.0.3/go.mod h1:NKb5HO1EZccyMpiZNbdUw/14tiXNyUJh188dfnMCAfc=
|
||||
github.com/go-jose/go-jose/v4 v4.0.4 h1:VsjPI33J0SB9vQM6PLmNjoHqMQNGPiZ0rHL7Ni7Q6/E=
|
||||
github.com/go-jose/go-jose/v4 v4.0.4/go.mod h1:NKb5HO1EZccyMpiZNbdUw/14tiXNyUJh188dfnMCAfc=
|
||||
github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
|
||||
@@ -61,31 +59,33 @@ github.com/teivah/onecontext v1.3.0/go.mod h1:hoW1nmdPVK/0jrvGtcx8sCKYs2PiS4z0zz
|
||||
go.uber.org/goleak v1.1.10 h1:z+mqJhf6ss6BSfSM671tgKyZBFPTTJM+HLxnhPC3wu0=
|
||||
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.25.0 h1:ypSNr+bnYL2YhwoMt2zPxHFmbAN1KZs/njMG3hxUp30=
|
||||
golang.org/x/crypto v0.25.0/go.mod h1:T+wALwcMOSE0kXgUAnPAHqTLW+XHgcELELW8VaDgm/M=
|
||||
golang.org/x/crypto v0.26.0 h1:RrRspgV4mU+YwB4FYnuBoKsUapNIL5cohGAmSH3azsw=
|
||||
golang.org/x/crypto v0.26.0/go.mod h1:GY7jblb9wI+FOo5y8/S2oY4zWP07AkOJ4+jxCqdqn54=
|
||||
golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs=
|
||||
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/mod v0.16.0 h1:QX4fJ0Rr5cPQCF7O9lh9Se4pmwfwskqZfq5moyldzic=
|
||||
golang.org/x/mod v0.16.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
|
||||
golang.org/x/mod v0.19.0 h1:fEdghXQSo20giMthA7cd28ZC+jts4amQ3YMXiP5oMQ8=
|
||||
golang.org/x/mod v0.19.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
|
||||
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/oauth2 v0.21.0 h1:tsimM75w1tF/uws5rbeHzIWxEqElMehnc+iW793zsZs=
|
||||
golang.org/x/oauth2 v0.21.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
|
||||
golang.org/x/oauth2 v0.22.0 h1:BzDx2FehcG7jJwgWLELCdmLuxk2i+x9UDpSiss2u0ZA=
|
||||
golang.org/x/oauth2 v0.22.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
|
||||
golang.org/x/oauth2 v0.23.0 h1:PbgcYx2W7i4LvjJWEbf0ngHV6qJYr86PkAV3bXdLEbs=
|
||||
golang.org/x/oauth2 v0.23.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
|
||||
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
|
||||
golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
|
||||
golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
|
||||
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||
golang.org/x/sys v0.24.0 h1:Twjiwq9dn6R1fQcyiK+wQyHWfaz/BJB+YIpzU/Cv3Xg=
|
||||
golang.org/x/sys v0.24.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||
golang.org/x/sys v0.25.0 h1:r+8e+loiHxRqhXVl6ML1nO3l1+oFoWbnlu2Ehimmi34=
|
||||
golang.org/x/sys v0.25.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.19.0 h1:tfGCXNR1OsFG+sVdLAitlpjAvD/I6dHDKnYrpEZUHkw=
|
||||
golang.org/x/tools v0.19.0/go.mod h1:qoJWxmGSIBmAeriMx19ogtrEPrGtDbPK634QFIcLAhc=
|
||||
golang.org/x/tools v0.23.0 h1:SGsXPZ+2l4JsgaCKkx+FQ9YZ5XEtA1GZYuoDjenLjvg=
|
||||
golang.org/x/tools v0.23.0/go.mod h1:pnu6ufv6vQkll6szChhK3C3L/ruaIv5eBeztNG8wtsI=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
|
||||
@@ -95,20 +95,19 @@ gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
modernc.org/cc/v4 v4.21.4 h1:3Be/Rdo1fpr8GrQ7IVw9OHtplU4gWbb+wNgeoBMmGLQ=
|
||||
modernc.org/cc/v4 v4.21.4/go.mod h1:HM7VJTZbUCR3rV8EYBi9wxnJ0ZBRiGE5OeGXNA0IsLQ=
|
||||
modernc.org/ccgo/v4 v4.19.2 h1:lwQZgvboKD0jBwdaeVCTouxhxAyN6iawF3STraAal8Y=
|
||||
modernc.org/ccgo/v4 v4.19.2/go.mod h1:ysS3mxiMV38XGRTTcgo0DQTeTmAO4oCmJl1nX9VFI3s=
|
||||
modernc.org/ccgo/v4 v4.20.5 h1:s04akhT2dysD0DFOlv9fkQ6oUTLPYgMnnDk9oaqjszM=
|
||||
modernc.org/ccgo/v4 v4.20.7 h1:skrinQsjxWfvj6nbC3ztZPJy+NuwmB3hV9zX/pthNYQ=
|
||||
modernc.org/ccgo/v4 v4.20.7/go.mod h1:UOkI3JSG2zT4E2ioHlncSOZsXbuDCZLvPi3uMlZT5GY=
|
||||
modernc.org/ccgo/v4 v4.21.0 h1:kKPI3dF7RIag8YcToh5ZwDcVMIv6VGa0ED5cvh0LMW4=
|
||||
modernc.org/fileutil v1.3.0 h1:gQ5SIzK3H9kdfai/5x41oQiKValumqNTDXMvKo62HvE=
|
||||
modernc.org/fileutil v1.3.0/go.mod h1:XatxS8fZi3pS8/hKG2GH/ArUogfxjpEKs3Ku3aK4JyQ=
|
||||
modernc.org/gc/v2 v2.4.1 h1:9cNzOqPyMJBvrUipmynX0ZohMhcxPtMccYgGOJdOiBw=
|
||||
modernc.org/gc/v2 v2.4.1/go.mod h1:wzN5dK1AzVGoH6XOzc3YZ+ey/jPgYHLuVckd62P0GYU=
|
||||
modernc.org/gc/v2 v2.4.3 h1:Ik4ZcMbC7aY4ZDPUhzXVXi7GMub9QcXLTfXn3mWpNw8=
|
||||
modernc.org/gc/v3 v3.0.0-20240722195230-4a140ff9c08e h1:WPC4v0rNIFb2PY+nBBEEKyugPPRHPzUgyN3xZPpGK58=
|
||||
modernc.org/gc/v3 v3.0.0-20240722195230-4a140ff9c08e/go.mod h1:Qz0X07sNOR1jWYCrJMEnbW/X55x206Q7Vt4mz6/wHp4=
|
||||
modernc.org/libc v1.55.3 h1:AzcW1mhlPNrRtjS5sS+eW2ISCgSOLLNyFzRh/V3Qj/U=
|
||||
modernc.org/libc v1.55.3/go.mod h1:qFXepLhz+JjFThQ4kzwzOjA/y/artDeg+pcYnY+Q83w=
|
||||
modernc.org/libc v1.55.7 h1:/5PMGAF3tyZhK72WpoqeLNtgUUpYMrnhT+Gm/5tVDgs=
|
||||
modernc.org/libc v1.55.7/go.mod h1:JXguUpMkbw1gknxspNE9XaG+kk9hDAAnBxpA6KGLiyA=
|
||||
modernc.org/gc/v2 v2.5.0 h1:bJ9ChznK1L1mUtAQtxi0wi5AtAs5jQuw4PrPHO5pb6M=
|
||||
modernc.org/gc/v2 v2.5.0/go.mod h1:wzN5dK1AzVGoH6XOzc3YZ+ey/jPgYHLuVckd62P0GYU=
|
||||
modernc.org/gc/v3 v3.0.0-20240801135723-a856999a2e4a h1:CfbpOLEo2IwNzJdMvE8aiRbPMxoTpgAJeyePh0SmO8M=
|
||||
modernc.org/gc/v3 v3.0.0-20240801135723-a856999a2e4a/go.mod h1:Qz0X07sNOR1jWYCrJMEnbW/X55x206Q7Vt4mz6/wHp4=
|
||||
modernc.org/libc v1.59.9 h1:k+nNDDakwipimgmJ1D9H466LhFeSkaPPycAs1OZiDmY=
|
||||
modernc.org/libc v1.59.9/go.mod h1:EY/egGEU7Ju66eU6SBqCNYaFUDuc4npICkMWnU5EE3A=
|
||||
modernc.org/libc v1.60.1 h1:at373l8IFRTkJIkAU85BIuUoBM4T1b51ds0E1ovPG2s=
|
||||
modernc.org/libc v1.60.1/go.mod h1:xJuobKuNxKH3RUatS7GjR+suWj+5c2K7bi4m/S5arOY=
|
||||
modernc.org/mathutil v1.6.0 h1:fRe9+AmYlaej+64JsEEhoWuAYBkOtQiMEU7n/XgfYi4=
|
||||
modernc.org/mathutil v1.6.0/go.mod h1:Ui5Q9q1TR2gFm0AQRqQUaBWFLAhQpCwNcuhBOSedWPo=
|
||||
modernc.org/memory v1.8.0 h1:IqGTL6eFMaDZZhEWwcREgeMXYwmW83LYW8cROZYkg+E=
|
||||
@@ -117,8 +116,8 @@ modernc.org/opt v0.1.3 h1:3XOZf2yznlhC+ibLltsDGzABUGVx8J6pnFMS3E4dcq4=
|
||||
modernc.org/opt v0.1.3/go.mod h1:WdSiB5evDcignE70guQKxYUl14mgWtbClRi5wmkkTX0=
|
||||
modernc.org/sortutil v1.2.0 h1:jQiD3PfS2REGJNzNCMMaLSp/wdMNieTbKX920Cqdgqc=
|
||||
modernc.org/sortutil v1.2.0/go.mod h1:TKU2s7kJMf1AE84OoiGppNHJwvB753OYfNl2WRb++Ss=
|
||||
modernc.org/sqlite v1.31.1 h1:XVU0VyzxrYHlBhIs1DiEgSl0ZtdnPtbLVy8hSkzxGrs=
|
||||
modernc.org/sqlite v1.31.1/go.mod h1:UqoylwmTb9F+IqXERT8bW9zzOWN8qwAIcLdzeBZs4hA=
|
||||
modernc.org/sqlite v1.32.0 h1:6BM4uGza7bWypsw4fdLRsLxut6bHe4c58VeqjRgST8s=
|
||||
modernc.org/sqlite v1.32.0/go.mod h1:UqoylwmTb9F+IqXERT8bW9zzOWN8qwAIcLdzeBZs4hA=
|
||||
modernc.org/strutil v1.2.0 h1:agBi9dp1I+eOnxXeiZawM8F4LawKv4NzGWSaLfyeNZA=
|
||||
modernc.org/strutil v1.2.0/go.mod h1:/mdcBmfOibveCTBxUl5B5l6W+TTH1FXPLHZE6bTosX0=
|
||||
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
|
||||
|
||||
9
nix/common.nix
Normal file
9
nix/common.nix
Normal file
@@ -0,0 +1,9 @@
|
||||
{ lib }: {
|
||||
version = "v3.1.2";
|
||||
meta = {
|
||||
description = "A terrible web ui for yt-dlp. Designed to be self-hosted.";
|
||||
homepage = "https://github.com/marcopeocchi/yt-dlp-web-ui";
|
||||
license = lib.licenses.mpl20;
|
||||
};
|
||||
}
|
||||
|
||||
9
nix/devShell.nix
Normal file
9
nix/devShell.nix
Normal file
@@ -0,0 +1,9 @@
|
||||
{ inputsFrom ? [ ], mkShell, yt-dlp, nodejs, go }:
|
||||
mkShell {
|
||||
inherit inputsFrom;
|
||||
packages = [
|
||||
yt-dlp
|
||||
nodejs
|
||||
go
|
||||
];
|
||||
}
|
||||
37
nix/frontend.nix
Normal file
37
nix/frontend.nix
Normal file
@@ -0,0 +1,37 @@
|
||||
{ lib
|
||||
, stdenv
|
||||
, nodejs
|
||||
, pnpm
|
||||
}:
|
||||
let common = import ./common.nix { inherit lib; }; in
|
||||
stdenv.mkDerivation (finalAttrs: {
|
||||
pname = "yt-dlp-web-ui-frontend";
|
||||
|
||||
inherit (common) version;
|
||||
|
||||
src = lib.fileset.toSource {
|
||||
root = ../frontend;
|
||||
fileset = ../frontend;
|
||||
};
|
||||
|
||||
buildPhase = ''
|
||||
npm run build
|
||||
'';
|
||||
|
||||
installPhase = ''
|
||||
mkdir -p $out/dist
|
||||
cp -r dist/* $out/dist
|
||||
'';
|
||||
|
||||
nativeBuildInputs = [
|
||||
nodejs
|
||||
pnpm.configHook
|
||||
];
|
||||
|
||||
pnpmDeps = pnpm.fetchDeps {
|
||||
inherit (finalAttrs) pname version src;
|
||||
hash = "sha256-NvXNDXkuoJ4vGeQA3bOhhc+KLBfke593qK0edcvzWTo=";
|
||||
};
|
||||
|
||||
inherit (common) meta;
|
||||
})
|
||||
215
nix/module.nix
Normal file
215
nix/module.nix
Normal file
@@ -0,0 +1,215 @@
|
||||
packages: { config, lib, pkgs, ... }:
|
||||
let
|
||||
cfg = config.services.yt-dlp-web-ui;
|
||||
inherit (pkgs.stdenv.hostPlatform) system;
|
||||
pkg = packages.${system}.default;
|
||||
in
|
||||
{
|
||||
/*
|
||||
Some notes on the module design:
|
||||
- Usually, you don't map out all of the options like this in attrsets,
|
||||
but due to the software's nonstandard "config file overrides CLI" behavior,
|
||||
we don't want to expose a config file catchall, and as such don't use '-conf'.
|
||||
|
||||
- Notably, '-driver' is missing as a configuration option.
|
||||
This should instead be customized with idiomatic Nix, overriding 'cfg.package' with
|
||||
the desired yt-dlp package.
|
||||
|
||||
- The systemd service has been sandboxed as much as possible. This restricts configuration of
|
||||
data and logs dir. If you really need a custom data and logs dir, use BindPaths (man systemd.exec)
|
||||
*/
|
||||
options.services.yt-dlp-web-ui = {
|
||||
enable = lib.mkEnableOption "yt-dlp-web-ui";
|
||||
package = lib.mkOption {
|
||||
type = lib.types.package;
|
||||
default = pkg;
|
||||
defaultText = lib.literalMD "`packages.default` from the yt-dlp-web-ui flake.";
|
||||
description = ''
|
||||
The yt-dlp-web-ui package to use.
|
||||
'';
|
||||
};
|
||||
|
||||
user = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "yt-dlp-web-ui";
|
||||
description = lib.mdDoc ''
|
||||
User under which yt-dlp-web-ui runs.
|
||||
'';
|
||||
};
|
||||
|
||||
group = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "yt-dlp-web-ui";
|
||||
description = lib.mdDoc ''
|
||||
Group under which yt-dlp-web-ui runs.
|
||||
'';
|
||||
};
|
||||
|
||||
openFirewall = lib.mkOption {
|
||||
type = lib.types.bool;
|
||||
default = false;
|
||||
description = lib.mdDoc ''
|
||||
Whether to open the TCP port in the firewall.
|
||||
'';
|
||||
};
|
||||
|
||||
host = lib.mkOption {
|
||||
default = "0.0.0.0";
|
||||
type = lib.types.str;
|
||||
description = lib.mdDoc ''
|
||||
Host where yt-dlp-web-ui will listen at.
|
||||
'';
|
||||
};
|
||||
|
||||
port = lib.mkOption {
|
||||
default = 3033;
|
||||
type = lib.types.port;
|
||||
description = lib.mdDoc ''
|
||||
Port where yt-dlp-web-ui will listen at.
|
||||
'';
|
||||
};
|
||||
|
||||
downloadDir = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
description = lib.mdDoc ''
|
||||
The directory where yt-dlp-web-ui stores downloads.
|
||||
'';
|
||||
};
|
||||
|
||||
queueSize = lib.mkOption {
|
||||
default = 2;
|
||||
type = lib.types.ints.unsigned; # >= 0
|
||||
description = lib.mdDoc ''
|
||||
Queue size (concurrent downloads).
|
||||
'';
|
||||
};
|
||||
|
||||
logging = lib.mkEnableOption "logging";
|
||||
|
||||
rpcAuth = lib.mkOption {
|
||||
description = lib.mdDoc ''
|
||||
RPC Authentication settings.
|
||||
'';
|
||||
default = { };
|
||||
type = lib.types.submodule {
|
||||
options = {
|
||||
enable = lib.mkEnableOption "RPC authentication";
|
||||
user = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
description = lib.mdDoc ''
|
||||
Username required for auth.
|
||||
'';
|
||||
};
|
||||
passwordFile = lib.mkOption {
|
||||
type = with lib.types; nullOr str;
|
||||
default = null;
|
||||
description = lib.mdDoc ''
|
||||
Path to the file containing the password required for auth.
|
||||
'';
|
||||
};
|
||||
insecurePasswordText = lib.mkOption {
|
||||
type = with lib.types; nullOr str;
|
||||
default = null;
|
||||
description = lib.mdDoc ''
|
||||
Raw password required for auth.
|
||||
|
||||
It's strongly recommended to use 'passwordFile' instead of this option.
|
||||
|
||||
**Don't use this option unless you know what you're doing!**.
|
||||
It writes the password to the world-readable Nix store, which is a big security risk.
|
||||
More info: https://wiki.nixos.org/wiki/Comparison_of_secret_managing_schemes
|
||||
'';
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
};
|
||||
config = lib.mkIf cfg.enable {
|
||||
assertions = [
|
||||
(lib.mkIf cfg.rpcAuth.enable {
|
||||
assertion = lib.xor (cfg.rpcAuth.passwordFile == null) (cfg.rpcAuth.insecurePasswordText == null);
|
||||
message = ''
|
||||
RPC Auth is enabled for yt-dlp-web-ui! Exactly one RPC auth password source must be set!
|
||||
|
||||
Tip: You should set 'services.yt-dlp-web-ui.rpcAuth.passwordFile'!
|
||||
'';
|
||||
})
|
||||
];
|
||||
|
||||
networking.firewall.allowedTCPPorts = lib.mkIf cfg.openFirewall [ cfg.port ];
|
||||
|
||||
users.users = lib.mkIf (cfg.user == "yt-dlp-web-ui") {
|
||||
yt-dlp-web-ui = {
|
||||
inherit (cfg) group;
|
||||
isSystemUser = true;
|
||||
};
|
||||
};
|
||||
|
||||
users.groups = lib.mkIf (cfg.group == "yt-dlp-web-ui") { yt-dlp-web-ui = { }; };
|
||||
|
||||
systemd.services.yt-dlp-web-ui = {
|
||||
description = "yt-dlp-web-ui system service";
|
||||
after = [ "network.target" ];
|
||||
path = [ cfg.package pkgs.tree ];
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
serviceConfig =
|
||||
rec {
|
||||
ExecStart =
|
||||
let
|
||||
password =
|
||||
if cfg.rpcAuth.passwordFile == null
|
||||
then cfg.rpcAuth.insecurePasswordText
|
||||
else "$(cat ${cfg.rpcAuth.passwordFile})";
|
||||
args = [
|
||||
"-host ${cfg.host}"
|
||||
"-port ${builtins.toString cfg.port}"
|
||||
''-out "${cfg.downloadDir}"''
|
||||
"-qs ${builtins.toString cfg.queueSize}"
|
||||
] ++ (lib.optionals cfg.logging [
|
||||
"-fl"
|
||||
''-lf "/var/log/${LogsDirectory}/yt-dlp-web-ui.log"''
|
||||
]) ++ (lib.optionals cfg.rpcAuth.enable [
|
||||
"-auth"
|
||||
"-user ${cfg.rpcAuth.user}"
|
||||
"-pass ${password}"
|
||||
]);
|
||||
in
|
||||
"${lib.getExe cfg.package} ${lib.concatStringsSep " " args}";
|
||||
User = cfg.user;
|
||||
Group = cfg.group;
|
||||
ProtectSystem = "strict";
|
||||
ProtectHome = "read-only";
|
||||
StateDirectory = "yt-dlp-web-ui";
|
||||
WorkingDirectory = "/var/lib/${StateDirectory}"; # equivalent to the dir above
|
||||
LogsDirectory = "yt-dlp-web-ui";
|
||||
ReadWritePaths = [
|
||||
cfg.downloadDir
|
||||
];
|
||||
BindReadOnlyPaths = [
|
||||
builtins.storeDir
|
||||
# required for youtube DNS lookup
|
||||
"${config.environment.etc."ssl/certs/ca-certificates.crt".source}:/etc/ssl/certs/ca-certificates.crt"
|
||||
] ++ lib.optionals (cfg.rpcAuth.enable && cfg.rpcAuth.passwordFile != null) [
|
||||
cfg.rpcAuth.passwordFile
|
||||
];
|
||||
CapabilityBoundingSet = "";
|
||||
RestrictAddressFamilies = [ "AF_UNIX" "AF_INET" "AF_INET6" ];
|
||||
RestrictNamespaces = true;
|
||||
PrivateDevices = true;
|
||||
PrivateUsers = true;
|
||||
ProtectClock = true;
|
||||
ProtectControlGroups = true;
|
||||
ProtectKernelLogs = true;
|
||||
ProtectKernelModules = true;
|
||||
ProtectKernelTunables = true;
|
||||
SystemCallArchitectures = "native";
|
||||
SystemCallFilter = [ "@system-service" "~@privileged" ];
|
||||
RestrictRealtime = true;
|
||||
LockPersonality = true;
|
||||
MemoryDenyWriteExecute = true;
|
||||
ProtectHostname = true;
|
||||
};
|
||||
};
|
||||
};
|
||||
}
|
||||
52
nix/server.nix
Normal file
52
nix/server.nix
Normal file
@@ -0,0 +1,52 @@
|
||||
{ yt-dlp-web-ui-frontend, buildGoModule, lib, makeWrapper, yt-dlp, ... }:
|
||||
let
|
||||
fs = lib.fileset;
|
||||
common = import ./common.nix { inherit lib; };
|
||||
in
|
||||
buildGoModule {
|
||||
pname = "yt-dlp-web-ui";
|
||||
inherit (common) version;
|
||||
src = fs.toSource rec {
|
||||
root = ../.;
|
||||
fileset = fs.difference root (fs.unions [
|
||||
### LIST OF FILES TO IGNORE ###
|
||||
# frontend (this is included by the frontend.nix drv instead)
|
||||
../frontend
|
||||
# documentation
|
||||
../examples
|
||||
# docker
|
||||
../Dockerfile
|
||||
../docker-compose.yml
|
||||
# nix
|
||||
./devShell.nix
|
||||
../.envrc
|
||||
./tests
|
||||
# make
|
||||
../Makefile # this derivation does not use the project Makefile
|
||||
# repo commons
|
||||
../.github
|
||||
../README.md
|
||||
../LICENSE.md
|
||||
../.gitignore
|
||||
../.vscode
|
||||
]);
|
||||
};
|
||||
|
||||
# https://github.com/golang/go/issues/44507
|
||||
preBuild = ''
|
||||
cp -r ${yt-dlp-web-ui-frontend} frontend
|
||||
'';
|
||||
|
||||
nativeBuildInputs = [ makeWrapper ];
|
||||
|
||||
postInstall = ''
|
||||
wrapProgram $out/bin/yt-dlp-web-ui \
|
||||
--prefix PATH : ${lib.makeBinPath [ yt-dlp ]}
|
||||
'';
|
||||
|
||||
vendorHash = "sha256-guM/U9DROJMx2ctPKBQis1YRhaf6fKvvwEWgswQKMG0=";
|
||||
|
||||
meta = common.meta // {
|
||||
mainProgram = "yt-dlp-web-ui";
|
||||
};
|
||||
}
|
||||
nix/tests/default.nix (new file): 20 changed lines
@@ -0,0 +1,20 @@
{ self, pkgs }: {
  testServiceStarts = pkgs.testers.runNixOSTest (_: {
    name = "service-starts";
    nodes = {
      machine = _: {
        imports = [
          self.nixosModules.default
        ];

        services.yt-dlp-web-ui = {
          enable = true;
          downloadDir = "/var/lib/yt-dlp-web-ui";
        };
      };
    };
    testScript = ''
      machine.wait_for_unit("yt-dlp-web-ui")
    '';
  });
}
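The test above boots a NixOS VM with the module enabled and waits for the `yt-dlp-web-ui` unit. As a hedged sketch (assuming these tests are wired into the flake's `checks` output for x86_64-linux, as `flake.nix` in this changeset does), they could be run locally with:

```sh
# Sketch only: run all flake checks, or build just the VM test by name.
nix flake check
nix build .#checks.x86_64-linux.testServiceStarts
```
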
@@ -58,6 +58,7 @@ type Format struct {
VCodec string `json:"vcodec"`
ACodec string `json:"acodec"`
Size float32 `json:"filesize_approx"`
Language string `json:"language"`
}

// struct representing the response sent to the client
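As an aside (not part of the diff), here is a minimal sketch of how the new `Language` field maps yt-dlp's `-J` JSON output onto this struct; the sample JSON values are invented for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Mirrors the fields shown in the diff; only a subset of yt-dlp's output.
type Format struct {
	VCodec   string  `json:"vcodec"`
	ACodec   string  `json:"acodec"`
	Size     float32 `json:"filesize_approx"`
	Language string  `json:"language"`
}

func main() {
	// Hypothetical fragment of `yt-dlp -J` output.
	raw := []byte(`{"vcodec":"vp9","acodec":"opus","filesize_approx":123456,"language":"en"}`)

	var f Format
	if err := json.Unmarshal(raw, &f); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", f) // {VCodec:vp9 ACodec:opus Size:123456 Language:en}
}
```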
@@ -4,7 +4,6 @@ import (
"bufio"
"errors"
"io"
"log/slog"
"os"
"os/exec"
"strconv"
@@ -12,6 +11,7 @@ import (
"time"

"github.com/marcopeocchi/yt-dlp-web-ui/server/config"
"github.com/marcopeocchi/yt-dlp-web-ui/server/internal"
)

const (
@@ -27,23 +27,24 @@ type LiveStream struct {
url string
proc *os.Process // used to manually kill the yt-dlp process
status int // whether it is monitoring or completed
log chan []byte // keeps track of the process logs while monitoring, not when started
done chan *LiveStream // where to signal the completion
waitTimeChan chan time.Duration // time to livestream start
errors chan error
waitTime time.Duration
liveDate time.Time

mq *internal.MessageQueue
db *internal.MemoryDB
}

func New(url string, log chan []byte, done chan *LiveStream) *LiveStream {
func New(url string, done chan *LiveStream, mq *internal.MessageQueue, db *internal.MemoryDB) *LiveStream {
return &LiveStream{
url: url,
done: done,
status: waiting,
waitTime: time.Second * 0,
log: log,
errors: make(chan error),
waitTimeChan: make(chan time.Duration),
mq: mq,
db: db,
}
}
@@ -52,8 +53,9 @@ func (l *LiveStream) Start() error {
cmd := exec.Command(
config.Instance().DownloaderPath,
l.url,
"--wait-for-video", "10", // wait for the stream to be live and recheck every 10 secs
"--wait-for-video", "30", // wait for the stream to be live and recheck every 30 secs
"--no-colors", // no ansi color fuzz
"--simulate",
"--newline",
"--paths", config.Instance().DownloadPath,
)
@@ -65,13 +67,6 @@ func (l *LiveStream) Start() error {
}
defer stdout.Close()

stderr, err := cmd.StderrPipe()
if err != nil {
l.status = errored
return err
}
defer stderr.Close()

if err := cmd.Start(); err != nil {
l.status = errored
return err
@@ -82,37 +77,34 @@ func (l *LiveStream) Start() error {

// Start monitoring when the livestream is going to be live.
// If already live do nothing.
doneWaiting := make(chan struct{})
go l.monitorStartTime(stdout, doneWaiting)
go l.monitorStartTime(stdout)

go func() {
<-doneWaiting
l.logFFMpeg(io.MultiReader(stdout, stderr))
}()

// Wait for the yt-dlp+ffmpeg process to finish.
// Wait for the simulated download process to finish.
cmd.Wait()

// Set the job as completed and notify the parent of the completion.
l.status = completed
l.done <- l

// cleanup
close(doneWaiting)
// Send the started livestream to the message queue! :D
p := &internal.Process{
Url: l.url,
Livestream: true,
Params: []string{"--downloader", "ffmpeg", "--no-part"},
}
l.db.Set(p)
l.mq.Publish(p)

return nil
}

func (l *LiveStream) monitorStartTime(r io.Reader, doneWait chan struct{}) {
func (l *LiveStream) monitorStartTime(r io.Reader) {
// yt-dlp shows the time in the stdout
scanner := bufio.NewScanner(r)

defer func() {
l.status = inProgress
doneWait <- struct{}{}

close(l.waitTimeChan)
close(l.errors)
}()

// however the time to live is not shown on a new line (and atm there's nothing to do about it)
@@ -164,9 +156,8 @@ func (l *LiveStream) monitorStartTime(r io.Reader, doneWait chan struct{}) {
*/
for range TRIES {
scanner.Scan()
line := scanner.Text()

if strings.Contains(line, "Waiting for") {
if strings.Contains(scanner.Text(), "Waiting for") {
waitTimeScanner()
}
}
@@ -222,11 +213,3 @@ func parseTimeSpan(timeStr string) (time.Time, error) {

return start, nil
}

func (l *LiveStream) logFFMpeg(r io.Reader) {
scanner := bufio.NewScanner(r)

for scanner.Scan() {
slog.Info("livestream ffmpeg output", slog.String("url", l.url), slog.String("stdout", scanner.Text()))
}
}
@@ -5,6 +5,7 @@ import (
"time"

"github.com/marcopeocchi/yt-dlp-web-ui/server/config"
"github.com/marcopeocchi/yt-dlp-web-ui/server/internal"
)

func setupTest() {
@@ -15,9 +16,8 @@ func TestLivestream(t *testing.T) {
setupTest()

done := make(chan *LiveStream)
log := make(chan []byte)

ls := New("https://www.youtube.com/watch?v=LSm1daKezcE", log, done)
ls := New("https://www.youtube.com/watch?v=LSm1daKezcE", done, &internal.MessageQueue{}, &internal.MemoryDB{})
go ls.Start()

time.AfterFunc(time.Second*20, func() {
@@ -5,25 +5,28 @@ import (
"log/slog"
"os"
"path/filepath"
"time"

"github.com/marcopeocchi/yt-dlp-web-ui/server/config"
"github.com/marcopeocchi/yt-dlp-web-ui/server/internal"
)

type Monitor struct {
db *internal.MemoryDB // where the just started livestream will be published
mq *internal.MessageQueue // where the just started livestream will be published
streams map[string]*LiveStream // keeps track of the livestreams
done chan *LiveStream // to signal individual processes completion
logs chan []byte // to signal individual processes completion
}

func NewMonitor() *Monitor {
func NewMonitor(mq *internal.MessageQueue, db *internal.MemoryDB) *Monitor {
return &Monitor{
mq: mq,
db: db,
streams: make(map[string]*LiveStream),
done: make(chan *LiveStream),
}
}

// Detect each livestream completion, if done remove it from the monitor.
// Detect each livestream completion, if done detach it from the monitor.
func (m *Monitor) Schedule() {
for l := range m.done {
delete(m.streams, l.url)
@@ -31,7 +34,7 @@ func (m *Monitor) Schedule() {
}

func (m *Monitor) Add(url string) {
ls := New(url, m.logs, m.done)
ls := New(url, m.done, m.mq, m.db)

go ls.Start()
m.streams[url] = ls
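For orientation, the detach-on-completion pattern behind Monitor.Schedule/Add can be shown with a self-contained sketch; the names below are illustrative stand-ins, not the repository's types:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// job stands in for a LiveStream: it reports itself on done when finished.
type job struct{ url string }

type monitor struct {
	mu   sync.Mutex
	jobs map[string]*job
	done chan *job
}

func newMonitor() *monitor {
	return &monitor{jobs: make(map[string]*job), done: make(chan *job)}
}

// schedule detaches every job that signals completion, like Monitor.Schedule.
func (m *monitor) schedule() {
	for j := range m.done {
		m.mu.Lock()
		delete(m.jobs, j.url)
		m.mu.Unlock()
	}
}

// add registers a job and starts it, like Monitor.Add.
func (m *monitor) add(url string) {
	j := &job{url: url}
	m.mu.Lock()
	m.jobs[url] = j
	m.mu.Unlock()

	go func() {
		time.Sleep(50 * time.Millisecond) // pretend to monitor a stream
		m.done <- j
	}()
}

func (m *monitor) count() int {
	m.mu.Lock()
	defer m.mu.Unlock()
	return len(m.jobs)
}

func main() {
	m := newMonitor()
	go m.schedule()

	m.add("https://example.com/live")
	time.Sleep(200 * time.Millisecond)

	fmt.Println("tracked jobs:", m.count()) // 0: the finished job detached itself
}
```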
@@ -59,11 +62,7 @@ func (m *Monitor) Status() LiveStreamStatus {
// continue
// }

status[k] = struct {
Status int
WaitTime time.Duration
LiveDate time.Time
}{
status[k] = Status{
Status: v.status,
WaitTime: v.waitTime,
LiveDate: v.liveDate,
@@ -111,8 +110,3 @@ func (m *Monitor) Restore() error {

return nil
}

// Return a fan-in logs channel
func (m *Monitor) Logs() <-chan []byte {
return m.logs
}
@@ -5,7 +5,7 @@ import "time"

type LiveStreamStatus = map[string]Status

type Status = struct {
Status int
WaitTime time.Duration
LiveDate time.Time
Status int `json:"status"`
WaitTime time.Duration `json:"waitTime"`
LiveDate time.Time `json:"liveDate"`
}
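A quick illustration, not taken from the repository, of what the added struct tags change on the wire; the field values here are arbitrary:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Same shape as the Status type in the diff.
type Status struct {
	Status   int           `json:"status"`
	WaitTime time.Duration `json:"waitTime"`
	LiveDate time.Time     `json:"liveDate"`
}

func main() {
	s := Status{
		Status:   1,
		WaitTime: 90 * time.Second,
		LiveDate: time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC),
	}

	out, _ := json.Marshal(s)
	fmt.Println(string(out))
	// With the tags:    {"status":1,"waitTime":90000000000,"liveDate":"2024-01-01T12:00:00Z"}
	// Without the tags the keys would be "Status", "WaitTime" and "LiveDate".
}
```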
@@ -13,41 +13,57 @@ import (

// In-Memory Thread-Safe Key-Value Storage with optional persistence
type MemoryDB struct {
table sync.Map
table map[string]*Process
mu sync.RWMutex
}

func NewMemoryDB() *MemoryDB {
return &MemoryDB{
table: make(map[string]*Process),
}
}

// Get a process pointer given its id
func (m *MemoryDB) Get(id string) (*Process, error) {
entry, ok := m.table.Load(id)
m.mu.RLock()
defer m.mu.RUnlock()

entry, ok := m.table[id]
if !ok {
return nil, errors.New("no process found for the given key")
}

return entry.(*Process), nil
return entry, nil
}

// Store a pointer of a process and return its id
func (m *MemoryDB) Set(process *Process) string {
id := uuid.NewString()

m.table.Store(id, process)
m.mu.Lock()
process.Id = id
m.table[id] = process
m.mu.Unlock()

return id
}

// Removes a process progress, given the process id
func (m *MemoryDB) Delete(id string) {
m.table.Delete(id)
m.mu.Lock()
delete(m.table, id)
m.mu.Unlock()
}

func (m *MemoryDB) Keys() *[]string {
var running []string

m.table.Range(func(key, value any) bool {
running = append(running, key.(string))
return true
})
m.mu.RLock()
defer m.mu.RUnlock()

for id := range m.table {
running = append(running, id)
}

return &running
}
@@ -56,16 +72,17 @@ func (m *MemoryDB) Keys() *[]string {
func (m *MemoryDB) All() *[]ProcessResponse {
running := []ProcessResponse{}

m.table.Range(func(key, value any) bool {
m.mu.RLock()
for k, v := range m.table {
running = append(running, ProcessResponse{
Id: key.(string),
Info: value.(*Process).Info,
Progress: value.(*Process).Progress,
Output: value.(*Process).Output,
Params: value.(*Process).Params,
Id: k,
Info: v.Info,
Progress: v.Progress,
Output: v.Output,
Params: v.Params,
})
return true
})
}
m.mu.RUnlock()

return &running
}
@@ -81,6 +98,8 @@ func (m *MemoryDB) Persist() error {
return errors.Join(errors.New("failed to persist session"), err)
}

m.mu.RLock()
defer m.mu.RUnlock()
session := Session{Processes: *running}

if err := gob.NewEncoder(fd).Encode(session); err != nil {
@@ -103,6 +122,9 @@ func (m *MemoryDB) Restore(mq *MessageQueue) {
return
}

m.mu.Lock()
defer m.mu.Unlock()

for _, proc := range session.Processes {
restored := &Process{
Id: proc.Id,
@@ -113,7 +135,7 @@ func (m *MemoryDB) Restore(mq *MessageQueue) {
Params: proc.Params,
}

m.table.Store(proc.Id, restored)
m.table[proc.Id] = restored

if restored.Progress.Status != StatusCompleted {
mq.Publish(restored)
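The move from sync.Map to a plain map guarded by a sync.RWMutex is a common Go pattern; here is a minimal, generic sketch of that pattern (not the repository's MemoryDB itself):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// store is a tiny RWMutex-guarded map, mirroring the new MemoryDB layout.
type store[V any] struct {
	mu    sync.RWMutex
	table map[string]V
}

func newStore[V any]() *store[V] {
	return &store[V]{table: make(map[string]V)}
}

func (s *store[V]) Get(id string) (V, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.table[id]
	if !ok {
		var zero V
		return zero, errors.New("no entry for the given key")
	}
	return v, nil
}

func (s *store[V]) Set(id string, v V) {
	s.mu.Lock()
	s.table[id] = v
	s.mu.Unlock()
}

func (s *store[V]) Delete(id string) {
	s.mu.Lock()
	delete(s.table, id)
	s.mu.Unlock()
}

func main() {
	db := newStore[string]()
	db.Set("abc", "https://example.com/video")
	v, _ := db.Get("abc")
	fmt.Println(v)
}
```

Readers such as All and Keys can then share the read lock, while Set, Delete and Restore take the exclusive write lock.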
@@ -63,13 +63,17 @@ func (m *MessageQueue) downloadConsumer() {
)

if p.Progress.Status != StatusCompleted {
p.Start()
slog.Info("started process",
slog.String("bus", queueName),
slog.String("id", p.getShortId()),
)
if p.Livestream {
// livestreams have higher priority and they ignore the semaphore
go p.Start()
} else {
p.Start()
}
}

slog.Info("started process",
slog.String("bus", queueName),
slog.String("id", p.getShortId()),
)
}, false)
}
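The consumer change above gives livestreams a fast path: they are launched in their own goroutine instead of blocking the queue. A self-contained sketch of that idea (illustrative names, not the repository's MessageQueue):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type task struct {
	name     string
	priority bool // e.g. a livestream
}

func run(t task) {
	fmt.Println("running", t.name)
	time.Sleep(10 * time.Millisecond)
}

func main() {
	jobs := make(chan task)
	var wg sync.WaitGroup

	done := make(chan struct{})
	go func() {
		defer close(done)
		for t := range jobs {
			if t.priority {
				// priority tasks are detached so they never hold up the queue
				wg.Add(1)
				go func(t task) { defer wg.Done(); run(t) }(t)
				continue
			}
			run(t) // regular tasks are processed in order by this consumer
		}
	}()

	jobs <- task{name: "video-1"}
	jobs <- task{name: "live-1", priority: true}
	jobs <- task{name: "video-2"}
	close(jobs)

	<-done
	wg.Wait()
}
```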
@@ -3,6 +3,7 @@ package internal
import (
"bufio"
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
@@ -13,15 +14,12 @@ import (
"sync"
"syscall"

"log"
"os"
"os/exec"
"strings"
"time"

"github.com/marcopeocchi/yt-dlp-web-ui/server/cli"
"github.com/marcopeocchi/yt-dlp-web-ui/server/config"
"github.com/marcopeocchi/yt-dlp-web-ui/server/rx"
)

const template = `download:
@@ -40,13 +38,14 @@ const (

// Process descriptor
type Process struct {
Id string
Url string
Params []string
Info DownloadInfo
Progress DownloadProgress
Output DownloadOutput
proc *os.Process
Id string
Url string
Livestream bool
Params []string
Info DownloadInfo
Progress DownloadProgress
Output DownloadOutput
proc *os.Process
}

// Start spawns/forks a new yt-dlp process and parses its stdout.
@@ -101,81 +100,102 @@ func (p *Process) Start() {

params := append(baseParams, p.Params...)

// ----------------- main block ----------------- //
slog.Info("requesting download", slog.String("url", p.Url), slog.Any("params", params))

cmd := exec.Command(config.Instance().DownloaderPath, params...)
cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}

r, err := cmd.StdoutPipe()
stdout, err := cmd.StdoutPipe()
if err != nil {
slog.Error(
"failed to connect to stdout",
slog.String("err", err.Error()),
)
slog.Error("failed to get a stdout pipe", slog.Any("err", err))
panic(err)
}

stderr, err := cmd.StderrPipe()
if err != nil {
slog.Error("failed to get a stderr pipe", slog.Any("err", err))
panic(err)
}

if err := cmd.Start(); err != nil {
slog.Error(
"failed to start yt-dlp process",
slog.String("err", err.Error()),
)
slog.Error("failed to start yt-dlp process", slog.Any("err", err))
panic(err)
}

p.proc = cmd.Process

// --------------- progress block --------------- //
var (
sourceChan = make(chan []byte)
doneChan = make(chan struct{})
)
ctx, cancel := context.WithCancel(context.Background())
defer func() {
stdout.Close()
p.Complete()
cancel()
}()

// spawn a goroutine that does the dirty job of parsing the stdout
// filling the channel with as many stdout lines as yt-dlp produces (producer)
logs := make(chan []byte)
go produceLogs(stdout, logs)
go p.consumeLogs(ctx, logs)

go p.detectYtDlpErrors(stderr)

cmd.Wait()
}

func produceLogs(r io.Reader, logs chan<- []byte) {
go func() {
scan := bufio.NewScanner(r)
scanner := bufio.NewScanner(r)

defer func() {
r.Close()
p.Complete()

doneChan <- struct{}{}

close(sourceChan)
close(doneChan)
}()

for scan.Scan() {
sourceChan <- scan.Bytes()
for scanner.Scan() {
logs <- scanner.Bytes()
}
}()
}

// Slows down the unmarshal operation to every 500ms
go func() {
rx.Sample(time.Millisecond*500, sourceChan, doneChan, func(event []byte) {
var progress ProgressTemplate

if err := json.Unmarshal(event, &progress); err != nil {
return
}

p.Progress = DownloadProgress{
Status: StatusDownloading,
Percentage: progress.Percentage,
Speed: progress.Speed,
ETA: progress.Eta,
}

slog.Info("progress",
func (p *Process) consumeLogs(ctx context.Context, logs <-chan []byte) {
for {
select {
case <-ctx.Done():
slog.Info("detaching from yt-dlp stdout",
slog.String("id", p.getShortId()),
slog.String("url", p.Url),
slog.String("percentage", progress.Percentage),
)
})
}()
return
case entry := <-logs:
p.parseLogEntry(entry)
}
}
}

// ------------- end progress block ------------- //
cmd.Wait()
func (p *Process) parseLogEntry(entry []byte) {
var progress ProgressTemplate

if err := json.Unmarshal(entry, &progress); err != nil {
return
}

p.Progress = DownloadProgress{
Status: StatusDownloading,
Percentage: progress.Percentage,
Speed: progress.Speed,
ETA: progress.Eta,
}

slog.Info("progress",
slog.String("id", p.getShortId()),
slog.String("url", p.Url),
slog.String("percentage", progress.Percentage),
)
}

func (p *Process) detectYtDlpErrors(r io.Reader) {
scanner := bufio.NewScanner(r)

for scanner.Scan() {
slog.Error("yt-dlp process error",
slog.String("id", p.getShortId()),
slog.String("url", p.Url),
slog.String("err", scanner.Text()),
)
}
}

// Keeps the process in the memoryDB but marks it as complete
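The refactor splits stdout handling into a producer (produceLogs) and a consumer (consumeLogs) joined by a context. Below is a reduced, runnable sketch of that split, using an in-memory reader in place of the yt-dlp stdout pipe (and closing the channel so the demo terminates, which the real producer does not do):

```go
package main

import (
	"bufio"
	"context"
	"fmt"
	"io"
	"strings"
)

// produceLogs pushes every line of r onto the channel, like the diff's producer.
func produceLogs(r io.Reader, logs chan<- []byte) {
	scanner := bufio.NewScanner(r)
	for scanner.Scan() {
		logs <- append([]byte(nil), scanner.Bytes()...) // copy: the Scanner reuses its buffer
	}
	close(logs) // sketch-only: lets the demo finish
}

// consumeLogs parses lines until the context is cancelled or the input ends.
func consumeLogs(ctx context.Context, logs <-chan []byte) {
	for {
		select {
		case <-ctx.Done():
			fmt.Println("detaching from stdout")
			return
		case entry, ok := <-logs:
			if !ok {
				return
			}
			fmt.Println("progress line:", string(entry))
		}
	}
}

func main() {
	// Stand-in for the yt-dlp stdout pipe.
	stdout := strings.NewReader("[download] 10.0%\n[download] 50.0%\n")

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel() // in the real code, cancel runs once cmd.Wait returns

	logs := make(chan []byte)
	go produceLogs(stdout, logs)
	consumeLogs(ctx, logs)
}
```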
@@ -220,6 +240,7 @@ func (p *Process) Kill() error {
}

// Returns the available format for this URL
//
// TODO: Move out from process.go
func (p *Process) GetFormats() (DownloadFormats, error) {
cmd := exec.Command(config.Instance().DownloaderPath, p.Url, "-J")
@@ -230,6 +251,12 @@ func (p *Process) GetFormats() (DownloadFormats, error) {
return DownloadFormats{}, err
}

slog.Info(
"retrieving metadata",
slog.String("caller", "getFormats"),
slog.String("url", p.Url),
)

info := DownloadFormats{URL: p.Url}
best := Format{}

@@ -240,18 +267,6 @@ func (p *Process) GetFormats() (DownloadFormats, error) {

wg.Add(2)

log.Println(
cli.BgRed, "Metadata", cli.Reset,
cli.BgBlue, "Formats", cli.Reset,
p.Url,
)

slog.Info(
"retrieving metadata",
slog.String("caller", "getFormats"),
slog.String("url", p.Url),
)

go func() {
decodingError = json.Unmarshal(stdout, &info)
wg.Done()
@@ -26,9 +26,13 @@ func ApplyRouter(args *ContainerArgs) func(chi.Router) {
r.Use(openid.Middleware)
}
r.Post("/exec", h.Exec())
r.Post("/execPlaylist", h.ExecPlaylist())
r.Post("/execLivestream", h.ExecLivestream())
r.Get("/running", h.Running())
r.Get("/version", h.GetVersion())
r.Get("/cookies", h.GetCookies())
r.Post("/cookies", h.SetCookies())
r.Delete("/cookies", h.DeleteCookies())
r.Post("/template", h.AddTemplate())
r.Get("/template/all", h.GetTemplates())
r.Delete("/template/{id}", h.DeleteTemplate())
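A hypothetical client call against the new route, assuming a locally running instance on the default port 3033, that the sub-router above is mounted under /api/v1, and that internal.DownloadRequest decodes a "url" field; none of these details are shown in this diff:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Assumptions for illustration only: mount prefix, port and request field name.
	body := []byte(`{"url": "https://www.youtube.com/watch?v=LSm1daKezcE"}`)

	resp, err := http.Post(
		"http://localhost:3033/api/v1/execLivestream",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	fmt.Println("status:", resp.Status)
}
```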
@@ -41,6 +41,51 @@ func (h *Handler) Exec() http.HandlerFunc {
}
}

func (h *Handler) ExecPlaylist() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()

w.Header().Set("Content-Type", "application/json")

var req internal.DownloadRequest

if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}

err := h.service.ExecPlaylist(req)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}

if err := json.NewEncoder(w).Encode("ok"); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
}
}

func (h *Handler) ExecLivestream() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()

w.Header().Set("Content-Type", "application/json")

var req internal.DownloadRequest

if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}

h.service.ExecLivestream(req)

err := json.NewEncoder(w).Encode("ok")
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
}
}

func (h *Handler) Running() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()
@@ -60,6 +105,27 @@ func (h *Handler) Running() http.HandlerFunc {
}
}

func (h *Handler) GetCookies() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")

cookies, err := h.service.GetCookies(r.Context())
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}

res := &internal.SetCookiesRequest{
Cookies: string(cookies),
}

if err := json.NewEncoder(w).Encode(res); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
}
}

func (h *Handler) SetCookies() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()
@@ -87,6 +153,23 @@ func (h *Handler) SetCookies() http.HandlerFunc {
}
}

func (h *Handler) DeleteCookies() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")

err := h.service.SetCookies(r.Context(), "")
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}

err = json.NewEncoder(w).Encode("ok")
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
}
}

func (h *Handler) AddTemplate() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()
@@ -4,6 +4,7 @@ import (
"context"
"database/sql"
"errors"
"io"
"os"
"os/exec"
"time"
@@ -11,12 +12,14 @@ import (
"github.com/google/uuid"
"github.com/marcopeocchi/yt-dlp-web-ui/server/config"
"github.com/marcopeocchi/yt-dlp-web-ui/server/internal"
"github.com/marcopeocchi/yt-dlp-web-ui/server/internal/livestream"
)

type Service struct {
mdb *internal.MemoryDB
db *sql.DB
mq *internal.MessageQueue
lm *livestream.Monitor
}

func (s *Service) Exec(req internal.DownloadRequest) (string, error) {
@@ -35,15 +38,39 @@ func (s *Service) Exec(req internal.DownloadRequest) (string, error) {
return id, nil
}

func (s *Service) ExecPlaylist(req internal.DownloadRequest) error {
return internal.PlaylistDetect(req, s.mq, s.mdb)
}

func (s *Service) ExecLivestream(req internal.DownloadRequest) {
s.lm.Add(req.URL)
}

func (s *Service) Running(ctx context.Context) (*[]internal.ProcessResponse, error) {
select {
case <-ctx.Done():
return nil, errors.New("context cancelled")
return nil, context.Canceled
default:
return s.mdb.All(), nil
}
}

func (s *Service) GetCookies(ctx context.Context) ([]byte, error) {
fd, err := os.Open("cookies.txt")
if err != nil {
return nil, err
}

defer fd.Close()

cookies, err := io.ReadAll(fd)
if err != nil {
return nil, err
}

return cookies, nil
}

func (s *Service) SetCookies(ctx context.Context, cookies string) error {
fd, err := os.Create("cookies.txt")
if err != nil {
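Returning the context.Canceled sentinel (instead of an ad-hoc error string) lets callers test the failure with errors.Is; a small sketch of the same idiom:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// snapshot returns data unless the context has already been cancelled.
func snapshot(ctx context.Context) ([]string, error) {
	select {
	case <-ctx.Done():
		return nil, context.Canceled
	default:
		return []string{"proc-1", "proc-2"}, nil
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // simulate a client that has already gone away

	if _, err := snapshot(ctx); errors.Is(err, context.Canceled) {
		fmt.Println("request cancelled:", err)
	}
}
```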
@@ -1,31 +0,0 @@
package rx

import "time"

// ReactiveX inspired sample function.
//
// Debounce emits the most recently emitted value from the source
// within the timespan set by the span time.Duration
//
// Soon it will be deprecated since it doesn't add anything useful.
// (It lowers the CPU usage by a negligible margin)
func Sample(span time.Duration, source chan []byte, done chan struct{}, fn func(e []byte)) {
var (
item []byte
ticker = time.NewTicker(span)
)

for {
select {
case <-ticker.C:
if item != nil {
fn(item)
}
case <-source:
item = <-source
case <-done:
ticker.Stop()
return
}
}
}
@@ -51,11 +51,13 @@ type serverConfig struct {
mdb *internal.MemoryDB
db *sql.DB
mq *internal.MessageQueue
lm *livestream.Monitor
}

func RunBlocking(cfg *RunConfig) {
var mdb internal.MemoryDB
mdb := internal.NewMemoryDB()

// ---- LOGGING ---------------------------------------------------
logWriters := []io.Writer{
os.Stdout,
logging.NewObservableLogger(), // for web-ui
@@ -84,6 +86,7 @@ func RunBlocking(cfg *RunConfig) {

// make the new logger the default one with all the new writers
slog.SetDefault(logger)
// ----------------------------------------------------------------

db, err := sql.Open("sqlite", cfg.DBPath)
if err != nil {
@@ -99,21 +102,25 @@ func RunBlocking(cfg *RunConfig) {
panic(err)
}
mq.SetupConsumers()

go mdb.Restore(mq)

lm := livestream.NewMonitor(mq, mdb)
go lm.Schedule()
go lm.Restore()

srv := newServer(serverConfig{
frontend: cfg.App,
swagger: cfg.Swagger,
host: cfg.Host,
port: cfg.Port,
mdb: &mdb,
mdb: mdb,
mq: mq,
db: db,
lm: lm,
})

go gracefulShutdown(srv, &mdb)
go autoPersist(time.Minute*5, &mdb)
go gracefulShutdown(srv, mdb)
go autoPersist(time.Minute*5, mdb)

var (
network = "tcp"
@@ -140,18 +147,14 @@
}

func newServer(c serverConfig) *http.Server {
lm := livestream.NewMonitor()
go lm.Schedule()
go lm.Restore()

go func() {
for {
lm.Persist()
c.lm.Persist()
time.Sleep(time.Minute * 5)
}
}()

service := ytdlpRPC.Container(c.mdb, c.mq, lm)
service := ytdlpRPC.Container(c.mdb, c.mq, c.lm)
rpc.Register(service)

r := chi.NewRouter()
@@ -236,6 +239,7 @@ func gracefulShutdown(srv *http.Server, db *internal.MemoryDB) {

defer func() {
db.Persist()

stop()
srv.Shutdown(context.Background())
}()
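The gracefulShutdown hunk suggests a signal-driven stop function plus srv.Shutdown; a generic, self-contained version of that pattern with os/signal.NotifyContext (an assumption about how stop() is obtained, not the repository's exact code) looks like this:

```go
package main

import (
	"context"
	"log/slog"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":3033"}

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			slog.Error("server error", slog.Any("err", err))
		}
	}()

	// stop() releases the signal handler, mirroring the stop() call in the hunk.
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()

	<-ctx.Done() // wait for SIGINT/SIGTERM

	// Persist state here (the diff calls db.Persist()), then shut the server down.
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	srv.Shutdown(shutdownCtx)
}
```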