Compare commits


68 Commits

Author  SHA1  Message  Date
ed  061db3906d  v0.11.36  2021-07-11 06:39:58 +02:00
ed  fd7df5c952  v0.11.35  2021-07-11 06:22:56 +02:00
ed  a270019147  easier to tell youre trying to watch a video that firefox cant deal with  2021-07-11 06:21:25 +02:00
ed  55e0209901  add video-player keybinds  2021-07-11 06:12:24 +02:00
ed  2b255fbbed  add in-gallery video playback  2021-07-11 03:25:46 +02:00
ed  8a2345a0fb  top of the sandwich fell off  2021-07-11 02:06:18 +02:00
ed  bfa9f535aa  more context in exceptions  2021-07-11 01:59:07 +02:00
ed  f757623ad8  make bdmv thumbnails  2021-07-09 20:09:32 +02:00
ed  3c7465e268  option to disable thumbcache eviction  2021-07-09 19:55:17 +02:00
ed  108665fc4f  v0.11.34  2021-07-09 17:12:21 +02:00
ed  ed519c9138  add performance notes  2021-07-09 17:10:37 +02:00
ed  2dd2e2c57e  discard logs in mpw  2021-07-09 17:01:11 +02:00
ed  6c3a976222  scale max-clients to mp-workers  2021-07-09 16:48:02 +02:00
ed  80cc26bd95  fix max-client limit  2021-07-09 16:33:11 +02:00
ed  970fb84fd8  hex looks better  2021-07-09 16:11:33 +02:00
ed  20cbcf6931  logging + shutdown cleanup  2021-07-09 16:07:16 +02:00
ed  8fcde2a579  move tcp accept into mp-worker  2021-07-09 15:49:36 +02:00
ed  b32d1f8ad3  make ?stack work anywhere  2021-07-09 13:46:42 +02:00
ed  03513e0cb1  effectively pointless but cool  2021-07-09 03:41:44 +02:00
ed  e041a2b197  fix centos7 support  2021-07-08 23:35:28 +02:00
ed  d7d625be2a  v0.11.33  2021-07-07 10:45:47 +02:00
ed  4121266678  v0.11.32  2021-07-06 21:58:03 +02:00
ed  22971a6be4  up2k-cli: add turbo button  2021-07-06 21:43:07 +02:00
ed  efbf8d7e0d  better handling of invalid requests  2021-07-06 01:03:09 +02:00
ed  397396ea4a  apply -nw to PUT uploads too  2021-07-06 00:49:39 +02:00
ed  e59b077c21  announce the rotates  2021-07-06 00:43:37 +02:00
ed  4bc39f3084  add logrotate  2021-07-06 00:23:51 +02:00
ed  21c3570786  detect more recursive symlinks  2021-07-05 23:50:03 +02:00
ed  2f85c1fb18  add logging to file  2021-07-05 23:30:33 +02:00
ed  1e27a4c2df  make thumb-dir.txt unretrievable  2021-07-05 00:21:33 +02:00
ed  456f575637  v0.11.31  2021-07-04 16:44:29 +02:00
ed  51546c9e64  add missing -nw check  2021-07-04 16:10:20 +02:00
ed  83b4b70ef4  add keepalive handshakes  2021-07-04 16:04:26 +02:00
ed  a5120d4f6f  parallelize handshakes  2021-07-04 01:48:01 +02:00
ed  c95941e14f  add testimonials, drop bad idea  2021-07-04 00:32:29 +02:00
ed  0dd531149d  good  2021-07-03 18:11:52 +02:00
ed  67da1b5219  add ideas  2021-07-03 17:29:49 +02:00
ed  919bd16437  add hls notes  2021-07-03 01:32:36 +02:00
ed  ecead109ab  v0.11.30  2021-07-01 22:27:19 +02:00
ed  765294c263  ignore dupe-chunk warnings; handshake takes care of it  2021-07-01 20:22:12 +02:00
ed  d6b5351207  add cachebuster because chrome ignores no-cache  2021-07-01 20:10:02 +02:00
ed  a2009bcc6b  up2k-cli: recover from tcp/dns issues on upload  2021-07-01 00:52:09 +02:00
ed  12709a8a0a  up2k-cli: recover from antivirus yanking files mid-read  2021-07-01 00:11:40 +02:00
ed  c055baefd2  up2k-client: maybe fix busy-tab (assumed linear progress)  2021-06-30 23:17:07 +02:00
ed  56522599b5  up2k-client: way faster init on large filedrops  2021-06-30 21:26:13 +02:00
ed  664f53b75d  chrome gets stuck iterating over aux.h on win10  2021-06-30 19:26:06 +02:00
ed  87200d9f10  make -nw apply to more stuff  2021-06-30 19:23:45 +02:00
ed  5c3d0b6520  catch errors in onloads  2021-06-30 17:09:37 +02:00
ed  bd49979f4a  v0.11.29  2021-06-30 01:51:57 +02:00
ed  7e606cdd9f  make search rate-control less visually confusing  2021-06-30 01:44:25 +02:00
ed  8b4b7fa794  allow opening tree nodes in a new tab  2021-06-30 01:08:20 +02:00
ed  05345ddf8b  add per-connection request counting  2021-06-30 01:00:00 +02:00
ed  66adb470ad  optional progressbar tint  2021-06-30 00:55:57 +02:00
ed  e15c8fd146  add upload pause  2021-06-30 00:34:33 +02:00
ed  0f09b98a39  scan for additional folder thumbnails  2021-06-30 00:19:39 +02:00
ed  b4d6f4e24d  american-friendly upload limits (allow additional bypass using manual text entry)  2021-06-30 00:11:23 +02:00
ed  3217fa625b  more todo  2021-06-29 23:59:15 +02:00
ed  e719ff8a47  make sfx kipu-proof  2021-06-29 23:53:57 +02:00
ed  9fcf528d45  update readme  2021-06-29 23:32:21 +02:00
ed  1ddbf5a158  update todo  2021-06-29 23:00:28 +02:00
ed  64bf4574b0  add todo maybe  2021-06-28 20:38:59 +02:00
ed  5649d26077  v0.11.28  2021-06-28 15:36:13 +02:00
ed  92f923effe  hotkey for adjusting tree width  2021-06-28 15:34:10 +02:00
ed  0d46d548b9  fix panic when zero accounts  2021-06-28 15:20:40 +02:00
ed  062df3f0c3  point control-panel link to /  2021-06-27 00:52:15 +02:00
ed  789fb53b8e  tweaks  2021-06-27 00:49:28 +02:00
ed  351db5a18f  ah yes trailing whitespace as markup my good old friend we meet again  2021-06-27 00:20:42 +02:00
ed  aabbd271c8  add debian howto  2021-06-27 00:19:37 +02:00
36 changed files with 1335 additions and 622 deletions

View File

@@ -20,8 +20,10 @@ turn your phone or raspi into a portable file server with resumable uploads/down
* top
* [quickstart](#quickstart)
* [on debian](#on-debian)
* [notes](#notes)
* [status](#status)
* [testimonials](#testimonials)
* [bugs](#bugs)
* [general bugs](#general-bugs)
* [not my bugs](#not-my-bugs)
@@ -44,6 +46,7 @@ turn your phone or raspi into a portable file server with resumable uploads/down
* [browser support](#browser-support)
* [client examples](#client-examples)
* [up2k](#up2k)
* [performance](#performance)
* [dependencies](#dependencies)
* [optional dependencies](#optional-dependencies)
* [install recommended deps](#install-recommended-deps)
@@ -68,6 +71,7 @@ some recommended options:
* `-e2dsa` enables general file indexing, see [search configuration](#search-configuration)
* `-e2ts` enables audio metadata indexing (needs either FFprobe or mutagen), see [optional dependencies](#optional-dependencies)
* `-v /mnt/music:/music:r:afoo -a foo:bar` shares `/mnt/music` as `/music`, `r`eadable by anyone, with user `foo` as `a`dmin (read/write), password `bar`
* the syntax is `-v src:dst:perm:perm:...` so local-path, url-path, and one or more permissions to set
* replace `:r:afoo` with `:rfoo` to only make the folder readable by `foo` and nobody else
* in addition to `r`ead and `a`dmin, `w`rite makes a folder write-only, so you cannot list/access files in it
* `--ls '**,*,ln,p,r'` to crash on startup if any of the volumes contain a symlink which points outside the volume, as that could give users unintended access
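
for a concrete picture of how these flags combine, here is a minimal sketch of a full invocation; the paths and the `foo:bar` account are made-up examples, not defaults:

```python
# hypothetical launch of copyparty as a python module; adjust paths/accounts to taste
import subprocess

subprocess.run([
    "python3", "-m", "copyparty",
    "-e2dsa",                          # general file indexing
    "-e2ts",                           # audio metadata indexing (FFprobe or mutagen)
    "-a", "foo:bar",                   # account "foo", password "bar"
    "-v", "/mnt/music:/music:r:afoo",  # share /mnt/music at /music; readable by anyone, admin for foo
])
```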
@@ -77,6 +81,19 @@ you may also want these, especially on servers:
* [contrib/nginx/copyparty.conf](contrib/nginx/copyparty.conf) to reverse-proxy behind nginx (for better https)
### on debian
recommended steps to enable audio metadata and thumbnails (from images and videos):
* as root, run the following:
`apt install python3 python3-pip python3-dev ffmpeg`
* then, as the user which will be running copyparty (so hopefully not root), run this:
`python3 -m pip install --user -U Pillow pillow-avif-plugin`
(skipped `pyheif-pillow-opener` because apparently debian is too old to build it)
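
a quick way to confirm those dependencies actually landed for that user is a small check script; the module names (`PIL`, `pillow_avif`, `mutagen`) are my assumption of what the packages install as, and the script is not part of copyparty:

```python
# rough sanity check for the optional dependencies; module names are assumptions
import shutil
import importlib

print("ffmpeg :", shutil.which("ffmpeg") or "missing")
print("ffprobe:", shutil.which("ffprobe") or "missing")

for mod in ("PIL", "pillow_avif", "mutagen"):
    try:
        importlib.import_module(mod)
        print(mod, "ok")
    except ImportError:
        print(mod, "missing")
```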
## notes
general:
@@ -128,6 +145,13 @@ summary: all planned features work! now please enjoy the bloatening
* ☑ editor (sure why not)
## testimonials
small collection of user feedback
`good enough`, `surprisingly correct`, `certified good software`, `just works`, `why`
# bugs
* Windows: python 3.7 and older cannot read tags with ffprobe, so use mutagen or upgrade
@@ -139,6 +163,8 @@ summary: all planned features work! now please enjoy the bloatening
* all volumes must exist / be available on startup; up2k (mtp especially) gets funky otherwise
* cannot mount something at `/d1/d2/d3` unless `d2` exists inside `d1`
* dupe files will not have metadata (audio tags etc) displayed in the file listing
* because they don't get `up` entries in the db (probably best fix) and `tx_browser` does not `lstat`
* probably more, pls let me know
## not my bugs
@@ -174,13 +200,21 @@ the browser has the following hotkeys
* `G` toggle list / grid view
* `T` toggle thumbnails / icons
* when playing audio:
* `0..9` jump to 10%..90%
* `U/O` skip 10sec back/forward
* `J/L` prev/next song
* `U/O` skip 10sec back/forward
* `0..9` jump to 10%..90%
* `P` play/pause (also starts playing the folder)
* when viewing images / playing videos:
* `J/L, Left/Right` prev/next file
* `Home/End` first/last file
* `U/O` skip 10sec back/forward
* `P/K/Space` play/pause video
* `Esc` close viewer
* when tree-sidebar is open:
* `A/D` adjust tree width
* in the grid view:
* `S` toggle multiselect
* `A/D` zoom
* shift+`A/D` zoom
## tree-mode
@@ -467,6 +501,23 @@ quick outline of the up2k protocol, see [uploading](#uploading) for the web-clie
* client does another handshake with the hashlist; server replies with OK or a list of chunks to reupload
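
reduced to pseudocode, the client side of that exchange looks roughly like this; `hash_chunks`, `handshake` and `upload_chunk` are placeholder callables for illustration, not the real up2k client API:

```python
# conceptual sketch of the up2k client loop; the three callables are placeholders
def up2k_upload(path, hash_chunks, handshake, upload_chunk):
    hashes = hash_chunks(path)          # one hash per chunk of the file
    while True:
        need = handshake(path, hashes)  # server replies with the chunks it still wants
        if not need:
            return                      # nothing missing; upload is complete
        for chunk in need:
            upload_chunk(path, chunk)   # resend only the missing chunks
        # then handshake again to confirm the server got everything
```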
# performance
defaults are good for most cases, don't mind the `cannot efficiently use multiple CPU cores` message, it's very unlikely to be a problem
below are some tweaks roughly ordered by usefulness:
* `-q` disables logging and can help a bunch, even when combined with `-lo` to redirect logs to file
* `--http-only` or `--https-only` (unless you want to support both protocols) will reduce the delay before a new connection is established
* `--hist` pointing to a fast location (ssd) will make directory listings and searches faster when `-e2d` or `-e2t` is set
* `--no-hash` when indexing a networked disk if you don't care about the actual filehashes and only want the names/tags searchable
* `-j` enables multiprocessing (actual multithreading) and can make copyparty perform better in cpu-intensive workloads, for example:
* huge amount of short-lived connections
* really heavy traffic (downloads/uploads)
...however it adds an overhead to internal communication so it might be a net loss, see if it works 4 u
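
to make the `-j` / `-nc` interaction concrete: the client limit is split evenly across the worker processes (see the `httpsrv.listen` change further down in this diff), so for example:

```python
import math

nc = 512       # -nc: total max clients
workers = 4    # -j:  number of mp-workers sharing the listening sockets
per_worker = math.ceil(nc * 1.0 / workers)
print(per_worker)  # 128 concurrent clients accepted by each worker
```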
# dependencies
* `jinja2` (is built into the SFX)
@@ -596,13 +647,15 @@ in the `scripts` folder:
roughly sorted by priority
* readme.md as epilogue
* single sha512 across all up2k chunks? maybe
* reduce up2k roundtrips
* start from a chunk index and just go
* terminate client on bad data
* logging to file
discarded ideas
* single sha512 across all up2k chunks?
* crypto.subtle cannot into streaming, would have to use hashwasm, expensive
* separate sqlite table per tag
* performance fixed by skipping some indexes (`+mt.k`)
* audio fingerprinting
@@ -617,3 +670,6 @@ discarded ideas
* nah
* look into android thumbnail cache file format
* absolutely not
* indexedDB for hashes, cfg enable/clear/sz, 2gb avail, ~9k for 1g, ~4k for 100m, 500k items before autoeviction
* blank hashlist when up-ok to skip handshake
* too many confusing side-effects

View File

@@ -1,7 +1,15 @@
# when running copyparty behind a reverse-proxy,
# make sure that copyparty allows at least as many clients as the proxy does,
# so run copyparty with -nc 512 if your nginx has the default limits
# (worker_processes 1, worker_connections 512)
# when running copyparty behind a reverse proxy,
# the following arguments are recommended:
#
# -nc 512 important, see next paragraph
# --http-only lower latency on initial connection
# -i 127.0.0.1 only accept connections from nginx
#
# -nc must match or exceed the webserver's max number of concurrent clients;
# nginx default is 512 (worker_processes 1, worker_connections 512)
#
# you may also consider adding -j0 for CPU-intensive configurations
# (not that i can really think of any good examples)
upstream cpp {
server 127.0.0.1:3923;
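
# restating the sizing rule from the comments above as a quick calculation
# (values are the nginx defaults mentioned there; the product is the usual
# nginx capacity estimate, not something copyparty computes for you):
#
#   worker_processes (1) * worker_connections (512) = 512
#   => run copyparty with -nc 512 or higher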

View File

@@ -9,6 +9,9 @@ import os
PY2 = sys.version_info[0] == 2
if PY2:
sys.dont_write_bytecode = True
unicode = unicode
else:
unicode = str
WINDOWS = False
if platform.system() == "Windows":

View File

@@ -20,7 +20,7 @@ import threading
import traceback
from textwrap import dedent
from .__init__ import E, WINDOWS, VT100, PY2
from .__init__ import E, WINDOWS, VT100, PY2, unicode
from .__version__ import S_VERSION, S_BUILD_DT, CODENAME
from .svchub import SvcHub
from .util import py_desc, align_tab, IMPLICATIONS, alltrace
@@ -31,6 +31,8 @@ try:
except:
HAVE_SSL = False
printed = ""
class RiceFormatter(argparse.HelpFormatter):
def _get_help_string(self, action):
@@ -61,8 +63,15 @@ class Dodge11874(RiceFormatter):
super(Dodge11874, self).__init__(*args, **kwargs)
def lprint(*a, **ka):
global printed
printed += " ".join(unicode(x) for x in a) + ka.get("end", "\n")
print(*a, **ka)
def warn(msg):
print("\033[1mwarning:\033[0;33m {}\033[0m\n".format(msg))
lprint("\033[1mwarning:\033[0;33m {}\033[0m\n".format(msg))
def ensure_locale():
@@ -73,7 +82,7 @@ def ensure_locale():
]:
try:
locale.setlocale(locale.LC_ALL, x)
print("Locale:", x)
lprint("Locale:", x)
break
except:
continue
@@ -94,7 +103,7 @@ def ensure_cert():
try:
if filecmp.cmp(cert_cfg, cert_insec):
print(
lprint(
"\033[33m using default TLS certificate; https will be insecure."
+ "\033[36m\n certificate location: {}\033[0m\n".format(cert_cfg)
)
@@ -123,7 +132,7 @@ def configure_ssl_ver(al):
if "help" in sslver:
avail = [terse_sslver(x[6:]) for x in flags]
avail = " ".join(sorted(avail) + ["all"])
print("\navailable ssl/tls versions:\n " + avail)
lprint("\navailable ssl/tls versions:\n " + avail)
sys.exit(0)
al.ssl_flags_en = 0
@@ -143,7 +152,7 @@ def configure_ssl_ver(al):
for k in ["ssl_flags_en", "ssl_flags_de"]:
num = getattr(al, k)
print("{}: {:8x} ({})".format(k, num, num))
lprint("{}: {:8x} ({})".format(k, num, num))
# think i need that beer now
@@ -160,13 +169,13 @@ def configure_ssl_ciphers(al):
try:
ctx.set_ciphers(al.ciphers)
except:
print("\n\033[1;31mfailed to set ciphers\033[0m\n")
lprint("\n\033[1;31mfailed to set ciphers\033[0m\n")
if not hasattr(ctx, "get_ciphers"):
print("cannot read cipher list: openssl or python too old")
lprint("cannot read cipher list: openssl or python too old")
else:
ciphers = [x["description"] for x in ctx.get_ciphers()]
print("\n ".join(["\nenabled ciphers:"] + align_tab(ciphers) + [""]))
lprint("\n ".join(["\nenabled ciphers:"] + align_tab(ciphers) + [""]))
if is_help:
sys.exit(0)
@@ -249,30 +258,32 @@ def run_argparse(argv, formatter):
),
)
# fmt: off
ap.add_argument("-c", metavar="PATH", type=str, action="append", help="add config file")
ap.add_argument("-nc", metavar="NUM", type=int, default=64, help="max num clients")
ap.add_argument("-j", metavar="CORES", type=int, default=1, help="max num cpu cores")
ap.add_argument("-a", metavar="ACCT", type=str, action="append", help="add account, USER:PASS; example [ed:wark")
ap.add_argument("-v", metavar="VOL", type=str, action="append", help="add volume, SRC:DST:FLAG; example [.::r], [/mnt/nas/music:/music:r:aed")
ap.add_argument("-ed", action="store_true", help="enable ?dots")
ap.add_argument("-emp", action="store_true", help="enable markdown plugins")
ap.add_argument("-mcr", metavar="SEC", type=int, default=60, help="md-editor mod-chk rate")
ap.add_argument("--dotpart", action="store_true", help="dotfile incomplete uploads")
ap.add_argument("--sparse", metavar="MiB", type=int, default=4, help="up2k min.size threshold (mswin-only)")
ap.add_argument("--urlform", metavar="MODE", type=str, default="print,get", help="how to handle url-forms; examples: [stash], [save,get]")
u = unicode
ap2 = ap.add_argument_group('general options')
ap2.add_argument("-c", metavar="PATH", type=u, action="append", help="add config file")
ap2.add_argument("-nc", metavar="NUM", type=int, default=64, help="max num clients")
ap2.add_argument("-j", metavar="CORES", type=int, default=1, help="max num cpu cores")
ap2.add_argument("-a", metavar="ACCT", type=u, action="append", help="add account, USER:PASS; example [ed:wark")
ap2.add_argument("-v", metavar="VOL", type=u, action="append", help="add volume, SRC:DST:FLAG; example [.::r], [/mnt/nas/music:/music:r:aed")
ap2.add_argument("-ed", action="store_true", help="enable ?dots")
ap2.add_argument("-emp", action="store_true", help="enable markdown plugins")
ap2.add_argument("-mcr", metavar="SEC", type=int, default=60, help="md-editor mod-chk rate")
ap2.add_argument("--dotpart", action="store_true", help="dotfile incomplete uploads")
ap2.add_argument("--sparse", metavar="MiB", type=int, default=4, help="up2k min.size threshold (mswin-only)")
ap2.add_argument("--urlform", metavar="MODE", type=u, default="print,get", help="how to handle url-forms; examples: [stash], [save,get]")
ap2 = ap.add_argument_group('network options')
ap2.add_argument("-i", metavar="IP", type=str, default="0.0.0.0", help="ip to bind (comma-sep.)")
ap2.add_argument("-p", metavar="PORT", type=str, default="3923", help="ports to bind (comma/range)")
ap2.add_argument("-i", metavar="IP", type=u, default="0.0.0.0", help="ip to bind (comma-sep.)")
ap2.add_argument("-p", metavar="PORT", type=u, default="3923", help="ports to bind (comma/range)")
ap2.add_argument("--rproxy", metavar="DEPTH", type=int, default=1, help="which ip to keep; 0 = tcp, 1 = origin (first x-fwd), 2 = cloudflare, 3 = nginx, -1 = closest proxy")
ap2 = ap.add_argument_group('SSL/TLS options')
ap2.add_argument("--http-only", action="store_true", help="disable ssl/tls")
ap2.add_argument("--https-only", action="store_true", help="disable plaintext")
ap2.add_argument("--ssl-ver", metavar="LIST", type=str, help="set allowed ssl/tls versions; [help] shows available versions; default is what your python version considers safe")
ap2.add_argument("--ciphers", metavar="LIST", help="set allowed ssl/tls ciphers; [help] shows available ciphers")
ap2.add_argument("--ssl-ver", metavar="LIST", type=u, help="set allowed ssl/tls versions; [help] shows available versions; default is what your python version considers safe")
ap2.add_argument("--ciphers", metavar="LIST", type=u, help="set allowed ssl/tls ciphers; [help] shows available ciphers")
ap2.add_argument("--ssl-dbg", action="store_true", help="dump some tls info")
ap2.add_argument("--ssl-log", metavar="PATH", help="log master secrets")
ap2.add_argument("--ssl-log", metavar="PATH", type=u, help="log master secrets")
ap2 = ap.add_argument_group('opt-outs')
ap2.add_argument("-nw", action="store_true", help="disable writes (benchmark)")
@@ -281,14 +292,16 @@ def run_argparse(argv, formatter):
ap2.add_argument("--no-zip", action="store_true", help="disable download as zip/tar")
ap2 = ap.add_argument_group('safety options')
ap2.add_argument("--ls", metavar="U[,V[,F]]", help="scan all volumes; arguments USER,VOL,FLAGS; example [**,*,ln,p,r]")
ap2.add_argument("--salt", type=str, default="hunter2", help="up2k file-hash salt")
ap2.add_argument("--ls", metavar="U[,V[,F]]", type=u, help="scan all volumes; arguments USER,VOL,FLAGS; example [**,*,ln,p,r]")
ap2.add_argument("--salt", type=u, default="hunter2", help="up2k file-hash salt")
ap2 = ap.add_argument_group('logging options')
ap2.add_argument("-q", action="store_true", help="quiet")
ap2.add_argument("-lo", metavar="PATH", type=u, help="logfile, example: cpp-%%Y-%%m%%d-%%H%%M%%S.txt.xz")
ap2.add_argument("--log-conn", action="store_true", help="print tcp-server msgs")
ap2.add_argument("--ihead", metavar="HEADER", action='append', help="dump incoming header")
ap2.add_argument("--lf-url", metavar="RE", type=str, default=r"^/\.cpr/|\?th=[wj]$", help="dont log URLs matching")
ap2.add_argument("--log-htp", action="store_true", help="print http-server threadpool scaling")
ap2.add_argument("--ihead", metavar="HEADER", type=u, action='append', help="dump incoming header")
ap2.add_argument("--lf-url", metavar="RE", type=u, default=r"^/\.cpr/|\?th=[wj]$", help="dont log URLs matching")
ap2 = ap.add_argument_group('admin panel options')
ap2.add_argument("--no-rescan", action="store_true", help="disable ?scan (volume reindexing)")
@@ -303,8 +316,9 @@ def run_argparse(argv, formatter):
ap2.add_argument("--th-no-webp", action="store_true", help="disable webp output")
ap2.add_argument("--th-ff-jpg", action="store_true", help="force jpg for video thumbs")
ap2.add_argument("--th-poke", metavar="SEC", type=int, default=300, help="activity labeling cooldown")
ap2.add_argument("--th-clean", metavar="SEC", type=int, default=43200, help="cleanup interval")
ap2.add_argument("--th-clean", metavar="SEC", type=int, default=43200, help="cleanup interval; 0=disabled")
ap2.add_argument("--th-maxage", metavar="SEC", type=int, default=604800, help="max folder age")
ap2.add_argument("--th-covers", metavar="N,N", type=u, default="folder.png,folder.jpg,cover.png,cover.jpg", help="folder thumbnails to stat for")
ap2 = ap.add_argument_group('database options')
ap2.add_argument("-e2d", action="store_true", help="enable up2k database")
@@ -313,24 +327,25 @@ def run_argparse(argv, formatter):
ap2.add_argument("-e2t", action="store_true", help="enable metadata indexing")
ap2.add_argument("-e2ts", action="store_true", help="enable metadata scanner, sets -e2t")
ap2.add_argument("-e2tsr", action="store_true", help="rescan all metadata, sets -e2ts")
ap2.add_argument("--hist", metavar="PATH", type=str, help="where to store volume state")
ap2.add_argument("--hist", metavar="PATH", type=u, help="where to store volume state")
ap2.add_argument("--no-hash", action="store_true", help="disable hashing during e2ds folder scans")
ap2.add_argument("--no-mutagen", action="store_true", help="use ffprobe for tags instead")
ap2.add_argument("--no-mtag-mt", action="store_true", help="disable tag-read parallelism")
ap2.add_argument("-mtm", metavar="M=t,t,t", action="append", type=str, help="add/replace metadata mapping")
ap2.add_argument("-mte", metavar="M,M,M", type=str, help="tags to index/display (comma-sep.)",
ap2.add_argument("-mtm", metavar="M=t,t,t", type=u, action="append", help="add/replace metadata mapping")
ap2.add_argument("-mte", metavar="M,M,M", type=u, help="tags to index/display (comma-sep.)",
default="circle,album,.tn,artist,title,.bpm,key,.dur,.q,.vq,.aq,ac,vc,res,.fps")
ap2.add_argument("-mtp", metavar="M=[f,]bin", action="append", type=str, help="read tag M using bin")
ap2.add_argument("-mtp", metavar="M=[f,]bin", type=u, action="append", help="read tag M using bin")
ap2.add_argument("--srch-time", metavar="SEC", type=int, default=30, help="search deadline")
ap2 = ap.add_argument_group('appearance options')
ap2.add_argument("--css-browser", metavar="L", help="URL to additional CSS to include")
ap2.add_argument("--css-browser", metavar="L", type=u, help="URL to additional CSS to include")
ap2 = ap.add_argument_group('debug options')
ap2.add_argument("--no-sendfile", action="store_true", help="disable sendfile")
ap2.add_argument("--no-scandir", action="store_true", help="disable scandir")
ap2.add_argument("--no-fastboot", action="store_true", help="wait for up2k indexing")
ap2.add_argument("--stackmon", metavar="P,S", help="write stacktrace to Path every S second")
ap2.add_argument("--no-htp", action="store_true", help="disable httpserver threadpool, create threads as-needed instead")
ap2.add_argument("--stackmon", metavar="P,S", type=u, help="write stacktrace to Path every S second")
return ap.parse_args(args=argv[1:])
# fmt: on
@@ -347,7 +362,7 @@ def main(argv=None):
desc = py_desc().replace("[", "\033[1;30m[")
f = '\033[36mcopyparty v{} "\033[35m{}\033[36m" ({})\n{}\033[0m\n'
print(f.format(S_VERSION, CODENAME, S_BUILD_DT, desc))
lprint(f.format(S_VERSION, CODENAME, S_BUILD_DT, desc))
ensure_locale()
if HAVE_SSL:
@@ -361,7 +376,7 @@ def main(argv=None):
continue
msg = "\033[1;31mWARNING:\033[0;1m\n {} \033[0;33mwas replaced with\033[0;1m {} \033[0;33mand will be removed\n\033[0m"
print(msg.format(dk, nk))
lprint(msg.format(dk, nk))
argv[idx] = nk
time.sleep(2)
@@ -415,7 +430,7 @@ def main(argv=None):
# signal.signal(signal.SIGINT, sighandler)
SvcHub(al).run()
SvcHub(al, argv, printed).run()
if __name__ == "__main__":

View File

@@ -1,8 +1,8 @@
# coding: utf-8
VERSION = (0, 11, 27)
VERSION = (0, 11, 36)
CODENAME = "the grid"
BUILD_DT = (2021, 6, 25)
BUILD_DT = (2021, 7, 11)
S_VERSION = ".".join(map(str, VERSION))
S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)

View File

@@ -10,13 +10,14 @@ import hashlib
import threading
from .__init__ import WINDOWS
from .util import IMPLICATIONS, uncyg, undot, Pebkac, fsdec, fsenc, statdir, nuprint
from .util import IMPLICATIONS, uncyg, undot, Pebkac, fsdec, fsenc, statdir
class VFS(object):
"""single level in the virtual fs"""
def __init__(self, realpath, vpath, uread=[], uwrite=[], uadm=[], flags={}):
def __init__(self, log, realpath, vpath, uread=[], uwrite=[], uadm=[], flags={}):
self.log = log
self.realpath = realpath # absolute path on host filesystem
self.vpath = vpath # absolute path in the virtual filesystem
self.uread = uread # users who can read this
@@ -62,6 +63,7 @@ class VFS(object):
return self.nodes[name].add(src, dst)
vn = VFS(
self.log,
os.path.join(self.realpath, name) if self.realpath else None,
"{}/{}".format(self.vpath, name).lstrip("/"),
self.uread,
@@ -79,7 +81,7 @@ class VFS(object):
# leaf does not exist; create and keep permissions blank
vp = "{}/{}".format(self.vpath, dst).lstrip("/")
vn = VFS(src, vp)
vn = VFS(self.log, src, vp)
vn.dbv = self.dbv or self
self.nodes[dst] = vn
return vn
@@ -181,7 +183,7 @@ class VFS(object):
"""return user-readable [fsdir,real,virt] items at vpath"""
virt_vis = {} # nodes readable by user
abspath = self.canonical(rem)
real = list(statdir(nuprint, scandir, lstat, abspath))
real = list(statdir(self.log, scandir, lstat, abspath))
real.sort()
if not rem:
for name, vn2 in sorted(self.nodes.items()):
@@ -208,8 +210,13 @@ class VFS(object):
rem, uname, scandir, incl_wo=False, lstat=lstat
)
if seen and not fsroot.startswith(seen[-1]) and fsroot in seen:
print("bailing from symlink loop,\n {}\n {}".format(seen[-1], fsroot))
if (
seen
and (not fsroot.startswith(seen[-1]) or fsroot == seen[-1])
and fsroot in seen
):
m = "bailing from symlink loop,\n prev: {}\n curr: {}\n from: {}/{}"
self.log("vfs.walk", m.format(seen[-1], fsroot, self.vpath, rem), 3)
return
seen = seen[:] + [fsroot]
@@ -242,6 +249,10 @@ class VFS(object):
if flt:
flt = {k: True for k in flt}
f1 = "{0}.hist{0}up2k.".format(os.sep)
f2a = os.sep + "dir.txt"
f2b = "{0}.hist{0}".format(os.sep)
for vpath, apath, files, rd, vd in self.walk(
"", vrem, [], uname, dots, scandir, False
):
@@ -275,7 +286,11 @@ class VFS(object):
del vd[x]
# up2k filtering based on actual abspath
files = [x for x in files if "{0}.hist{0}up2k.".format(os.sep) not in x[1]]
files = [
x
for x in files
if f1 not in x[1] and (not x[1].endswith(f2a) or f2b not in x[1])
]
for f in [{"vp": v, "ap": a, "st": n[1]} for v, a, n in files]:
yield f
@@ -466,7 +481,7 @@ class AuthSrv(object):
)
except:
m = "\n\033[1;31m\nerror in config file {} on line {}:\n\033[0m"
print(m.format(cfg_fn, self.line_ctr))
self.log(m.format(cfg_fn, self.line_ctr), 1)
raise
# case-insensitive; normalize
@@ -482,10 +497,10 @@ class AuthSrv(object):
if not mount:
# -h says our defaults are CWD at root and read/write for everyone
vfs = VFS(os.path.abspath("."), "", ["*"], ["*"])
vfs = VFS(self.log_func, os.path.abspath("."), "", ["*"], ["*"])
elif "" not in mount:
# there's volumes but no root; make root inaccessible
vfs = VFS(None, "")
vfs = VFS(self.log_func, None, "")
vfs.flags["d2d"] = True
maxdepth = 0
@@ -497,7 +512,13 @@ class AuthSrv(object):
if dst == "":
# rootfs was mapped; fully replaces the default CWD vfs
vfs = VFS(
mount[dst], dst, mread[dst], mwrite[dst], madm[dst], mflags[dst]
self.log_func,
mount[dst],
dst,
mread[dst],
mwrite[dst],
madm[dst],
mflags[dst],
)
continue
@@ -693,8 +714,10 @@ class AuthSrv(object):
self.user = user
self.iuser = {v: k for k, v in user.items()}
self.re_pwd = None
pwds = [re.escape(x) for x in self.iuser.keys()]
self.re_pwd = re.compile("=(" + "|".join(pwds) + ")([]&; ]|$)")
if pwds:
self.re_pwd = re.compile("=(" + "|".join(pwds) + ")([]&; ]|$)")
# import pprint
# pprint.pprint({"usr": user, "rd": mread, "wr": mwrite, "mnt": mount})
@@ -778,7 +801,7 @@ class AuthSrv(object):
msg = [x[1] for x in files]
if msg:
nuprint("\n".join(msg))
self.log("\n" + "\n".join(msg))
if n_bads and flag_p:
raise Exception("found symlink leaving volume, and strict is set")

View File

@@ -4,17 +4,11 @@ from __future__ import print_function, unicode_literals
import time
import threading
from .__init__ import PY2, WINDOWS, VT100
from .broker_util import try_exec
from .broker_mpw import MpWorker
from .util import mp
if PY2 and not WINDOWS:
from multiprocessing.reduction import ForkingPickler
from StringIO import StringIO as MemesIO # pylint: disable=import-error
class BrokerMp(object):
"""external api; manages MpWorkers"""
@@ -42,7 +36,6 @@ class BrokerMp(object):
proc.q_yield = q_yield
proc.nid = n
proc.clients = {}
proc.workload = 0
thr = threading.Thread(
target=self.collector, args=(proc,), name="mp-collector"
@@ -53,13 +46,6 @@ class BrokerMp(object):
self.procs.append(proc)
proc.start()
if not self.args.q:
thr = threading.Thread(
target=self.debug_load_balancer, name="mp-dbg-loadbalancer"
)
thr.daemon = True
thr.start()
def shutdown(self):
self.log("broker", "shutting down")
for n, proc in enumerate(self.procs):
@@ -89,20 +75,6 @@ class BrokerMp(object):
if dest == "log":
self.log(*args)
elif dest == "workload":
with self.mutex:
proc.workload = args[0]
elif dest == "httpdrop":
addr = args[0]
with self.mutex:
del proc.clients[addr]
if not proc.clients:
proc.workload = 0
self.hub.tcpsrv.num_clients.add(-1)
elif dest == "retq":
# response from previous ipc call
with self.retpend_mutex:
@@ -128,38 +100,9 @@ class BrokerMp(object):
returns a Queue object which eventually contains the response if want_retval
(not-impl here since nothing uses it yet)
"""
if dest == "httpconn":
sck, addr = args
sck2 = sck
if PY2:
buf = MemesIO()
ForkingPickler(buf).dump(sck)
sck2 = buf.getvalue()
proc = sorted(self.procs, key=lambda x: x.workload)[0]
proc.q_pend.put([0, dest, [sck2, addr]])
with self.mutex:
proc.clients[addr] = 50
proc.workload += 50
if dest == "listen":
for p in self.procs:
p.q_pend.put([0, dest, [args[0], len(self.procs)]])
else:
raise Exception("what is " + str(dest))
def debug_load_balancer(self):
fmt = "\033[1m{}\033[0;36m{:4}\033[0m "
if not VT100:
fmt = "({}{:4})"
last = ""
while self.procs:
msg = ""
for proc in self.procs:
msg += fmt.format(len(proc.clients), proc.workload)
if msg != last:
last = msg
with self.hub.log_mutex:
print(msg)
time.sleep(0.1)

View File

@@ -3,18 +3,13 @@ from __future__ import print_function, unicode_literals
from copyparty.authsrv import AuthSrv
import sys
import time
import signal
import threading
from .__init__ import PY2, WINDOWS
from .broker_util import ExceptionalQueue
from .httpsrv import HttpSrv
from .util import FAKE_MP
if PY2 and not WINDOWS:
import pickle # nosec
class MpWorker(object):
"""one single mp instance"""
@@ -25,10 +20,11 @@ class MpWorker(object):
self.args = args
self.n = n
self.log = self._log_disabled if args.q and not args.lo else self._log_enabled
self.retpend = {}
self.retpend_mutex = threading.Lock()
self.mutex = threading.Lock()
self.workload_thr_alive = False
# we inherited signal_handler from parent,
# replace it with something harmless
@@ -40,7 +36,6 @@ class MpWorker(object):
# instantiate all services here (TODO: inheritance?)
self.httpsrv = HttpSrv(self, True)
self.httpsrv.disconnect_func = self.httpdrop
# on winxp and some other platforms,
# use thr.join() to block all signals
@@ -53,15 +48,15 @@ class MpWorker(object):
# print('k')
pass
def log(self, src, msg, c=0):
def _log_enabled(self, src, msg, c=0):
self.q_yield.put([0, "log", [src, msg, c]])
def _log_disabled(self, src, msg, c=0):
pass
def logw(self, msg, c=0):
self.log("mp{}".format(self.n), msg, c)
def httpdrop(self, addr):
self.q_yield.put([0, "httpdrop", [addr]])
def main(self):
while True:
retq_id, dest, args = self.q_pend.get()
@@ -73,24 +68,8 @@ class MpWorker(object):
sys.exit(0)
return
elif dest == "httpconn":
sck, addr = args
if PY2:
sck = pickle.loads(sck) # nosec
if self.args.log_conn:
self.log("%s %s" % addr, "|%sC-qpop" % ("-" * 4,), c="1;30")
self.httpsrv.accept(sck, addr)
with self.mutex:
if not self.workload_thr_alive:
self.workload_thr_alive = True
thr = threading.Thread(
target=self.thr_workload, name="mpw-workload"
)
thr.daemon = True
thr.start()
elif dest == "listen":
self.httpsrv.listen(args[0], args[1])
elif dest == "retq":
# response from previous ipc call
@@ -114,16 +93,3 @@ class MpWorker(object):
self.q_yield.put([retq_id, dest, args])
return retq
def thr_workload(self):
"""announce workloads to MpSrv (the mp controller / loadbalancer)"""
# avoid locking in extract_filedata by tracking difference here
while True:
time.sleep(0.2)
with self.mutex:
if self.httpsrv.num_clients() == 0:
# no clients rn, terminate thread
self.workload_thr_alive = False
return
self.q_yield.put([0, "workload", [self.httpsrv.workload]])

View File

@@ -3,7 +3,6 @@ from __future__ import print_function, unicode_literals
import threading
from .authsrv import AuthSrv
from .httpsrv import HttpSrv
from .broker_util import ExceptionalQueue, try_exec
@@ -21,7 +20,6 @@ class BrokerThr(object):
# instantiate all services here (TODO: inheritance?)
self.httpsrv = HttpSrv(self)
self.httpsrv.disconnect_func = self.httpdrop
def shutdown(self):
# self.log("broker", "shutting down")
@@ -29,12 +27,8 @@ class BrokerThr(object):
pass
def put(self, want_retval, dest, *args):
if dest == "httpconn":
sck, addr = args
if self.args.log_conn:
self.log("%s %s" % addr, "|%sC-qpop" % ("-" * 4,), c="1;30")
self.httpsrv.accept(sck, addr)
if dest == "listen":
self.httpsrv.listen(args[0], 1)
else:
# new ipc invoking managed service in hub
@@ -51,6 +45,3 @@ class BrokerThr(object):
retq = ExceptionalQueue(1)
retq.put(rv)
return retq
def httpdrop(self, addr):
self.hub.tcpsrv.num_clients.add(-1)

View File

@@ -13,15 +13,12 @@ import ctypes
from datetime import datetime
import calendar
from .__init__ import E, PY2, WINDOWS, ANYWIN
from .__init__ import E, PY2, WINDOWS, ANYWIN, unicode
from .util import * # noqa # pylint: disable=unused-wildcard-import
from .authsrv import AuthSrv
from .szip import StreamZip
from .star import StreamTar
if not PY2:
unicode = str
NO_CACHE = {"Cache-Control": "no-cache"}
NO_STORE = {"Cache-Control": "no-store; max-age=0"}
@@ -55,7 +52,7 @@ class HttpCli(object):
def log(self, msg, c=0):
ptn = self.asrv.re_pwd
if ptn.search(msg):
if ptn and ptn.search(msg):
msg = ptn.sub(self.unpwd, msg)
self.log_func(self.log_src, msg, c)
@@ -72,9 +69,13 @@ class HttpCli(object):
if rem.startswith("/") or rem.startswith("../") or "/../" in rem:
raise Exception("that was close")
def j2(self, name, **kwargs):
def j2(self, name, **ka):
tpl = self.conn.hsrv.j2[name]
return tpl.render(**kwargs) if kwargs else tpl
if ka:
ka["ts"] = self.conn.hsrv.cachebuster()
return tpl.render(**ka)
return tpl
def run(self):
"""returns true if connection can be reused"""
@@ -94,9 +95,13 @@ class HttpCli(object):
try:
self.mode, self.req, self.http_ver = headerlines[0].split(" ")
except:
raise Pebkac(400, "bad headers:\n" + "\n".join(headerlines))
msg = " ]\n#[ ".join(headerlines)
raise Pebkac(400, "bad headers:\n#[ " + msg + " ]")
except Pebkac as ex:
self.mode = "GET"
self.req = "[junk]"
self.http_ver = "HTTP/1.1"
# self.log("pebkac at httpcli.run #1: " + repr(ex))
self.keepalive = self._check_nonfatal(ex)
self.loud_reply(unicode(ex), status=ex.code)
@@ -474,15 +479,17 @@ class HttpCli(object):
addr = self.ip.replace(":", ".")
fn = "put-{:.6f}-{}.bin".format(time.time(), addr)
path = os.path.join(fdir, fn)
if self.args.nw:
path = os.devnull
with open(fsenc(path), "wb", 512 * 1024) as f:
post_sz, _, sha_b64 = hashcopy(self.conn, reader, f)
post_sz, _, sha_b64 = hashcopy(reader, f)
vfs, vrem = vfs.get_dbv(rem)
self.conn.hsrv.broker.put(
False, "up2k.hash_file", vfs.realpath, vfs.flags, vrem, fn
)
if not self.args.nw:
vfs, vrem = vfs.get_dbv(rem)
self.conn.hsrv.broker.put(
False, "up2k.hash_file", vfs.realpath, vfs.flags, vrem, fn
)
return post_sz, sha_b64, remains, path
@@ -499,7 +506,7 @@ class HttpCli(object):
spd1 = get_spd(nbytes, self.t0)
spd2 = get_spd(self.conn.nbyte, self.conn.t0)
return spd1 + " " + spd2
return "{} {} n{}".format(spd1, spd2, self.conn.nreq)
def handle_post_multipart(self):
self.parser = MultipartParser(self.log, self.sr, self.headers)
@@ -603,13 +610,14 @@ class HttpCli(object):
os.makedirs(fsenc(dst))
except OSError as ex:
self.log("makedirs failed [{}]".format(dst))
if ex.errno == 13:
raise Pebkac(500, "the server OS denied write-access")
if not os.path.isdir(fsenc(dst)):
if ex.errno == 13:
raise Pebkac(500, "the server OS denied write-access")
if ex.errno == 17:
raise Pebkac(400, "some file got your folder name")
if ex.errno == 17:
raise Pebkac(400, "some file got your folder name")
raise Pebkac(500, min_ex())
raise Pebkac(500, min_ex())
except:
raise Pebkac(500, min_ex())
@@ -707,7 +715,7 @@ class HttpCli(object):
with open(fsenc(path), "rb+", 512 * 1024) as f:
f.seek(cstart[0])
post_sz, _, sha_b64 = hashcopy(self.conn, reader, f)
post_sz, _, sha_b64 = hashcopy(reader, f)
if sha_b64 != chash:
raise Pebkac(
@@ -874,7 +882,7 @@ class HttpCli(object):
with ren_open(fname, "wb", 512 * 1024, **open_args) as f:
f, fname = f["orz"]
self.log("writing to {}/{}".format(fdir, fname))
sz, sha512_hex, _ = hashcopy(self.conn, p_data, f)
sz, sha512_hex, _ = hashcopy(p_data, f)
if sz == 0:
raise Pebkac(400, "empty files in post")
@@ -1057,7 +1065,7 @@ class HttpCli(object):
raise Pebkac(400, "expected body, got {}".format(p_field))
with open(fsenc(fp), "wb", 512 * 1024) as f:
sz, sha512, _ = hashcopy(self.conn, p_data, f)
sz, sha512, _ = hashcopy(p_data, f)
new_lastmod = os.stat(fsenc(fp)).st_mtime
new_lastmod3 = int(new_lastmod * 1000)
@@ -1247,8 +1255,7 @@ class HttpCli(object):
if use_sendfile:
remains = sendfile_kern(lower, upper, f, self.s)
else:
actor = self.conn if self.is_mp else None
remains = sendfile_py(lower, upper, f, self.s, actor)
remains = sendfile_py(lower, upper, f, self.s)
if remains > 0:
logmsg += " \033[31m" + unicode(upper - remains) + "\033[0m"
@@ -1383,6 +1390,7 @@ class HttpCli(object):
"md_plug": "true" if self.args.emp else "false",
"md_chk_rate": self.args.mcr,
"md": boundary,
"ts": self.conn.hsrv.cachebuster(),
}
html = template.render(**targs).encode("utf-8", "replace")
html = html.split(boundary.encode("utf-8"))
@@ -1465,7 +1473,7 @@ class HttpCli(object):
raise Pebkac(500, x)
def tx_stack(self):
if not self.readable or not self.writable:
if not self.avol:
raise Pebkac(403, "not admin")
if self.args.no_stack:
@@ -1555,14 +1563,16 @@ class HttpCli(object):
raise Pebkac(404)
if self.readable:
if rem.startswith(".hist/up2k."):
if rem.startswith(".hist/up2k.") or (
rem.endswith("/dir.txt") and rem.startswith(".hist/th/")
):
raise Pebkac(403)
is_dir = stat.S_ISDIR(st.st_mode)
th_fmt = self.uparam.get("th")
if th_fmt is not None:
if is_dir:
for fn in ["folder.png", "folder.jpg"]:
for fn in self.args.th_covers.split(","):
fp = os.path.join(abspath, fn)
if os.path.exists(fp):
vrem = "{}/{}".format(vrem.rstrip("/"), fn)
@@ -1626,7 +1636,6 @@ class HttpCli(object):
url_suf = self.urlq()
is_ls = "ls" in self.uparam
ts = "" # "?{}".format(time.time())
tpl = "browser"
if "b" in self.uparam:
@@ -1651,7 +1660,6 @@ class HttpCli(object):
"vdir": quotep(self.vpath),
"vpnodes": vpnodes,
"files": [],
"ts": ts,
"perms": json.dumps(perms),
"taglist": [],
"tag_order": [],

View File

@@ -43,8 +43,8 @@ class HttpConn(object):
self.t0 = time.time()
self.stopping = False
self.nreq = 0
self.nbyte = 0
self.workload = 0
self.u2idx = None
self.log_func = hsrv.log
self.lf_url = re.compile(self.args.lf_url) if self.args.lf_url else None
@@ -183,11 +183,7 @@ class HttpConn(object):
self.sr = Unrecv(self.s)
while not self.stopping:
if self.is_mp:
self.workload += 50
if self.workload >= 2 ** 31:
self.workload = 100
self.nreq += 1
cli = HttpCli(self)
if not cli.run():
return

View File

@@ -4,6 +4,8 @@ from __future__ import print_function, unicode_literals
import os
import sys
import time
import math
import base64
import socket
import threading
@@ -24,10 +26,15 @@ except ImportError:
)
sys.exit(1)
from .__init__ import E, MACOS
from .authsrv import AuthSrv
from .__init__ import E, PY2, MACOS
from .util import spack, min_ex
from .httpconn import HttpConn
if PY2:
import Queue as queue
else:
import queue
class HttpSrv(object):
"""
@@ -42,12 +49,21 @@ class HttpSrv(object):
self.log = broker.log
self.asrv = broker.asrv
self.disconnect_func = None
self.name = "httpsrv-i{:x}".format(os.getpid())
self.mutex = threading.Lock()
self.stopping = False
self.clients = {}
self.workload = 0
self.workload_thr_alive = False
self.tp_nthr = 0 # actual
self.tp_ncli = 0 # fading
self.tp_time = None # latest worker collect
self.tp_q = None if self.args.no_htp else queue.LifoQueue()
self.srvs = []
self.ncli = 0 # exact
self.clients = {} # laggy
self.nclimax = 0
self.cb_ts = 0
self.cb_v = 0
env = jinja2.Environment()
env.loader = jinja2.FileSystemLoader(os.path.join(E.mod, "web"))
@@ -62,10 +78,105 @@ class HttpSrv(object):
else:
self.cert_path = None
if self.tp_q:
self.start_threads(4)
t = threading.Thread(target=self.thr_scaler)
t.daemon = True
t.start()
def start_threads(self, n):
self.tp_nthr += n
if self.args.log_htp:
self.log(self.name, "workers += {} = {}".format(n, self.tp_nthr), 6)
for _ in range(n):
thr = threading.Thread(
target=self.thr_poolw,
name="httpsrv-poolw",
)
thr.daemon = True
thr.start()
def stop_threads(self, n):
self.tp_nthr -= n
if self.args.log_htp:
self.log(self.name, "workers -= {} = {}".format(n, self.tp_nthr), 6)
for _ in range(n):
self.tp_q.put(None)
def thr_scaler(self):
while True:
time.sleep(2 if self.tp_ncli else 30)
with self.mutex:
self.tp_ncli = max(self.ncli, self.tp_ncli - 2)
if self.tp_nthr > self.tp_ncli + 8:
self.stop_threads(4)
def listen(self, sck, nlisteners):
self.srvs.append(sck)
self.nclimax = math.ceil(self.args.nc * 1.0 / nlisteners)
t = threading.Thread(target=self.thr_listen, args=(sck,))
t.daemon = True
t.start()
def thr_listen(self, srv_sck):
"""listens on a shared tcp server"""
ip, port = srv_sck.getsockname()
fno = srv_sck.fileno()
msg = "subscribed @ {}:{} f{}".format(ip, port, fno)
self.log(self.name, msg)
while not self.stopping:
if self.args.log_conn:
self.log(self.name, "|%sC-ncli" % ("-" * 1,), c="1;30")
if self.ncli >= self.nclimax:
self.log(self.name, "at connection limit; waiting", 3)
while self.ncli >= self.nclimax:
time.sleep(0.1)
if self.args.log_conn:
self.log(self.name, "|%sC-acc1" % ("-" * 2,), c="1;30")
try:
sck, addr = srv_sck.accept()
except (OSError, socket.error) as ex:
self.log(self.name, "accept({}): {}".format(fno, ex), c=6)
time.sleep(0.02)
continue
if self.args.log_conn:
m = "|{}C-acc2 \033[0;36m{} \033[3{}m{}".format(
"-" * 3, ip, port % 8, port
)
self.log("%s %s" % addr, m, c="1;30")
self.accept(sck, addr)
def accept(self, sck, addr):
"""takes an incoming tcp connection and creates a thread to handle it"""
if self.args.log_conn:
self.log("%s %s" % addr, "|%sC-cthr" % ("-" * 5,), c="1;30")
now = time.time()
if self.tp_time and now - self.tp_time > 300:
self.tp_q = None
if self.tp_q:
self.tp_q.put((sck, addr))
with self.mutex:
self.ncli += 1
self.tp_time = self.tp_time or now
self.tp_ncli = max(self.tp_ncli, self.ncli + 1)
if self.tp_nthr < self.ncli + 4:
self.start_threads(8)
return
if not self.args.no_htp:
m = "looks like the httpserver threadpool died; please make an issue on github and tell me the story of how you pulled that off, thanks and dog bless\n"
self.log(self.name, m, 1)
with self.mutex:
self.ncli += 1
thr = threading.Thread(
target=self.thr_client,
@@ -75,11 +186,34 @@ class HttpSrv(object):
thr.daemon = True
thr.start()
def num_clients(self):
with self.mutex:
return len(self.clients)
def thr_poolw(self):
while True:
task = self.tp_q.get()
if not task:
break
with self.mutex:
self.tp_time = None
try:
sck, addr = task
me = threading.current_thread()
me.name = "httpsrv-{}-{}".format(addr[0].split(".", 2)[-1][-6:], addr[1])
self.thr_client(sck, addr)
me.name = "httpsrv-poolw"
except:
self.log(self.name, "thr_client: " + min_ex(), 3)
def shutdown(self):
self.stopping = True
for srv in self.srvs:
try:
srv.close()
except:
pass
clients = list(self.clients.keys())
for cli in clients:
try:
@@ -87,7 +221,14 @@ class HttpSrv(object):
except:
pass
self.log("httpsrv-n", "ok bye")
if self.tp_q:
self.stop_threads(self.tp_nthr)
for _ in range(10):
time.sleep(0.05)
if self.tp_q.empty():
break
self.log("httpsrv-i" + str(os.getpid()), "ok bye")
def thr_client(self, sck, addr):
"""thread managing one tcp client"""
@@ -97,25 +238,15 @@ class HttpSrv(object):
with self.mutex:
self.clients[cli] = 0
if self.is_mp:
self.workload += 50
if not self.workload_thr_alive:
self.workload_thr_alive = True
thr = threading.Thread(
target=self.thr_workload, name="httpsrv-workload"
)
thr.daemon = True
thr.start()
fno = sck.fileno()
try:
if self.args.log_conn:
self.log("%s %s" % addr, "|%sC-crun" % ("-" * 6,), c="1;30")
self.log("%s %s" % addr, "|%sC-crun" % ("-" * 4,), c="1;30")
cli.run()
except (OSError, socket.error) as ex:
if ex.errno not in [10038, 10054, 107, 57, 9]:
if ex.errno not in [10038, 10054, 107, 57, 49, 9]:
self.log(
"%s %s" % addr,
"run({}): {}".format(fno, ex),
@@ -125,7 +256,7 @@ class HttpSrv(object):
finally:
sck = cli.s
if self.args.log_conn:
self.log("%s %s" % addr, "|%sC-cdone" % ("-" * 7,), c="1;30")
self.log("%s %s" % addr, "|%sC-cdone" % ("-" * 5,), c="1;30")
try:
fno = sck.fileno()
@@ -138,42 +269,37 @@ class HttpSrv(object):
"shut({}): {}".format(fno, ex),
c="1;30",
)
if ex.errno not in [10038, 10054, 107, 57, 9]:
if ex.errno not in [10038, 10054, 107, 57, 49, 9]:
# 10038 No longer considered a socket
# 10054 Forcibly closed by remote
# 107 Transport endpoint not connected
# 57 Socket is not connected
# 49 Can't assign requested address (wifi down)
# 9 Bad file descriptor
raise
finally:
with self.mutex:
del self.clients[cli]
self.ncli -= 1
if self.disconnect_func:
self.disconnect_func(addr) # pylint: disable=not-callable
def cachebuster(self):
if time.time() - self.cb_ts < 1:
return self.cb_v
def thr_workload(self):
"""indicates the python interpreter workload caused by this HttpSrv"""
# avoid locking in extract_filedata by tracking difference here
while True:
time.sleep(0.2)
with self.mutex:
if not self.clients:
# no clients rn, terminate thread
self.workload_thr_alive = False
self.workload = 0
return
with self.mutex:
if time.time() - self.cb_ts < 1:
return self.cb_v
total = 0
with self.mutex:
for cli in self.clients.keys():
now = cli.workload
delta = now - self.clients[cli]
if delta < 0:
# was reset in HttpCli to prevent overflow
delta = now
v = E.t0
try:
with os.scandir(os.path.join(E.mod, "web")) as dh:
for fh in dh:
inf = fh.stat(follow_symlinks=False)
v = max(v, inf.st_mtime)
except:
pass
total += delta
self.clients[cli] = now
self.workload = total
v = base64.urlsafe_b64encode(spack(b">xxL", int(v)))
self.cb_v = v.decode("ascii")[-4:]
self.cb_ts = time.time()
return self.cb_v

View File

@@ -7,12 +7,9 @@ import json
import shutil
import subprocess as sp
from .__init__ import PY2, WINDOWS
from .__init__ import PY2, WINDOWS, unicode
from .util import fsenc, fsdec, uncyg, REKOBO_LKEY
if not PY2:
unicode = str
def have_ff(cmd):
if PY2:

View File

@@ -5,11 +5,12 @@ import re
import os
import sys
import time
import shlex
import threading
from datetime import datetime, timedelta
import calendar
from .__init__ import PY2, WINDOWS, MACOS, VT100
from .__init__ import E, PY2, WINDOWS, MACOS, VT100
from .util import mp
from .authsrv import AuthSrv
from .tcpsrv import TcpSrv
@@ -28,14 +29,18 @@ class SvcHub(object):
put() can return a queue (if want_reply=True) which has a blocking get() with the response.
"""
def __init__(self, args):
def __init__(self, args, argv, printed):
self.args = args
self.argv = argv
self.logf = None
self.ansi_re = re.compile("\033\\[[^m]*m")
self.log_mutex = threading.Lock()
self.next_day = 0
self.log = self._log_disabled if args.q else self._log_enabled
if args.lo:
self._setup_logfile(printed)
# initiate all services to manage
self.asrv = AuthSrv(self.args, self.log, False)
@@ -69,6 +74,52 @@ class SvcHub(object):
self.broker = Broker(self)
def _logname(self):
dt = datetime.utcfromtimestamp(time.time())
fn = self.args.lo
for fs in "YmdHMS":
fs = "%" + fs
if fs in fn:
fn = fn.replace(fs, dt.strftime(fs))
return fn
def _setup_logfile(self, printed):
base_fn = fn = sel_fn = self._logname()
if fn != self.args.lo:
ctr = 0
# yup this is a race; if started sufficiently concurrently, two
# copyparties can grab the same logfile (considered and ignored)
while os.path.exists(sel_fn):
ctr += 1
sel_fn = "{}.{}".format(fn, ctr)
fn = sel_fn
try:
import lzma
lh = lzma.open(fn, "wt", encoding="utf-8", errors="replace", preset=0)
except:
import codecs
lh = codecs.open(fn, "w", encoding="utf-8", errors="replace")
lh.base_fn = base_fn
argv = [sys.executable] + self.argv
if hasattr(shlex, "quote"):
argv = [shlex.quote(x) for x in argv]
else:
argv = ['"{}"'.format(x) for x in argv]
msg = "[+] opened logfile [{}]\n".format(fn)
printed += msg
lh.write("t0: {:.3f}\nargv: {}\n\n{}".format(E.t0, " ".join(argv), printed))
self.logf = lh
print(msg, end="")
def run(self):
thr = threading.Thread(target=self.tcpsrv.run, name="svchub-main")
thr.daemon = True
@@ -99,9 +150,36 @@ class SvcHub(object):
print("nailed it", end="")
finally:
print("\033[0m")
if self.logf:
self.logf.close()
def _log_disabled(self, src, msg, c=0):
pass
if not self.logf:
return
with self.log_mutex:
ts = datetime.utcfromtimestamp(time.time())
ts = ts.strftime("%Y-%m%d-%H%M%S.%f")[:-3]
self.logf.write("@{} [{}] {}\n".format(ts, src, msg))
now = time.time()
if now >= self.next_day:
self._set_next_day()
def _set_next_day(self):
if self.next_day and self.logf and self.logf.base_fn != self._logname():
self.logf.close()
self._setup_logfile("")
dt = datetime.utcfromtimestamp(time.time())
# unix timestamp of next 00:00:00 (leap-seconds safe)
day_now = dt.day
while dt.day == day_now:
dt += timedelta(hours=12)
dt = dt.replace(hour=0, minute=0, second=0)
self.next_day = calendar.timegm(dt.utctimetuple())
def _log_enabled(self, src, msg, c=0):
"""handles logging from all components"""
@@ -110,14 +188,7 @@ class SvcHub(object):
if now >= self.next_day:
dt = datetime.utcfromtimestamp(now)
print("\033[36m{}\033[0m\n".format(dt.strftime("%Y-%m-%d")), end="")
# unix timestamp of next 00:00:00 (leap-seconds safe)
day_now = dt.day
while dt.day == day_now:
dt += timedelta(hours=12)
dt = dt.replace(hour=0, minute=0, second=0)
self.next_day = calendar.timegm(dt.utctimetuple())
self._set_next_day()
fmt = "\033[36m{} \033[33m{:21} \033[0m{}\n"
if not VT100:
@@ -144,20 +215,20 @@ class SvcHub(object):
except:
print(msg.encode("ascii", "replace").decode(), end="")
if self.logf:
self.logf.write(msg)
def check_mp_support(self):
vmin = sys.version_info[1]
if WINDOWS:
msg = "need python 3.3 or newer for multiprocessing;"
if PY2:
# py2 pickler doesn't support winsock
return msg
elif vmin < 3:
if PY2 or vmin < 3:
return msg
elif MACOS:
return "multiprocessing is wonky on mac osx;"
else:
msg = "need python 2.7 or 3.3+ for multiprocessing;"
if not PY2 and vmin < 3:
msg = "need python 3.3+ for multiprocessing;"
if PY2 or vmin < 3:
return msg
try:
@@ -189,5 +260,5 @@ class SvcHub(object):
if not err:
return True
else:
self.log("root", err)
self.log("svchub", err)
return False

View File

@@ -4,15 +4,14 @@ from __future__ import print_function, unicode_literals
import os
import time
import zlib
import struct
from datetime import datetime
from .sutil import errdesc
from .util import yieldfile, sanitize_fn
from .util import yieldfile, sanitize_fn, spack, sunpack
def dostime2unix(buf):
t, d = struct.unpack("<HH", buf)
t, d = sunpack(b"<HH", buf)
ts = (t & 0x1F) * 2
tm = (t >> 5) & 0x3F
@@ -36,13 +35,13 @@ def unixtime2dos(ts):
bd = ((dy - 1980) << 9) + (dm << 5) + dd
bt = (th << 11) + (tm << 5) + ts // 2
return struct.pack("<HH", bt, bd)
return spack(b"<HH", bt, bd)
def gen_fdesc(sz, crc32, z64):
ret = b"\x50\x4b\x07\x08"
fmt = "<LQQ" if z64 else "<LLL"
ret += struct.pack(fmt, crc32, sz, sz)
fmt = b"<LQQ" if z64 else b"<LLL"
ret += spack(fmt, crc32, sz, sz)
return ret
@@ -66,7 +65,7 @@ def gen_hdr(h_pos, fn, sz, lastmod, utf8, crc32, pre_crc):
req_ver = b"\x2d\x00" if z64 else b"\x0a\x00"
if crc32:
crc32 = struct.pack("<L", crc32)
crc32 = spack(b"<L", crc32)
else:
crc32 = b"\x00" * 4
@@ -87,14 +86,14 @@ def gen_hdr(h_pos, fn, sz, lastmod, utf8, crc32, pre_crc):
# however infozip does actual sz and it even works on winxp
# (same reasoning for z64 extradata later)
vsz = 0xFFFFFFFF if z64 else sz
ret += struct.pack("<LL", vsz, vsz)
ret += spack(b"<LL", vsz, vsz)
# windows support (the "?" replace below too)
fn = sanitize_fn(fn, ok="/")
bfn = fn.encode("utf-8" if utf8 else "cp437", "replace").replace(b"?", b"_")
z64_len = len(z64v) * 8 + 4 if z64v else 0
ret += struct.pack("<HH", len(bfn), z64_len)
ret += spack(b"<HH", len(bfn), z64_len)
if h_pos is not None:
# 2b comment, 2b diskno
@@ -106,12 +105,12 @@ def gen_hdr(h_pos, fn, sz, lastmod, utf8, crc32, pre_crc):
ret += b"\x01\x00\x00\x00\xa4\x81"
# 4b local-header-ofs
ret += struct.pack("<L", min(h_pos, 0xFFFFFFFF))
ret += spack(b"<L", min(h_pos, 0xFFFFFFFF))
ret += bfn
if z64v:
ret += struct.pack("<HH" + "Q" * len(z64v), 1, len(z64v) * 8, *z64v)
ret += spack(b"<HH" + b"Q" * len(z64v), 1, len(z64v) * 8, *z64v)
return ret
@@ -136,7 +135,7 @@ def gen_ecdr(items, cdir_pos, cdir_end):
need_64 = nitems == 0xFFFF or 0xFFFFFFFF in [csz, cpos]
# 2b tnfiles, 2b dnfiles, 4b dir sz, 4b dir pos
ret += struct.pack("<HHLL", nitems, nitems, csz, cpos)
ret += spack(b"<HHLL", nitems, nitems, csz, cpos)
# 2b comment length
ret += b"\x00\x00"
@@ -163,7 +162,7 @@ def gen_ecdr64(items, cdir_pos, cdir_end):
# 8b tnfiles, 8b dnfiles, 8b dir sz, 8b dir pos
cdir_sz = cdir_end - cdir_pos
ret += struct.pack("<QQQQ", len(items), len(items), cdir_sz, cdir_pos)
ret += spack(b"<QQQQ", len(items), len(items), cdir_sz, cdir_pos)
return ret
@@ -178,7 +177,7 @@ def gen_ecdr64_loc(ecdr64_pos):
ret = b"\x50\x4b\x06\x07"
# 4b cdisk, 8b start of ecdr64, 4b ndisks
ret += struct.pack("<LQL", 0, ecdr64_pos, 1)
ret += spack(b"<LQL", 0, ecdr64_pos, 1)
return ret

View File

@@ -2,11 +2,9 @@
from __future__ import print_function, unicode_literals
import re
import time
import socket
import select
from .util import chkcmd, Counter
from .util import chkcmd
class TcpSrv(object):
@@ -20,7 +18,6 @@ class TcpSrv(object):
self.args = hub.args
self.log = hub.log
self.num_clients = Counter()
self.stopping = False
ip = "127.0.0.1"
@@ -66,44 +63,13 @@ class TcpSrv(object):
for srv in self.srv:
srv.listen(self.args.nc)
ip, port = srv.getsockname()
self.log("tcpsrv", "listening @ {0}:{1}".format(ip, port))
fno = srv.fileno()
msg = "listening @ {}:{} f{}".format(ip, port, fno)
self.log("tcpsrv", msg)
if self.args.q:
print(msg)
while not self.stopping:
if self.args.log_conn:
self.log("tcpsrv", "|%sC-ncli" % ("-" * 1,), c="1;30")
if self.num_clients.v >= self.args.nc:
time.sleep(0.1)
continue
if self.args.log_conn:
self.log("tcpsrv", "|%sC-acc1" % ("-" * 2,), c="1;30")
try:
# macos throws bad-fd
ready, _, _ = select.select(self.srv, [], [])
except:
ready = []
if not self.stopping:
raise
for srv in ready:
if self.stopping:
break
sck, addr = srv.accept()
sip, sport = srv.getsockname()
if self.args.log_conn:
self.log(
"%s %s" % addr,
"|{}C-acc2 \033[0;36m{} \033[3{}m{}".format(
"-" * 3, sip, sport % 8, sport
),
c="1;30",
)
self.num_clients.add()
self.hub.broker.put(False, "httpconn", sck, addr)
self.hub.broker.put(False, "listen", srv)
def shutdown(self):
self.stopping = True

View File

@@ -9,15 +9,11 @@ import hashlib
import threading
import subprocess as sp
from .__init__ import PY2
from .__init__ import PY2, unicode
from .util import fsenc, runcmd, Queue, Cooldown, BytesIO, min_ex
from .mtag import HAVE_FFMPEG, HAVE_FFPROBE, ffprobe
if not PY2:
unicode = str
HAVE_PIL = False
HAVE_HEIF = False
HAVE_AVIF = False
@@ -53,7 +49,7 @@ except:
# https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html
# ffmpeg -formats
FMT_PIL = "bmp dib gif icns ico jpg jpeg jp2 jpx pcx png pbm pgm ppm pnm sgi tga tif tiff webp xbm dds xpm"
FMT_FF = "av1 asf avi flv m4v mkv mjpeg mjpg mpg mpeg mpg2 mpeg2 h264 avc h265 hevc mov 3gp mp4 ts mpegts nut ogv ogm rm vob webm wmv"
FMT_FF = "av1 asf avi flv m4v mkv mjpeg mjpg mpg mpeg mpg2 mpeg2 h264 avc mts h265 hevc mov 3gp mp4 ts mpegts nut ogv ogm rm vob webm wmv"
if HAVE_HEIF:
FMT_PIL += " heif heifs heic heics"
@@ -134,9 +130,10 @@ class ThumbSrv(object):
msg += ", ".join(missing)
self.log(msg, c=3)
t = threading.Thread(target=self.cleaner, name="thumb-cleaner")
t.daemon = True
t.start()
if self.args.th_clean:
t = threading.Thread(target=self.cleaner, name="thumb-cleaner")
t.daemon = True
t.start()
def log(self, msg, c=0):
self.log_func("thumb", msg, c)

View File

@@ -103,13 +103,15 @@ class Up2k(object):
self.deferred_init()
else:
t = threading.Thread(
target=self.deferred_init,
name="up2k-deferred-init",
target=self.deferred_init, name="up2k-deferred-init", args=(0.5,)
)
t.daemon = True
t.start()
def deferred_init(self):
def deferred_init(self, wait=0):
if wait:
time.sleep(wait)
all_vols = self.asrv.vfs.all_vols
have_e2d = self.init_indexes(all_vols)
@@ -342,7 +344,15 @@ class Up2k(object):
for k, v in flags.items()
]
if a:
self.log(" ".join(sorted(a)) + "\033[0m")
vpath = "?"
for k, v in self.asrv.vfs.all_vols.items():
if v.realpath == ptop:
vpath = k
if vpath:
vpath += "/"
self.log("/{} {}".format(vpath, " ".join(sorted(a))), "35")
reg = {}
path = os.path.join(histpath, "up2k.snap")
@@ -401,7 +411,7 @@ class Up2k(object):
if WINDOWS:
excl = [x.replace("/", "\\") for x in excl]
n_add = self._build_dir(dbw, top, set(excl), top, nohash)
n_add = self._build_dir(dbw, top, set(excl), top, nohash, [])
n_rm = self._drop_lost(dbw[0], top)
if dbw[1]:
self.log("commit {} new files".format(dbw[1]))
@@ -409,11 +419,25 @@ class Up2k(object):
return True, n_add or n_rm or do_vac
def _build_dir(self, dbw, top, excl, cdir, nohash):
def _build_dir(self, dbw, top, excl, cdir, nohash, seen):
rcdir = cdir
if not ANYWIN:
try:
# a bit expensive but worth
rcdir = os.path.realpath(cdir)
except:
pass
if rcdir in seen:
m = "bailing from symlink loop,\n prev: {}\n curr: {}\n from: {}"
self.log(m.format(seen[-1], rcdir, cdir), 3)
return 0
seen = seen + [cdir]
self.pp.msg = "a{} {}".format(self.pp.n, cdir)
histpath = self.asrv.vfs.histtab[top]
ret = 0
g = statdir(self.log, not self.args.no_scandir, False, cdir)
g = statdir(self.log_func, not self.args.no_scandir, False, cdir)
for iname, inf in sorted(g):
abspath = os.path.join(cdir, iname)
lmod = int(inf.st_mtime)
@@ -422,7 +446,7 @@ class Up2k(object):
if abspath in excl or abspath == histpath:
continue
# self.log(" dir: {}".format(abspath))
ret += self._build_dir(dbw, top, excl, abspath, nohash)
ret += self._build_dir(dbw, top, excl, abspath, nohash, seen)
else:
# self.log("file: {}".format(abspath))
rp = abspath[len(top) + 1 :]
@@ -1019,7 +1043,8 @@ class Up2k(object):
break
except:
# missing; restart
job = None
if not self.args.nw:
job = None
break
else:
# file contents match, but not the path
@@ -1046,8 +1071,9 @@ class Up2k(object):
pdir = os.path.join(cj["ptop"], cj["prel"])
job["name"] = self._untaken(pdir, cj["name"], now, cj["addr"])
dst = os.path.join(job["ptop"], job["prel"], job["name"])
os.unlink(fsenc(dst)) # TODO ed pls
self._symlink(src, dst)
if not self.args.nw:
os.unlink(fsenc(dst)) # TODO ed pls
self._symlink(src, dst)
if not job:
job = {
@@ -1089,6 +1115,9 @@ class Up2k(object):
}
def _untaken(self, fdir, fname, ts, ip):
if self.args.nw:
return fname
# TODO broker which avoid this race and
# provides a new filename if taken (same as bup)
suffix = ".{:.6f}-{}".format(ts, ip)
@@ -1098,6 +1127,9 @@ class Up2k(object):
def _symlink(self, src, dst):
# TODO store this in linktab so we never delete src if there are links to it
self.log("linking dupe:\n {0}\n {1}".format(src, dst))
if self.args.nw:
return
try:
lsrc = src
ldst = dst
@@ -1175,6 +1207,10 @@ class Up2k(object):
if ret > 0:
return ret, src
if self.args.nw:
# del self.registry[ptop][wark]
return ret, dst
atomic_move(src, dst)
if ANYWIN:
@@ -1284,6 +1320,10 @@ class Up2k(object):
if self.args.dotpart:
tnam = "." + tnam
if self.args.nw:
job["tnam"] = tnam
return
suffix = ".{:.6f}-{}".format(job["t0"], job["addr"])
with ren_open(tnam, "wb", fdir=pdir, suffix=suffix) as f:
f, job["tnam"] = f["orz"]
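
The _build_dir change in this diff guards against recursive symlinks: each directory is resolved with os.path.realpath and the walk bails if the resolved path was already visited on the current descent. A simplified standalone sketch of the same idea (the committed code does slightly different bookkeeping and logs instead of printing):

import os

def walk(cdir, seen=()):
    rcdir = os.path.realpath(cdir)  # a bit expensive but catches symlink loops
    if rcdir in seen:
        print("bailing from symlink loop: {} resolves to {}".format(cdir, rcdir))
        return
    seen = seen + (rcdir,)
    for name in sorted(os.listdir(cdir)):
        abspath = os.path.join(cdir, name)
        if os.path.isdir(abspath):
            walk(abspath, seen)
        else:
            print("file:", abspath)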

View File

@@ -42,6 +42,20 @@ else:
from Queue import Queue # pylint: disable=import-error,no-name-in-module
from StringIO import StringIO as BytesIO
try:
struct.unpack(b">i", b"idgi")
spack = struct.pack
sunpack = struct.unpack
except:
def spack(f, *a, **ka):
return struct.pack(f.decode("ascii"), *a, **ka)
def sunpack(f, *a, **ka):
return struct.unpack(f.decode("ascii"), *a, **ka)
surrogateescape.register_surrogateescape()
FS_ENCODING = sys.getfilesystemencoding()
if WINDOWS and PY2:
@@ -123,20 +137,6 @@ REKOBO_KEY = {
REKOBO_LKEY = {k.lower(): v for k, v in REKOBO_KEY.items()}
class Counter(object):
def __init__(self, v=0):
self.v = v
self.mutex = threading.Lock()
def add(self, delta=1):
with self.mutex:
self.v += delta
def set(self, absval):
with self.mutex:
self.v = absval
class Cooldown(object):
def __init__(self, maxage):
self.maxage = maxage
@@ -231,7 +231,7 @@ def nuprint(msg):
def rice_tid():
tid = threading.current_thread().ident
c = struct.unpack(b"B" * 5, struct.pack(b">Q", tid)[-5:])
c = sunpack(b"B" * 5, spack(b">Q", tid)[-5:])
return "".join("\033[1;37;48;5;{}m{:02x}".format(x, x) for x in c) + "\033[0m"
@@ -284,13 +284,11 @@ def alltrace():
def min_ex():
et, ev, tb = sys.exc_info()
tb = traceback.extract_tb(tb, 2)
ex = [
"{} @ {} <{}>: {}".format(fp.split(os.sep)[-1], ln, fun, txt)
for fp, ln, fun, txt in tb
]
ex.append("{}: {}".format(et.__name__, ev))
return "\n".join(ex)
tb = traceback.extract_tb(tb)
fmt = "{} @ {} <{}>: {}"
ex = [fmt.format(fp.split(os.sep)[-1], ln, fun, txt) for fp, ln, fun, txt in tb]
ex.append("[{}] {}".format(et.__name__, ev))
return "\n".join(ex[-8:])
@contextlib.contextmanager
@@ -904,16 +902,10 @@ def yieldfile(fn):
yield buf
def hashcopy(actor, fin, fout):
is_mp = actor.is_mp
def hashcopy(fin, fout):
hashobj = hashlib.sha512()
tlen = 0
for buf in fin:
if is_mp:
actor.workload += 1
if actor.workload > 2 ** 31:
actor.workload = 100
tlen += len(buf)
hashobj.update(buf)
fout.write(buf)
@@ -924,15 +916,10 @@ def hashcopy(actor, fin, fout):
return tlen, hashobj.hexdigest(), digest_b64
def sendfile_py(lower, upper, f, s, actor=None):
def sendfile_py(lower, upper, f, s):
remains = upper - lower
f.seek(lower)
while remains > 0:
if actor:
actor.workload += 1
if actor.workload > 2 ** 31:
actor.workload = 100
# time.sleep(0.01)
buf = f.read(min(1024 * 32, remains))
if not buf:
@@ -979,8 +966,7 @@ def statdir(logger, scandir, lstat, top):
try:
yield [fsdec(fh.name), fh.stat(follow_symlinks=not lstat)]
except Exception as ex:
msg = "scan-stat: \033[36m{} @ {}"
logger(msg.format(repr(ex), fsdec(fh.path)))
logger(src, "[s] {} @ {}".format(repr(ex), fsdec(fh.path)), 6)
else:
src = "listdir"
fun = os.lstat if lstat else os.stat
@@ -989,11 +975,10 @@ def statdir(logger, scandir, lstat, top):
try:
yield [fsdec(name), fun(abspath)]
except Exception as ex:
msg = "list-stat: \033[36m{} @ {}"
logger(msg.format(repr(ex), fsdec(abspath)))
logger(src, "[s] {} @ {}".format(repr(ex), fsdec(abspath)), 6)
except Exception as ex:
logger("{}: \033[31m{} @ {}".format(src, repr(ex), top))
logger(src, "{} @ {}".format(repr(ex), top), 1)
def unescape_cookie(orig):
@@ -1035,7 +1020,7 @@ def guess_mime(url, fallback="application/octet-stream"):
if ";" not in ret:
if ret.startswith("text/") or ret.endswith("/javascript"):
ret += "; charset=UTF-8"
return ret
@@ -1070,10 +1055,7 @@ def gzip_orig_sz(fn):
with open(fsenc(fn), "rb") as f:
f.seek(-4, 2)
rv = f.read(4)
try:
return struct.unpack(b"I", rv)[0]
except:
return struct.unpack("I", rv)[0]
return sunpack(b"I", rv)[0]
def py_desc():
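
The spack/sunpack helpers added at the top of this diff exist because some interpreters reject bytes format strings in struct; probing once at import time and decoding on the fly lets every caller pass bytes unconditionally. The same pattern reproduced standalone, with a usage line added:

import struct

try:
    struct.unpack(b">i", b"idgi")  # probe: does struct accept bytes formats?
    spack = struct.pack
    sunpack = struct.unpack
except Exception:
    def spack(f, *a, **ka):
        return struct.pack(f.decode("ascii"), *a, **ka)

    def sunpack(f, *a, **ka):
        return struct.unpack(f.decode("ascii"), *a, **ka)

print(sunpack(b"<H", spack(b"<H", 1337)))  # (1337,)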

View File

@@ -28,7 +28,8 @@ window.baguetteBox = (function () {
isOverlayVisible = false,
touch = {}, // start-pos
touchFlag = false, // busy
regex = /.+\.(gif|jpe?g|png|webp)/i,
re_i = /.+\.(gif|jpe?g|png|webp)(\?|$)/i,
re_v = /.+\.(webm|mp4)(\?|$)/i,
data = {}, // all galleries
imagesElements = [],
documentLastFocus = null;
@@ -96,10 +97,6 @@ window.baguetteBox = (function () {
data[selector] = selectorData;
[].forEach.call(galleryNodeList, function (galleryElement) {
if (userOptions && userOptions.filter) {
regex = userOptions.filter;
}
var tagsNodeList = [];
if (galleryElement.tagName === 'A') {
tagsNodeList = [galleryElement];
@@ -109,7 +106,7 @@ window.baguetteBox = (function () {
tagsNodeList = [].filter.call(tagsNodeList, function (element) {
if (element.className.indexOf(userOptions && userOptions.ignoreClass) === -1) {
return regex.test(element.href);
return re_i.test(element.href) || re_v.test(element.href);
}
});
if (tagsNodeList.length === 0) {
@@ -119,7 +116,7 @@ window.baguetteBox = (function () {
var gallery = [];
[].forEach.call(tagsNodeList, function (imageElement, imageIndex) {
var imageElementClickHandler = function (event) {
if (event && event.ctrlKey)
if (event && (event.ctrlKey || event.metaKey))
return true;
event.preventDefault ? event.preventDefault() : event.returnValue = false;
@@ -209,24 +206,36 @@ window.baguetteBox = (function () {
bindEvents();
}
function keyDownHandler(event) {
switch (event.keyCode) {
case 37: // Left
showPreviousImage();
break;
case 39: // Right
showNextImage();
break;
case 27: // Esc
hideOverlay();
break;
case 36: // Home
showFirstImage(event);
break;
case 35: // End
showLastImage(event);
break;
}
function keyDownHandler(e) {
if (e.ctrlKey || e.altKey || e.metaKey || e.isComposing)
return;
var k = e.code + '';
if (k == "ArrowLeft" || k == "KeyJ")
showPreviousImage();
else if (k == "ArrowRight" || k == "KeyL")
showNextImage();
else if (k == "Escape")
hideOverlay();
else if (k == "Home")
showFirstImage(e);
else if (k == "End")
showLastImage(e);
else if (k == "Space" || k == "KeyP" || k == "KeyK")
playpause();
else if (k == "KeyU" || k == "KeyO")
relseek(k == "KeyU" ? -10 : 10);
}
function keyUpHandler(e) {
if (e.ctrlKey || e.altKey || e.metaKey || e.isComposing)
return;
var k = e.code + '';
if (k == "Space")
ev(e);
}
var passiveSupp = false;
@@ -325,6 +334,7 @@ window.baguetteBox = (function () {
}
bind(document, 'keydown', keyDownHandler);
bind(document, 'keyup', keyUpHandler);
currentIndex = chosenImageIndex;
touch = {
count: 0,
@@ -366,6 +376,7 @@ window.baguetteBox = (function () {
function hideOverlay(e) {
ev(e);
playvid(false);
if (options.noScrollbars) {
document.documentElement.style.overflowY = 'auto';
document.body.style.overflowY = 'auto';
@@ -375,6 +386,7 @@ window.baguetteBox = (function () {
}
unbind(document, 'keydown', keyDownHandler);
unbind(document, 'keyup', keyUpHandler);
// Fade out and hide the overlay
overlay.className = '';
setTimeout(function () {
@@ -398,8 +410,8 @@ window.baguetteBox = (function () {
return; // out-of-bounds or gallery dirty
}
if (imageContainer.getElementsByTagName('img')[0]) {
// image is loaded, cb and bail
if (imageContainer.querySelector('img, video')) {
// was loaded, cb and bail
if (callback) {
callback();
}
@@ -408,7 +420,7 @@ window.baguetteBox = (function () {
var imageElement = galleryItem.imageElement,
imageSrc = imageElement.href,
thumbnailElement = imageElement.getElementsByTagName('img')[0],
thumbnailElement = imageElement.querySelector('img, video'),
imageCaption = typeof options.captions === 'function' ?
options.captions.call(currentGallery, imageElement) :
imageElement.getAttribute('data-caption') || imageElement.title;
@@ -428,16 +440,20 @@ window.baguetteBox = (function () {
}
imageContainer.appendChild(figure);
var image = mknod('img');
image.onload = function () {
var is_vid = re_v.test(imageSrc),
image = mknod(is_vid ? 'video' : 'img');
clmod(imageContainer, 'vid', is_vid);
image.addEventListener(is_vid ? 'loadedmetadata' : 'load', function () {
// Remove loader element
var spinner = document.querySelector('#baguette-img-' + index + ' .baguetteBox-spinner');
figure.removeChild(spinner);
if (!options.async && callback) {
if (!options.async && callback)
callback();
}
};
});
image.setAttribute('src', imageSrc);
image.setAttribute('controls', 'controls');
image.alt = thumbnailElement ? thumbnailElement.alt || '' : '';
if (options.titleTag && imageCaption) {
image.title = imageCaption;
@@ -498,6 +514,7 @@ window.baguetteBox = (function () {
return false;
}
playvid(false);
currentIndex = index;
loadImage(currentIndex, function () {
preloadNext(currentIndex);
@@ -512,6 +529,26 @@ window.baguetteBox = (function () {
return true;
}
function vid() {
return imagesElements[currentIndex].querySelector('video');
}
function playvid(play) {
if (vid())
vid()[play ? 'play' : 'pause']();
}
function playpause() {
var v = vid();
if (v)
v[v.paused ? "play" : "pause"]();
}
function relseek(sec) {
if (vid())
vid().currentTime += sec;
}
/**
* Triggers the bounce animation
* @param {('left'|'right')} direction - Direction of the movement
@@ -534,6 +571,8 @@ window.baguetteBox = (function () {
} else {
slider.style.transform = 'translate3d(' + offset + ',0,0)';
}
playvid(false);
playvid(true);
}
function preloadNext(index) {
@@ -566,6 +605,7 @@ window.baguetteBox = (function () {
unbindEvents();
clearCachedData();
unbind(document, 'keydown', keyDownHandler);
unbind(document, 'keyup', keyUpHandler);
document.getElementsByTagName('body')[0].removeChild(ebi('baguetteBox-overlay'));
data = {};
currentGallery = [];
@@ -577,6 +617,8 @@ window.baguetteBox = (function () {
show: show,
showNext: showNextImage,
showPrevious: showPreviousImage,
relseek: relseek,
playpause: playpause,
hide: hideOverlay,
destroy: destroyPlugin
};

View File

@@ -29,10 +29,10 @@ body {
position: fixed;
max-width: 34em;
background: #222;
border: 0 solid #555;
border: 0 solid #777;
overflow: hidden;
margin-top: 1em;
padding: 0 1em;
padding: 0 1.3em;
height: 0;
opacity: .1;
transition: opacity 0.14s, height 0.14s, padding 0.14s;
@@ -40,19 +40,31 @@ body {
border-radius: .4em;
z-index: 9001;
}
#tt.b {
padding: 0 2em;
border-radius: .5em;
box-shadow: 0 .2em 1em #000;
}
#tt.show {
padding: 1em;
padding: 1em 1.3em;
border-width: .4em 0;
height: auto;
border-width: .2em 0;
opacity: 1;
}
#tt.show.b {
padding: 1.5em 2em;
border-width: .5em 0;
}
#tt code {
background: #3c3c3c;
padding: .2em .3em;
padding: .1em .3em;
border-top: 1px solid #777;
border-radius: .3em;
font-family: monospace, monospace;
line-height: 2em;
line-height: 1.7em;
}
#tt em {
color: #f6a;
}
#path,
#path * {
@@ -607,7 +619,7 @@ input.eq_gain {
#srch_q {
white-space: pre;
color: #f80;
height: 1em;
min-height: 1em;
margin: .2em 0 -1em 1.6em;
}
#tq_raw {
@@ -812,11 +824,13 @@ input.eq_gain {
border-bottom: 1px solid #555;
}
#thumbs,
#au_osd_cv {
#au_osd_cv,
#u2tdate {
opacity: .3;
}
#griden.on+#thumbs,
#au_os_ctl.on+#au_osd_cv {
#au_os_ctl.on+#au_osd_cv,
#u2turbo.on+#u2tdate {
opacity: 1;
}
#ghead {
@@ -921,13 +935,16 @@ html.light {
}
html.light #tt {
background: #fff;
border-color: #888;
border-color: #888 #000 #777 #000;
box-shadow: 0 .3em 1em rgba(0,0,0,0.4);
}
html.light #tt code {
background: #060;
color: #fff;
}
html.light #tt em {
color: #d38;
}
html.light #ops,
html.light .opbox,
html.light #srch_form {
@@ -1157,7 +1174,8 @@ html.light #tree::-webkit-scrollbar {
margin: 0;
height: 100%;
}
#baguetteBox-overlay .full-image img {
#baguetteBox-overlay .full-image img,
#baguetteBox-overlay .full-image video {
display: inline-block;
width: auto;
height: auto;
@@ -1166,6 +1184,9 @@ html.light #tree::-webkit-scrollbar {
vertical-align: middle;
box-shadow: 0 0 8px rgba(0, 0, 0, 0.6);
}
#baguetteBox-overlay .full-image video {
background: #333;
}
#baguetteBox-overlay .full-image figcaption {
display: block;
position: absolute;

View File

@@ -6,10 +6,10 @@
<title>⇆🎉 {{ title }}</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.8">
<link rel="stylesheet" type="text/css" media="screen" href="/.cpr/browser.css{{ ts }}">
<link rel="stylesheet" type="text/css" media="screen" href="/.cpr/upload.css{{ ts }}">
<link rel="stylesheet" type="text/css" media="screen" href="/.cpr/browser.css?_={{ ts }}">
<link rel="stylesheet" type="text/css" media="screen" href="/.cpr/upload.css?_={{ ts }}">
{%- if css %}
<link rel="stylesheet" type="text/css" media="screen" href="{{ css }}{{ ts }}">
<link rel="stylesheet" type="text/css" media="screen" href="{{ css }}?_={{ ts }}">
{%- endif %}
</head>
@@ -110,7 +110,7 @@
<div id="epi" class="logue">{{ logues[1] }}</div>
<h2><a href="?h">control-panel</a></h2>
<h2><a href="/?h">control-panel</a></h2>
</div>
@@ -127,9 +127,9 @@
have_tags_idx = {{ have_tags_idx|tojson }},
have_zip = {{ have_zip|tojson }};
</script>
<script src="/.cpr/util.js{{ ts }}"></script>
<script src="/.cpr/browser.js{{ ts }}"></script>
<script src="/.cpr/up2k.js{{ ts }}"></script>
<script src="/.cpr/util.js?_={{ ts }}"></script>
<script src="/.cpr/browser.js?_={{ ts }}"></script>
<script src="/.cpr/up2k.js?_={{ ts }}"></script>
</body>
</html>
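
The template changes above are the cachebuster: every static asset URL gets a ?_={{ ts }} suffix so browsers that ignore no-cache still fetch fresh copies after an upgrade. A minimal Jinja2 sketch of the idea (variable name and value are illustrative):

import jinja2

tpl = jinja2.Template(
    '<link rel="stylesheet" href="/.cpr/browser.css?_={{ ts }}">\n'
    '<script src="/.cpr/up2k.js?_={{ ts }}"></script>'
)

# any token that changes per release works; a version string is the obvious pick
print(tpl.render(ts="0.11.36"))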

View File

@@ -78,7 +78,7 @@ ebi('op_up2k').innerHTML = (
' <tr>\n' +
' <td>\n' +
' <a href="#" id="nthread_sub">&ndash;</a><input\n' +
' class="txtbox" id="nthread" value="2"/><a\n' +
' class="txtbox" id="nthread" value="2" tt="pause uploads by setting it to 0"/><a\n' +
' href="#" id="nthread_add">+</a><br />&nbsp;\n' +
' </td>\n' +
' </tr>\n' +
@@ -133,6 +133,13 @@ ebi('op_cfg').innerHTML = (
(have_zip ? (
'<div><h3>folder download</h3><div id="arc_fmt"></div></div>\n'
) : '') +
'<div>\n' +
' <h3>up2k switches</h3>\n' +
' <div>\n' +
' <a id="u2turbo" class="tgl btn ttb" href="#" tt="the yolo button, you probably DO NOT want to enable this:$N$Nuse this if you were uploading a huge amount of files and had to restart for some reason, and want to continue the upload ASAP$N$Nthis replaces the hash-check with a simple <em>&quot;does this have the same filesize on the server?&quot;</em> so if the file contents are different it will NOT be uploaded$N$Nyou should turn this off when the upload is done, and then &quot;upload&quot; the same files again to let the client verify them">turbo</a>\n' +
' <a id="u2tdate" class="tgl btn ttb" href="#" tt="has no effect unless the turbo button is enabled$N$Nreduces the yolo factor by a tiny amount; checks whether the file timestamps on the server matches yours$N$Nshould <em>theoretically</em> catch most unfinished/corrupted uploads, but is not a substitute for doing a verification pass with turbo disabled afterwards">date-chk</a>\n' +
' </div>\n' +
'</div>\n' +
'<div><h3>key notation</h3><div id="key_notation"></div></div>\n' +
'<div class="fill"><h3>hidden columns</h3><div id="hcols"></div></div>'
);
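
For reference, the turbo tooltip above boils down to this on the wire: the client HEADs the would-be destination and counts the file as already uploaded if the size matches, and, with date-chk enabled, if the Last-Modified is within two seconds of the local mtime (see exec_head() in the up2k.js diff further down). A rough Python rendition of that check, using urllib purely for illustration:

import email.utils
import urllib.error
import urllib.request

def probably_uploaded(url, local_size, local_mtime, datechk=True):
    # HEAD the destination; a 404 means it is definitely not on the server yet
    req = urllib.request.Request(url, method="HEAD")
    try:
        rsp = urllib.request.urlopen(req)
    except urllib.error.URLError:
        return False
    if int(rsp.headers.get("Content-Length") or -1) != local_size:
        return False
    if datechk:
        lm = rsp.headers.get("Last-Modified")
        if not lm:
            return False
        srv_ts = email.utils.parsedate_to_datetime(lm).timestamp()
        return abs(srv_ts - local_mtime) < 2
    return True
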
@@ -237,6 +244,10 @@ var mpl = (function () {
'<a href="#" class="tgl btn" tt="load the next folder and continue">📂 next-folder</a>' +
'</div></div>' +
'<div><h3>tint</h3><div>' +
'<input type="text" id="pb_tint" size="3" value="0" tt="background level (0-100) on the seekbar$Nto make buffering less distracting" />' +
'</div></div>' +
'<div><h3>audio equalizer</h3><div id="audio_eq"></div></div>');
var r = {
@@ -290,6 +301,19 @@ var mpl = (function () {
draw_pb_mode();
}
function set_tint() {
var tint = icfg_get('pb_tint', 0);
if (!tint)
ebi('barbuf').style.removeProperty('background');
else
ebi('barbuf').style.background = 'rgba(126,163,75,' + (tint / 100.0) + ')';
}
ebi('pb_tint').oninput = function (e) {
swrite('pb_tint', this.value);
set_tint();
};
set_tint();
r.pp = function () {
if (!r.os_ctl)
return;
@@ -1528,7 +1552,7 @@ var thegrid = (function () {
setsz();
function gclick(e) {
if (e && e.ctrlKey)
if (e && (e.ctrlKey || e.metaKey))
return true;
var oth = ebi(this.getAttribute('ref')),
@@ -1729,10 +1753,14 @@ document.onkeydown = function (e) {
if (!document.activeElement || document.activeElement != document.body && document.activeElement.nodeName.toLowerCase() != 'a')
return;
if (e.ctrlKey || e.altKey || e.shiftKey || e.metaKey || e.isComposing)
if (e.ctrlKey || e.altKey || e.metaKey || e.isComposing)
return;
var k = e.code + '', pos = -1, n;
if (e.shiftKey && k != 'KeyA' && k != 'KeyD')
return;
var k = (e.code + ''), pos = -1, n;
if (k.indexOf('Digit') === 0)
pos = parseInt(k.slice(-1)) * 0.1;
@@ -1768,6 +1796,14 @@ document.onkeydown = function (e) {
if (k == 'KeyT')
return ebi('thumbs').click();
if (!treectl.hidden && (!e.shiftKey || !thegrid.en)) {
if (k == 'KeyA')
return QS('#twig').click();
if (k == 'KeyD')
return QS('#twobytwo').click();
}
if (thegrid.en) {
if (k == 'KeyS')
return ebi('gridsel').click();
@@ -1850,6 +1886,7 @@ document.onkeydown = function (e) {
}
var search_timeout,
defer_timeout,
search_in_progress = 0;
function ev_search_input() {
@@ -1864,9 +1901,29 @@ document.onkeydown = function (e) {
if (id != "q_raw")
encode_query();
clearTimeout(search_timeout);
if (Date.now() - search_in_progress > 30 * 1000)
set_vq();
clearTimeout(defer_timeout);
defer_timeout = setTimeout(try_search, 2000);
try_search();
}
function try_search() {
if (Date.now() - search_in_progress > 30 * 1000) {
clearTimeout(defer_timeout);
clearTimeout(search_timeout);
search_timeout = setTimeout(do_search, 200);
}
}
function set_vq() {
if (search_in_progress)
return;
var q = ebi('q_raw').value,
vq = ebi('files').getAttribute('q_raw');
srch_msg(false, (q == vq) ? '' : 'search results below are from a previous query:\n ' + (vq ? vq : '(*)'));
}
function encode_query() {
@@ -1936,7 +1993,8 @@ document.onkeydown = function (e) {
xhr.setRequestHeader('Content-Type', 'text/plain');
xhr.onreadystatechange = xhr_search_results;
xhr.ts = Date.now();
xhr.send(JSON.stringify({ "q": ebi('q_raw').value }));
xhr.q_raw = ebi('q_raw').value;
xhr.send(JSON.stringify({ "q": xhr.q_raw }));
}
function xhr_search_results() {
@@ -2007,6 +2065,8 @@ document.onkeydown = function (e) {
ofiles.innerHTML = html.join('\n');
ofiles.setAttribute("ts", this.ts);
ofiles.setAttribute("q_raw", this.q_raw);
set_vq();
mukey.render();
reload_browser();
filecols.set_style(['File Name']);
@@ -2018,6 +2078,7 @@ document.onkeydown = function (e) {
ev(e);
treectl.show();
ebi('files').innerHTML = orig_html;
ebi('files').removeAttribute('q_raw');
orig_html = null;
msel.render();
reload_browser();
@@ -2236,6 +2297,9 @@ var treectl = (function () {
}
function treego(e) {
if (e && (e.ctrlKey || e.metaKey))
return true;
ev(e);
if (this.getAttribute('class') == 'hl' &&
this.previousSibling.textContent == '-') {

View File

@@ -54,7 +54,7 @@
<div>{{ logues[1] }}</div><br />
{%- endif %}
<h2><a href="{{ url_suf }}{{ url_suf and '&amp;' or '?' }}h">control-panel</a></h2>
<h2><a href="/{{ url_suf }}{{ url_suf and '&amp;' or '?' }}h">control-panel</a></h2>
</body>
</html>

View File

@@ -3,9 +3,9 @@
<title>📝🎉 {{ title }}</title> <!-- 📜 -->
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.7">
<link href="/.cpr/md.css" rel="stylesheet">
<link href="/.cpr/md.css?_={{ ts }}" rel="stylesheet">
{%- if edit %}
<link href="/.cpr/md2.css" rel="stylesheet">
<link href="/.cpr/md2.css?_={{ ts }}" rel="stylesheet">
{%- endif %}
</head>
<body>
@@ -146,10 +146,10 @@ var md_opt = {
})();
</script>
<script src="/.cpr/util.js"></script>
<script src="/.cpr/deps/marked.js"></script>
<script src="/.cpr/md.js"></script>
<script src="/.cpr/util.js?_={{ ts }}"></script>
<script src="/.cpr/deps/marked.js?_={{ ts }}"></script>
<script src="/.cpr/md.js?_={{ ts }}"></script>
{%- if edit %}
<script src="/.cpr/md2.js"></script>
<script src="/.cpr/md2.js?_={{ ts }}"></script>
{%- endif %}
</body></html>

View File

@@ -3,9 +3,9 @@
<title>📝🎉 {{ title }}</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.7">
<link href="/.cpr/mde.css" rel="stylesheet">
<link href="/.cpr/deps/mini-fa.css" rel="stylesheet">
<link href="/.cpr/deps/easymde.css" rel="stylesheet">
<link href="/.cpr/mde.css?_={{ ts }}" rel="stylesheet">
<link href="/.cpr/deps/mini-fa.css?_={{ ts }}" rel="stylesheet">
<link href="/.cpr/deps/easymde.css?_={{ ts }}" rel="stylesheet">
</head>
<body>
<div id="mw">
@@ -43,7 +43,7 @@ var lightswitch = (function () {
})();
</script>
<script src="/.cpr/util.js"></script>
<script src="/.cpr/deps/easymde.js"></script>
<script src="/.cpr/mde.js"></script>
<script src="/.cpr/util.js?_={{ ts }}"></script>
<script src="/.cpr/deps/easymde.js?_={{ ts }}"></script>
<script src="/.cpr/mde.js?_={{ ts }}"></script>
</body></html>

View File

@@ -6,7 +6,7 @@
<title>copyparty</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.8">
<link rel="stylesheet" type="text/css" media="screen" href="/.cpr/msg.css">
<link rel="stylesheet" type="text/css" media="screen" href="/.cpr/msg.css?_={{ ts }}">
</head>
<body>

View File

@@ -6,7 +6,7 @@
<title>copyparty</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=0.8">
<link rel="stylesheet" type="text/css" media="screen" href="/.cpr/splash.css">
<link rel="stylesheet" type="text/css" media="screen" href="/.cpr/splash.css?_={{ ts }}">
</head>
<body>
@@ -35,7 +35,7 @@
</table>
</td></tr></table>
<div class="btns">
<a href="{{ avol[0] }}?stack">dump stack</a>
<a href="/?stack">dump stack</a>
</div>
{%- endif %}

View File

@@ -1,7 +1,5 @@
"use strict";
window.onerror = vis_exh;
function goto_up2k() {
if (up2k === false)
@@ -16,17 +14,19 @@ function goto_up2k() {
// chrome requires https to use crypto.subtle,
// usually it's undefined but some chromes throw on invoke
var up2k = null;
var sha_js = window.WebAssembly ? 'hw' : 'ac'; // ff53,c57,sa11
var up2k = null,
sha_js = window.WebAssembly ? 'hw' : 'ac', // ff53,c57,sa11
m = 'will use ' + sha_js + ' instead of native sha512 due to';
try {
var cf = crypto.subtle || crypto.webkitSubtle;
cf.digest('SHA-512', new Uint8Array(1)).then(
function (x) { console.log('sha-ok'); up2k = up2k_init(cf); },
function (x) { console.log('sha-ng:', x); up2k = up2k_init(false); }
function (x) { console.log(m, x); up2k = up2k_init(false); }
);
}
catch (ex) {
console.log('sha-na:', ex);
console.log(m, ex);
try {
up2k = up2k_init(false);
}
@@ -142,7 +142,7 @@ function U2pvis(act, btns) {
this.tail = -1;
this.wsz = 3;
this.addfile = function (entry, sz) {
this.addfile = function (entry, sz, draw) {
this.tab.push({
"hn": entry[0],
"ht": entry[1],
@@ -156,6 +156,9 @@ function U2pvis(act, btns) {
"bd0": 0 // upload start
});
this.ctr["q"]++;
if (!draw)
return;
this.drawcard("q");
if (this.act == "q") {
this.addrow(this.tab.length - 1);
@@ -222,7 +225,7 @@ function U2pvis(act, btns) {
this.hashed = function (fobj) {
var fo = this.tab[fobj.n],
nb = fo.bt * (++fo.nh / fo.cb.length),
p = this.perc(nb, 0, fobj.size, fobj.t1);
p = this.perc(nb, 0, fobj.size, fobj.t_hashing);
fo.hp = '{0}%, {1}, {2} MB/s'.format(
p[0].toFixed(2), p[1], p[2].toFixed(2)
@@ -245,7 +248,7 @@ function U2pvis(act, btns) {
fo.cb[nchunk] = cbd;
fo.bd += delta;
var p = this.perc(fo.bd, fo.bd0, fo.bt, fobj.t3);
var p = this.perc(fo.bd, fo.bd0, fo.bt, fobj.t_uploading);
fo.hp = '{0}%, {1}, {2} MB/s'.format(
p[0].toFixed(2), p[1], p[2].toFixed(2)
);
@@ -256,6 +259,41 @@ function U2pvis(act, btns) {
var obj = ebi('f{0}p'.format(fobj.n)),
o1 = p[0] - 2, o2 = p[0] - 0.1, o3 = p[0];
if (!obj) { //} || true) {
var msg = [
"act", this.act,
"in", fo.in,
"is_act", this.is_act(fo.in),
"head", this.head,
"tail", this.tail,
"nfile", fobj.n,
"name", fobj.name,
"sz", fobj.size,
"bytesDelta", delta,
"bytesDone", fo.bd,
],
m2 = '',
ds = QSA("#u2tab>tbody>tr>td:first-child>a:last-child");
for (var a = 0; a < msg.length; a += 2)
m2 += msg[a] + '=' + msg[a + 1] + ', ';
console.log(m2);
for (var a = 0, aa = ds.length; a < aa; a++) {
var id = ds[a].parentNode.getAttribute('id').slice(1, -1);
console.log("dom %d/%d = [%s] in(%s) is_act(%s) %s",
a, aa, id, this.tab[id].in, this.is_act(fo.in), ds[a].textContent);
}
for (var a = 0, aa = this.tab.length; a < aa; a++)
if (this.is_act(this.tab[a].in))
console.log("tab %d/%d = sz %s", a, aa, this.tab[a].bt);
console.log("a");
throw 42;
}
obj.innerHTML = fo.hp;
obj.style.color = '#fff';
obj.style.background = 'linear-gradient(90deg, #050, #270 ' + o1 + '%, #4b0 ' + o2 + '%, #333 ' + o3 + '%, #333 99%, #777)';
@@ -270,26 +308,35 @@ function U2pvis(act, btns) {
throw 42;
}
//console.log("oldcat %s %d, newcat %s %d, head=%d, tail=%d, file=%d, act.old=%s, act.new=%s, bz_act=%s",
// oldcat, this.ctr[oldcat],
// newcat, this.ctr[newcat],
// this.head, this.tail, nfile,
// this.is_act(oldcat), this.is_act(newcat), bz_act);
fo.in = newcat;
this.ctr[oldcat]--;
this.ctr[newcat]++;
this.drawcard(oldcat);
this.drawcard(newcat);
if (this.is_act(newcat)) {
this.tail++;
this.tail = Math.max(this.tail, nfile + 1);
if (!ebi('f' + nfile))
this.addrow(nfile);
}
else if (this.is_act(oldcat)) {
this.head++;
while (this.head < Math.min(this.tab.length, this.tail) && this.precard[this.tab[this.head].in])
this.head++;
if (!bz_act) {
var tr = ebi("f" + nfile);
tr.parentNode.removeChild(tr);
}
}
if (bz_act) {
else return;
if (bz_act)
this.bzw();
}
};
this.bzw = function () {
@@ -303,7 +350,8 @@ function U2pvis(act, btns) {
while (this.head - first > this.wsz) {
var obj = ebi('f' + (first++));
obj.parentNode.removeChild(obj);
if (obj)
obj.parentNode.removeChild(obj);
}
while (last - this.tail < this.wsz && last < this.tab.length - 2) {
var obj = ebi('f' + (++last));
@@ -336,6 +384,8 @@ function U2pvis(act, btns) {
this.changecard = function (card) {
this.act = card;
this.precard = has(["ok", "ng", "done"], this.act) ? {} : this.act == "bz" ? { "ok": 1, "ng": 1 } : { "ok": 1, "ng": 1, "bz": 1 };
this.postcard = has(["ok", "ng", "done"], this.act) ? { "bz": 1, "q": 1 } : this.act == "bz" ? { "q": 1 } : {};
this.head = -1;
this.tail = -1;
var html = [];
@@ -350,9 +400,23 @@ function U2pvis(act, btns) {
}
}
if (this.head == -1) {
this.head = this.tab.length;
this.tail = this.head - 1;
for (var a = 0; a < this.tab.length; a++) {
var rt = this.tab[a].in;
if (this.precard[rt]) {
this.head = a + 1;
this.tail = a;
}
else if (this.postcard[rt]) {
this.head = a;
this.tail = a - 1;
break;
}
}
}
if (this.head < 0)
this.head = 0;
if (card == "bz") {
for (var a = this.head - 1; a >= this.head - this.wsz && a >= 0; a--) {
html.unshift(this.genrow(a, true).replace(/><td>/, "><td>a "));
@@ -399,6 +463,8 @@ function U2pvis(act, btns) {
that.changecard(newtab);
};
}
this.changecard(this.act);
}
@@ -495,17 +561,21 @@ function up2k_init(subtle) {
ask_up = bcfg_get('ask_up', true),
flag_en = bcfg_get('flag_en', false),
fsearch = bcfg_get('fsearch', false),
turbo = bcfg_get('u2turbo', false),
datechk = bcfg_get('u2tdate', true),
fdom_ctr = 0,
min_filebuf = 0;
var st = {
"files": [],
"todo": {
"head": [],
"hash": [],
"handshake": [],
"upload": []
},
"busy": {
"head": [],
"hash": [],
"handshake": [],
"upload": []
@@ -516,6 +586,15 @@ function up2k_init(subtle) {
}
};
function push_t(arr, t) {
var sort = arr.length && arr[arr.length - 1].n > t.n;
arr.push(t);
if (sort)
arr.sort(function (a, b) {
return a.n < b.n ? -1 : 1;
});
}
var pvis = new U2pvis("bz", '#u2cards');
var bobslice = null;
@@ -559,7 +638,7 @@ function up2k_init(subtle) {
}
else files = e.target.files;
if (!files || files.length == 0)
if (!files || !files.length)
return alert('no files selected??');
more_one_file();
@@ -598,14 +677,50 @@ function up2k_init(subtle) {
}
}
function read_dirs(rd, pf, dirs, good, bad) {
function rd_flatten(pf, dirs) {
var ret = jcp(pf);
for (var a = 0; a < dirs.length; a++)
ret.push(dirs.fullPath || '');
ret.sort();
return ret;
}
var rd_missing_ref = [];
function read_dirs(rd, pf, dirs, good, bad, spins) {
spins = spins || 0;
if (++spins == 5)
rd_missing_ref = rd_flatten(pf, dirs);
if (spins == 200) {
var missing = rd_flatten(pf, dirs),
match = rd_missing_ref.length == missing.length,
aa = match ? missing.length : 0;
missing.sort();
for (var a = 0; a < aa; a++)
if (rd_missing_ref[a] != missing[a])
match = false;
if (match) {
var msg = ['directory iterator got stuck on the following {0} items; good chance your browser is about to spinlock:'.format(missing.length)];
for (var a = 0; a < Math.min(20, missing.length); a++)
msg.push(missing[a]);
alert(msg.join('\n-- '));
dirs = [];
pf = [];
}
spins = 0;
}
if (!dirs.length) {
if (!pf.length)
return gotallfiles(good, bad);
console.log("retry pf, " + pf.length);
setTimeout(function () {
read_dirs(rd, pf, dirs, good, bad);
read_dirs(rd, pf, dirs, good, bad, spins);
}, 50);
return;
}
@@ -626,8 +741,7 @@ function up2k_init(subtle) {
pf.push(name);
dn.file(function (fobj) {
var idx = pf.indexOf(name);
pf.splice(idx, 1);
apop(pf, name);
try {
if (fobj.size > 0) {
good.push([fobj, name]);
@@ -645,12 +759,12 @@ function up2k_init(subtle) {
dirs.shift();
rd = null;
}
return read_dirs(rd, pf, dirs, good, bad);
return read_dirs(rd, pf, dirs, good, bad, spins);
});
}
function gotallfiles(good_files, bad_files) {
if (bad_files.length > 0) {
if (bad_files.length) {
var ntot = bad_files.length + good_files.length,
msg = 'These {0} files (of {1} total) were skipped because they are empty:\n'.format(bad_files.length, ntot);
@@ -670,40 +784,52 @@ function up2k_init(subtle) {
if (ask_up && !fsearch && !confirm(msg.join('\n')))
return;
var seen = {},
evpath = get_evpath(),
draw_each = good_files.length < 50;
for (var a = 0; a < st.files.length; a++)
seen[st.files[a].name + '\n' + st.files[a].size] = 1;
for (var a = 0; a < good_files.length; a++) {
var fobj = good_files[a][0],
now = Date.now(),
lmod = fobj.lastModified || now;
var entry = {
"n": parseInt(st.files.length.toString()),
"n": st.files.length,
"t0": now,
"fobj": fobj,
"name": good_files[a][1],
"size": fobj.size,
"lmod": lmod / 1000,
"purl": get_evpath(),
"purl": evpath,
"done": false,
"hash": []
};
},
key = entry.name + '\n' + entry.size;
var skip = false;
for (var b = 0; b < st.files.length; b++)
if (entry.name == st.files[b].name &&
entry.size == st.files[b].size)
skip = true;
if (skip)
if (seen[key])
continue;
seen[key] = 1;
pvis.addfile([
fsearch ? esc(entry.name) : linksplit(
uricom_dec(entry.purl)[0] + entry.name).join(' '),
'📐 hash',
''
], fobj.size);
], fobj.size, draw_each);
st.files.push(entry);
st.todo.hash.push(entry);
if (turbo)
push_t(st.todo.head, entry);
else
push_t(st.todo.hash, entry);
}
if (!draw_each) {
pvis.drawcard("q");
pvis.changecard(pvis.act);
}
}
ebi('u2btn').addEventListener('drop', gotfile, false);
@@ -739,15 +865,32 @@ function up2k_init(subtle) {
//
function handshakes_permitted() {
var lim = multitask ? 1 : 0;
if (!st.todo.handshake.length)
return true;
if (lim <
st.todo.upload.length +
st.busy.upload.length)
var t = st.todo.handshake[0],
cd = t.cooldown;
if (cd && cd - Date.now() > 0)
return false;
var cd = st.todo.handshake.length ? st.todo.handshake[0].cooldown : 0;
if (cd && cd - Date.now() > 0)
// keepalive or verify
if (t.keepalive ||
t.t_uploaded)
return true;
if (parallel_uploads <
st.busy.handshake.length)
return false;
if (st.busy.handshake.length)
for (var n = t.n - 1; n >= t.n - parallel_uploads && n >= 0; n--)
if (st.files[n].t_uploading)
return false;
if ((multitask ? 1 : 0) <
st.todo.upload.length +
st.busy.upload.length)
return false;
return true;
@@ -781,14 +924,17 @@ function up2k_init(subtle) {
clearTimeout(tto);
running = true;
while (true) {
var is_busy = 0 !=
st.todo.hash.length +
st.todo.handshake.length +
st.todo.upload.length +
st.busy.hash.length +
st.busy.handshake.length +
st.busy.upload.length;
while (window['vis_exh']) {
var now = Date.now(),
is_busy = 0 !=
st.todo.head.length +
st.todo.hash.length +
st.todo.handshake.length +
st.todo.upload.length +
st.busy.head.length +
st.busy.hash.length +
st.busy.handshake.length +
st.busy.upload.length;
if (was_busy != is_busy) {
was_busy = is_busy;
@@ -799,7 +945,6 @@ function up2k_init(subtle) {
if (flag) {
if (is_busy) {
var now = Date.now();
flag.take(now);
if (!flag.ours)
return defer();
@@ -811,43 +956,52 @@ function up2k_init(subtle) {
var mou_ikkai = false;
if (st.busy.handshake.length > 0 &&
st.busy.handshake[0].busied < Date.now() - 30 * 1000
if (st.busy.handshake.length &&
st.busy.handshake[0].t_busied < now - 30 * 1000
) {
console.log("retrying stuck handshake");
var t = st.busy.handshake.shift();
st.todo.handshake.unshift(t);
}
if (st.todo.handshake.length > 0 &&
st.busy.handshake.length == 0 && (
st.todo.handshake[0].t4 || (
handshakes_permitted() &&
st.busy.upload.length < parallel_uploads
)
)
) {
exec_handshake();
var nprev = -1;
for (var a = 0; a < st.todo.upload.length; a++) {
var nf = st.todo.upload[a].nfile;
if (nprev == nf)
continue;
nprev = nf;
var t = st.files[nf];
if (now - t.t_busied > 1000 * 30 &&
now - t.t_handshake > 1000 * (21600 - 1800)
) {
apop(st.todo.handshake, t);
st.todo.handshake.unshift(t);
t.keepalive = true;
}
}
if (st.todo.head.length &&
st.busy.head.length < parallel_uploads) {
exec_head();
mou_ikkai = true;
}
if (handshakes_permitted() &&
st.todo.handshake.length > 0 &&
st.busy.handshake.length == 0 &&
st.busy.upload.length < parallel_uploads) {
st.todo.handshake.length) {
exec_handshake();
mou_ikkai = true;
}
if (st.todo.upload.length > 0 &&
if (st.todo.upload.length &&
st.busy.upload.length < parallel_uploads) {
exec_upload();
mou_ikkai = true;
}
if (hashing_permitted() &&
st.todo.hash.length > 0 &&
st.busy.hash.length == 0) {
st.todo.hash.length &&
!st.busy.hash.length) {
exec_hash();
mou_ikkai = true;
}
@@ -954,7 +1108,7 @@ function up2k_init(subtle) {
bpend += cdr - car;
reader.onload = function (e) {
function orz(e) {
if (!min_filebuf && nch == 1) {
min_filebuf = 1;
var td = Date.now() - t0;
@@ -964,9 +1118,30 @@ function up2k_init(subtle) {
}
}
hash_calc(nch, e.target.result);
}
reader.onload = function (e) {
try { orz(e); } catch (ex) { vis_exh(ex + '', '', '', '', ex); }
};
reader.onerror = function () {
alert('y o u b r o k e i t\nerror: ' + reader.error);
var err = reader.error + '';
var handled = false;
if (err.indexOf('NotReadableError') !== -1 || // win10-chrome defender
err.indexOf('NotFoundError') !== -1 // macos-firefox permissions
) {
pvis.seth(t.n, 1, 'OS-error');
pvis.seth(t.n, 2, err);
handled = true;
}
if (handled) {
pvis.move(t.n, 'ng');
apop(st.busy.hash, t);
st.bytes.uploaded += t.size;
return tasker();
}
alert('y o u b r o k e i t\nfile: ' + t.name + '\nerror: ' + err);
};
reader.readAsArrayBuffer(
bobslice.call(t.fobj, car, cdr));
@@ -994,15 +1169,15 @@ function up2k_init(subtle) {
t.hash.push(hashtab[a]);
}
t.t2 = Date.now();
t.t_hashed = Date.now();
if (t.n == 0 && window.location.hash == '#dbg') {
var spd = (t.size / ((t.t2 - t.t1) / 1000.)) / (1024 * 1024.);
alert('{0} ms, {1} MB/s\n'.format(t.t2 - t.t1, spd.toFixed(3)) + t.hash.join('\n'));
var spd = (t.size / ((t.t_hashed - t.t_hashing) / 1000.)) / (1024 * 1024.);
alert('{0} ms, {1} MB/s\n'.format(t.t_hashed - t.t_hashing, spd.toFixed(3)) + t.hash.join('\n'));
}
pvis.seth(t.n, 2, 'hashing done');
pvis.seth(t.n, 1, '📦 wait');
st.busy.hash.splice(st.busy.hash.indexOf(t), 1);
apop(st.busy.hash, t);
st.todo.handshake.push(t);
tasker();
};
@@ -1025,10 +1200,57 @@ function up2k_init(subtle) {
}, 1);
};
t.t1 = Date.now();
t.t_hashing = Date.now();
segm_next();
}
/////
////
/// head
//
function exec_head() {
var t = st.todo.head.shift();
st.busy.head.push(t);
var xhr = new XMLHttpRequest();
xhr.onerror = function () {
console.log('head onerror, retrying', t);
apop(st.busy.head, t);
st.todo.head.unshift(t);
tasker();
};
function orz(e) {
var ok = false;
if (xhr.status == 200) {
var srv_sz = xhr.getResponseHeader('Content-Length'),
srv_ts = xhr.getResponseHeader('Last-Modified');
ok = t.size == srv_sz;
if (ok && datechk) {
srv_ts = new Date(srv_ts) / 1000;
ok = Math.abs(srv_ts - t.lmod) < 2;
}
}
apop(st.busy.head, t);
if (!ok)
return push_t(st.todo.hash, t);
t.done = true;
st.bytes.hashed += t.size;
st.bytes.uploaded += t.size;
pvis.seth(t.n, 1, 'YOLO');
pvis.seth(t.n, 2, "turbo'd");
pvis.move(t.n, 'ok');
};
xhr.onload = function (e) {
try { orz(e); } catch (ex) { vis_exh(ex + '', '', '', '', ex); }
};
xhr.open('HEAD', t.purl + t.name, true);
xhr.send();
}
/////
////
/// handshake
@@ -1036,30 +1258,41 @@ function up2k_init(subtle) {
function exec_handshake() {
var t = st.todo.handshake.shift(),
keepalive = t.keepalive,
me = Date.now();
st.busy.handshake.push(t);
t.busied = me;
t.keepalive = undefined;
t.t_busied = me;
if (keepalive)
console.log("sending keepalive handshake", t);
var xhr = new XMLHttpRequest();
xhr.onerror = function () {
if (t.busied != me) {
if (t.t_busied != me) {
console.log('zombie handshake onerror,', t);
return;
}
console.log('handshake onerror, retrying');
st.busy.handshake.splice(st.busy.handshake.indexOf(t), 1);
console.log('handshake onerror, retrying', t);
apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
t.keepalive = keepalive;
tasker();
};
xhr.onload = function (e) {
if (t.busied != me) {
function orz(e) {
if (t.t_busied != me) {
console.log('zombie handshake onload,', t);
return;
}
if (xhr.status == 200) {
var response = JSON.parse(xhr.responseText);
t.t_handshake = Date.now();
if (keepalive) {
apop(st.busy.handshake, t);
return;
}
var response = JSON.parse(xhr.responseText);
if (!response.name) {
var msg = '',
smsg = '';
@@ -1083,7 +1316,7 @@ function up2k_init(subtle) {
pvis.seth(t.n, 2, msg);
pvis.seth(t.n, 1, smsg);
pvis.move(t.n, smsg == '404' ? 'ng' : 'ok');
st.busy.handshake.splice(st.busy.handshake.indexOf(t), 1);
apop(st.busy.handshake, t);
st.bytes.uploaded += t.size;
t.done = true;
tasker();
@@ -1092,6 +1325,7 @@ function up2k_init(subtle) {
if (response.name !== t.name) {
// file exists; server renamed us
console.log("server-rename [" + t.name + "] to [" + response.name + "]");
t.name = response.name;
pvis.seth(t.n, 0, linksplit(t.purl + t.name).join(' '));
}
@@ -1124,31 +1358,41 @@ function up2k_init(subtle) {
var done = true,
msg = '&#x1f3b7;&#x1f41b;';
if (t.postlist.length > 0) {
if (t.postlist.length) {
var arr = st.todo.upload,
sort = arr.length && arr[arr.length - 1].nfile > t.n;
for (var a = 0; a < t.postlist.length; a++)
st.todo.upload.push({
arr.push({
'nfile': t.n,
'npart': t.postlist[a]
});
msg = 'uploading';
done = false;
if (sort)
arr.sort(function (a, b) {
return a.nfile < b.nfile ? -1 :
/* */ a.nfile > b.nfile ? 1 :
a.npart < b.npart ? -1 : 1;
});
}
pvis.seth(t.n, 1, msg);
st.busy.handshake.splice(st.busy.handshake.indexOf(t), 1);
apop(st.busy.handshake, t);
if (done) {
t.done = true;
st.bytes.uploaded += t.size - t.bytes_uploaded;
var spd1 = (t.size / ((t.t2 - t.t1) / 1000.)) / (1024 * 1024.),
spd2 = (t.size / ((t.t4 - t.t3) / 1000.)) / (1024 * 1024.);
var spd1 = (t.size / ((t.t_hashed - t.t_hashing) / 1000.)) / (1024 * 1024.),
spd2 = (t.size / ((t.t_uploaded - t.t_uploading) / 1000.)) / (1024 * 1024.);
pvis.seth(t.n, 2, 'hash {0}, up {1} MB/s'.format(
spd1.toFixed(2), spd2.toFixed(2)));
pvis.move(t.n, 'ok');
}
else t.t4 = undefined;
else t.t_uploaded = undefined;
tasker();
}
@@ -1167,7 +1411,7 @@ function up2k_init(subtle) {
var penalty = rsp.replace(/.*rate-limit /, "").split(' ')[0];
console.log("rate-limit: " + penalty);
t.cooldown = Date.now() + parseFloat(penalty) * 1000;
st.busy.handshake.splice(st.busy.handshake.indexOf(t), 1);
apop(st.busy.handshake, t);
st.todo.handshake.unshift(t);
return;
}
@@ -1186,7 +1430,7 @@ function up2k_init(subtle) {
pvis.seth(t.n, 2, err);
pvis.move(t.n, 'ng');
st.busy.handshake.splice(st.busy.handshake.indexOf(t), 1);
apop(st.busy.handshake, t);
tasker();
return;
}
@@ -1196,6 +1440,9 @@ function up2k_init(subtle) {
(xhr.responseText && xhr.responseText) ||
"no further information"));
}
}
xhr.onload = function (e) {
try { orz(e); } catch (ex) { vis_exh(ex + '', '', '', '', ex); }
};
var req = {
@@ -1224,8 +1471,8 @@ function up2k_init(subtle) {
var npart = upt.npart,
t = st.files[upt.nfile];
if (!t.t3)
t.t3 = Date.now();
if (!t.t_uploading)
t.t_uploading = Date.now();
pvis.seth(t.n, 1, "🚀 send");
@@ -1236,40 +1483,56 @@ function up2k_init(subtle) {
if (cdr >= t.size)
cdr = t.size;
var xhr = new XMLHttpRequest();
xhr.upload.onprogress = function (xev) {
pvis.prog(t, npart, xev.loaded);
};
xhr.onload = function (xev) {
function orz(xhr) {
var txt = ((xhr.response && xhr.response.err) || xhr.responseText) + '';
if (xhr.status == 200) {
pvis.prog(t, npart, cdr - car);
st.bytes.uploaded += cdr - car;
t.bytes_uploaded += cdr - car;
st.busy.upload.splice(st.busy.upload.indexOf(upt), 1);
t.postlist.splice(t.postlist.indexOf(npart), 1);
if (t.postlist.length == 0) {
t.t4 = Date.now();
pvis.seth(t.n, 1, 'verifying');
st.todo.handshake.unshift(t);
}
tasker();
}
else
else if (txt.indexOf('already got that') !== -1) {
console.log("ignoring dupe-segment error", t);
}
else {
alert("server broke; cu-err {0} on file [{1}]:\n".format(
xhr.status, t.name) + (
(xhr.response && xhr.response.err) ||
(xhr.responseText && xhr.responseText) ||
"no further information"));
};
xhr.open('POST', t.purl + 'chunkpit.php', true);
xhr.setRequestHeader("X-Up2k-Hash", t.hash[npart]);
xhr.setRequestHeader("X-Up2k-Wark", t.wark);
xhr.setRequestHeader('Content-Type', 'application/octet-stream');
if (xhr.overrideMimeType)
xhr.overrideMimeType('Content-Type', 'application/octet-stream');
xhr.status, t.name) + (txt || "no further information"));
return;
}
apop(st.busy.upload, upt);
apop(t.postlist, npart);
if (!t.postlist.length) {
t.t_uploaded = Date.now();
pvis.seth(t.n, 1, 'verifying');
st.todo.handshake.unshift(t);
}
tasker();
}
function do_send() {
var xhr = new XMLHttpRequest();
xhr.upload.onprogress = function (xev) {
pvis.prog(t, npart, xev.loaded);
};
xhr.onload = function (xev) {
try { orz(xhr); } catch (ex) { vis_exh(ex + '', '', '', '', ex); }
};
xhr.onerror = function (xev) {
if (!window['vis_exh'])
return;
xhr.responseType = 'text';
xhr.send(bobslice.call(t.fobj, car, cdr));
console.log('chunkpit onerror, retrying', t);
do_send();
};
xhr.open('POST', t.purl + 'chunkpit.php', true);
xhr.setRequestHeader("X-Up2k-Hash", t.hash[npart]);
xhr.setRequestHeader("X-Up2k-Wark", t.wark);
xhr.setRequestHeader('Content-Type', 'application/octet-stream');
if (xhr.overrideMimeType)
xhr.overrideMimeType('Content-Type', 'application/octet-stream');
xhr.responseType = 'text';
xhr.send(bobslice.call(t.fobj, car, cdr));
}
do_send();
}
/////
@@ -1309,6 +1572,17 @@ function up2k_init(subtle) {
}
tt.init();
function bumpthread2(e) {
if (e.ctrlKey || e.altKey || e.metaKey || e.isComposing)
return;
if (e.code == 'ArrowUp')
bumpthread(1);
if (e.code == 'ArrowDown')
bumpthread(-1);
}
function bumpthread(dir) {
try {
dir.stopPropagation();
@@ -1319,7 +1593,7 @@ function up2k_init(subtle) {
if (dir.target) {
clmod(obj, 'err', 1);
var v = Math.floor(parseInt(obj.value));
if (v < 1 || v > 8 || v !== v)
if (v < 0 || v > 64 || v !== v)
return;
parallel_uploads = v;
@@ -1330,11 +1604,11 @@ function up2k_init(subtle) {
parallel_uploads += dir;
if (parallel_uploads < 1)
parallel_uploads = 1;
if (parallel_uploads < 0)
parallel_uploads = 0;
if (parallel_uploads > 8)
parallel_uploads = 8;
if (parallel_uploads > 16)
parallel_uploads = 16;
obj.value = parallel_uploads;
bumpthread({ "target": 1 })
@@ -1354,6 +1628,35 @@ function up2k_init(subtle) {
set_fsearch(!fsearch);
}
function tgl_turbo() {
turbo = !turbo;
bcfg_set('u2turbo', turbo);
draw_turbo();
}
function tgl_datechk() {
datechk = !datechk;
bcfg_set('u2tdate', datechk);
}
function draw_turbo() {
var msgu = '<p class="warn">WARNING: turbo enabled, <span>&nbsp;client may not detect and resume incomplete uploads; see turbo-button tooltip</span></p>',
msgs = '<p class="warn">WARNING: turbo enabled, <span>&nbsp;search may give false-positives; see turbo-button tooltip</span></p>',
msg = fsearch ? msgs : msgu,
omsg = fsearch ? msgu : msgs,
html = ebi('u2foot').innerHTML,
ohtml = html;
if (turbo && html.indexOf(msg) === -1)
html = html.replace(omsg, '') + msg;
else if (!turbo)
html = html.replace(msgu, '').replace(msgs, '');
if (html !== ohtml)
ebi('u2foot').innerHTML = html;
}
draw_turbo();
function set_fsearch(new_state) {
var fixed = false;
@@ -1391,6 +1694,7 @@ function up2k_init(subtle) {
}
catch (ex) { }
draw_turbo();
onresize();
}
@@ -1430,10 +1734,13 @@ function up2k_init(subtle) {
bumpthread(-1);
};
ebi('nthread').onkeydown = bumpthread2;
ebi('nthread').addEventListener('input', bumpthread, false);
ebi('multitask').addEventListener('click', tgl_multitask, false);
ebi('ask_up').addEventListener('click', tgl_ask_up, false);
ebi('flag_en').addEventListener('click', tgl_flag_en, false);
ebi('u2turbo').addEventListener('click', tgl_turbo, false);
ebi('u2tdate').addEventListener('click', tgl_datechk, false);
var o = ebi('fsearch');
if (o)
o.addEventListener('click', tgl_fsearch, false);
@@ -1443,7 +1750,10 @@ function up2k_init(subtle) {
nodes[a].addEventListener('touchend', nop, false);
set_fsearch();
bumpthread({ "target": 1 })
bumpthread({ "target": 1 });
if (parallel_uploads < 1)
bumpthread(1);
return { "init_deps": init_deps, "set_fsearch": set_fsearch }
}

View File

@@ -215,9 +215,31 @@
color: #fff;
font-style: italic;
}
#u2foot .warn {
font-size: 1.3em;
padding: .5em .8em;
margin: 1em -.6em;
color: #f74;
background: #322;
border: 1px solid #633;
border-width: .1em 0;
text-align: center;
}
#u2foot .warn span {
color: #f86;
}
html.light #u2foot .warn {
color: #b00;
background: #fca;
border-color: #f70;
}
html.light #u2foot .warn span {
color: #930;
}
#u2foot span {
color: #999;
font-size: .9em;
font-weight: normal;
}
#u2footfoot {
margin-bottom: -1em;

View File

@@ -11,16 +11,6 @@ var is_touch = 'ontouchstart' in window,
// error handler for mobile devices
function hcroak(msg) {
document.body.innerHTML = msg;
window.onerror = undefined;
throw 'fatal_err';
}
function croak(msg) {
document.body.textContent = msg;
window.onerror = undefined;
throw msg;
}
function esc(txt) {
return txt.replace(/[&"<>]/g, function (c) {
return {
@@ -32,9 +22,12 @@ function esc(txt) {
});
}
function vis_exh(msg, url, lineNo, columnNo, error) {
if (!window.onerror)
return;
window.onerror = undefined;
window['vis_exh'] = null;
var html = ['<h1>you hit a bug!</h1><p>please screenshot this error and send me a copy arigathanks gozaimuch (ed/irc.rizon.net or ed#2644)</p><p>',
var html = ['<h1>you hit a bug!</h1><p>please send me a screenshot arigathanks gozaimuch: <code>ed/irc.rizon.net</code> or <code>ed#2644</code><br />&nbsp; (and if you can, press F12 and include the "Console" tab in the screenshot too)</p><p>',
esc(String(msg)), '</p><p>', esc(url + ' @' + lineNo + ':' + columnNo), '</p>'];
if (error) {
@@ -44,9 +37,13 @@ function vis_exh(msg, url, lineNo, columnNo, error) {
html.push('<h2>' + find[a] + '</h2>' +
esc(String(error[find[a]])).replace(/\n/g, '<br />\n'));
}
document.body.style.fontSize = '0.8em';
document.body.style.padding = '0 1em 1em 1em';
hcroak(html.join('\n'));
document.body.innerHTML = html.join('\n');
var s = mknod('style');
s.innerHTML = 'body{background:#333;color:#ddd;font-family:sans-serif;font-size:0.8em;padding:0 1em 1em 1em} code{color:#bf7;background:#222;padding:.1em;margin:.2em;font-size:1.1em;font-family:monospace,monospace} *{line-height:1.5em}';
document.head.appendChild(s);
throw 'fatal_err';
}
@@ -392,6 +389,18 @@ function has(haystack, needle) {
}
function apop(arr, v) {
var ofs = arr.indexOf(v);
if (ofs !== -1)
arr.splice(ofs, 1);
}
function jcp(obj) {
return JSON.parse(JSON.stringify(obj));
}
function sread(key) {
if (window.localStorage)
return localStorage.getItem(key);
@@ -505,8 +514,10 @@ var tt = (function () {
var pos = this.getBoundingClientRect(),
left = pos.left < window.innerWidth / 2,
top = pos.top < window.innerHeight / 2;
top = pos.top < window.innerHeight / 2,
big = this.className.indexOf(' ttb') !== -1;
clmod(r.tt, 'b', big);
r.tt.style.top = top ? pos.bottom + 'px' : 'auto';
r.tt.style.bottom = top ? 'auto' : (window.innerHeight - pos.top) + 'px';
r.tt.style.left = left ? pos.left + 'px' : 'auto';

docs/hls.html Normal file
View File

@@ -0,0 +1,51 @@
<!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<title>hls-test</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
</head><body>
<video id="vid" controls></video>
<script src="hls.light.js"></script>
<script>
var video = document.getElementById('vid');
var hls = new Hls({
debug: true,
autoStartLoad: false
});
hls.loadSource('live/v.m3u8');
hls.attachMedia(video);
hls.on(Hls.Events.MANIFEST_PARSED, function() {
hls.startLoad(0);
});
hls.on(Hls.Events.MEDIA_ATTACHED, function() {
video.muted = true;
video.play();
});
/*
general good news:
- doesn't need fixed-length segments; ok to let x264 pick optimal keyframes and slice on those
- hls.js polls the m3u8 for new segments, scales the duration accordingly, seeking works great
- the sfx will grow by 66 KiB since that's how small hls.js can get, wait that's not good
# vod, creates m3u8 at the end, fixed keyframes, v bad
ffmpeg -hide_banner -threads 0 -flags -global_header -i ..\CowboyBebopMovie-OP1.webm -vf scale=1280:-4,format=yuv420p -ac 2 -c:a libopus -b:a 128k -c:v libx264 -preset slow -crf 24 -maxrate:v 5M -bufsize:v 10M -g 120 -keyint_min 120 -sc_threshold 0 -hls_time 4 -hls_playlist_type vod -hls_segment_filename v%05d.ts v.m3u8
# live, updates m3u8 as it goes, dynamic keyframes, streamable with hls.js
ffmpeg -hide_banner -threads 0 -flags -global_header -i ..\..\CowboyBebopMovie-OP1.webm -vf scale=1280:-4,format=yuv420p -ac 2 -c:a libopus -b:a 128k -c:v libx264 -preset slow -crf 24 -maxrate:v 5M -bufsize:v 10M -f segment -segment_list v.m3u8 -segment_format mpegts -segment_list_flags live v%05d.ts
# fmp4 (fragmented mp4), doesn't work with hls.js, gets duration 149:07:51 (536871s), probably the tkhd/mdhd 0xffffffff (timebase 8000? ok)
ffmpeg -re -hide_banner -threads 0 -flags +cgop -i ..\..\CowboyBebopMovie-OP1.webm -vf scale=1280:-4,format=yuv420p -ac 2 -c:a libopus -b:a 128k -c:v libx264 -preset slow -crf 24 -maxrate:v 5M -bufsize:v 10M -f segment -segment_list v.m3u8 -segment_format fmp4 -segment_list_flags live v%05d.mp4
# try 2, works, uses tempfiles for m3u8 updates, good, 6% smaller
ffmpeg -re -hide_banner -threads 0 -flags +cgop -i ..\..\CowboyBebopMovie-OP1.webm -vf scale=1280:-4,format=yuv420p -ac 2 -c:a libopus -b:a 128k -c:v libx264 -preset slow -crf 24 -maxrate:v 5M -bufsize:v 10M -f hls -hls_segment_type fmp4 -hls_list_size 0 -hls_segment_filename v%05d.mp4 v.m3u8
more notes
- adding -hls_flags single_file makes duration wack during playback (for both fmp4 and ts), ok once finalized and refreshed, gives no size reduction anyways
- bebop op has good keyframe spacing for testing hls.js, in particular it hops one seg back and immediately resumes if it hits eof with the explicit hls.startLoad(0); otherwise it jumps into the middle of a seg and becomes art
- can probably -c:v copy most of the time, is there a way to check for cgop? todo
*/
</script>
</body></html>

View File

@@ -6,10 +6,10 @@ import re, os, sys, time, shutil, signal, threading, tarfile, hashlib, platform,
import subprocess as sp
"""
run me with any version of python, i will unpack and run copyparty
pls don't edit this file with a text editor,
it breaks the compressed stuff at the end
(but please don't edit this file with a text editor
since that would probably corrupt the binary stuff at the end)
run me with any version of python, i will unpack and run copyparty
there's zero binaries! just plaintext python scripts all the way down
so you can easily unpack the archive and inspect it for shady stuff

View File

@@ -30,6 +30,7 @@ class Cfg(Namespace):
c=c,
rproxy=0,
ed=False,
nw=False,
no_zip=False,
no_scandir=False,
no_sendfile=True,

View File

@@ -17,7 +17,7 @@ from copyparty import util
class Cfg(Namespace):
def __init__(self, a=[], v=[], c=None):
ex = {k: False for k in "e2d e2ds e2dsa e2t e2ts e2tsr".split()}
ex = {k: False for k in "nw e2d e2ds e2dsa e2t e2ts e2tsr".split()}
ex2 = {
"mtp": [],
"mte": "a",

View File

@@ -108,6 +108,9 @@ class VHttpSrv(object):
aliases = ["splash", "browser", "browser2", "msg", "md", "mde"]
self.j2 = {x: J2_FILES for x in aliases}
def cachebuster(self):
return "a"
class VHttpConn(object):
def __init__(self, args, asrv, log, buf):
@@ -121,6 +124,7 @@ class VHttpConn(object):
self.log_src = "a"
self.lf_url = None
self.hsrv = VHttpSrv()
self.nreq = 0
self.nbyte = 0
self.workload = 0
self.ico = None