mirror of https://github.com/9001/copyparty.git
synced 2025-10-23 16:14:10 +00:00

Compare commits
42 Commits
SHA1:
726a98100b  2f021a0c2b  eb05cb6c6e  7530af95da  8399e95bda  3b4dfe326f
2e787a254e  f888bed1a6  d865e9f35a  fc7fe70f66  5aff39d2b2  d1be37a04a
b0fd8bf7d4  b9cf8f3973  4588f11613  1a618c3c97  d500a51d97  734e9d3874
bd5cfc2f1b  89f88ee78c  b2ae14695a  19d86b44d9  85be62e38b  80f3d90200
0249fa6e75  2d0696e048  ff32ec515e  a6935b0293  63eb08ba9f  e5b67d2b3a
9e10af6885  42bc9115d2  0a569ce413  9a16639a61  57953c68c6  088d08963f
7bc8196821  7715299dd3  b8ac9b7994  98e7d8f728  e7fd871ffe  14aab62f32

4  .gitignore  vendored

@@ -20,3 +20,7 @@ sfx/
# derived
copyparty/web/deps/
srv/

# state/logs
up.*.txt
.hist/

36  README.md

@@ -53,6 +53,7 @@ turn your phone or raspi into a portable file server with resumable uploads/down
* [database location](#database-location) - in-volume (`.hist/up2k.db`, default) or somewhere else
* [metadata from audio files](#metadata-from-audio-files) - set `-e2t` to index tags on upload
* [file parser plugins](#file-parser-plugins) - provide custom parsers to index additional tags
* [upload events](#upload-events) - trigger a script/program on each upload
* [complete examples](#complete-examples)
* [browser support](#browser-support) - TLDR: yes
* [client examples](#client-examples) - interact with copyparty using non-browser clients

@@ -595,12 +596,14 @@ note:
* `e2tsr` is probably always overkill, since `e2ds`/`e2dsa` would pick up any file modifications and `e2ts` would then reindex those, unless there is a new copyparty version with new parsers and the release note says otherwise
* the rescan button in the admin panel has no effect unless the volume has `-e2ds` or higher

to save some time, you can choose to only index filename/path/size/last-modified (and not the hash of the file contents) by setting `--no-hash` or the volume-flag `:c,dhash`, this has the following consequences:
to save some time, you can provide a regex pattern for filepaths to only index by filename/path/size/last-modified (and not the hash of the file contents) by setting `--no-hash \.iso$` or the volume-flag `:c,nohash=\.iso$`, this has the following consequences:
* initial indexing is way faster, especially when the volume is on a network disk
* makes it impossible to [file-search](#file-search)
* if someone uploads the same file contents, the upload will not be detected as a dupe, so it will not get symlinked or rejected

if you set `--no-hash`, you can enable hashing for specific volumes using flag `:c,ehash`
similarly, you can fully ignore files/folders using `--no-idx [...]` and `:c,noidx=\.iso$`

if you set `--no-hash [...]` globally, you can enable hashing for specific volumes using flag `:c,nohash=`

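a minimal sketch (not from the README) of how these patterns appear to get applied, judging from the `copyparty/up2k.py` hunk further down this page -- each pattern is compiled with `re.compile` and tested with `.search()` against the absolute filesystem path of each file during a scan:

```python
import re

reh = re.compile(r"\.iso$")  # the pattern from  :c,nohash=\.iso$
for path in ["/mnt/nas/iso/knoppix.iso", "/mnt/nas/music/song.flac"]:
    # hypothetical paths; .search() means a match anywhere in the path counts
    print(path, "->", "skip hashing" if reh.search(path) else "hash contents")
```
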
## upload rules

@@ -699,6 +702,25 @@ copyparty can invoke external programs to collect additional metadata for files
* `-mtp arch,built,ver,orig=an,eexe,edll,~/bin/exe.py` runs `~/bin/exe.py` to get properties about windows-binaries only if file is not audio (`an`) and file extension is exe or dll


## upload events

trigger a script/program on each upload like so:

```
-v /mnt/inc:inc:w:c,mte=+a1:c,mtp=a1=ad,/usr/bin/notify-send
```

so filesystem location `/mnt/inc` shared at `/inc`, write-only for everyone, appending `a1` to the list of tags to index, and using `/usr/bin/notify-send` to "provide" that tag

that'll run the command `notify-send` with the path to the uploaded file as the first and only argument (so on linux it'll show a notification on-screen)

note that it will only trigger on new unique files, not dupes

and it will occupy the parsing threads, so fork anything expensive, or if you want to intentionally queue/singlethread you can combine it with `--no-mtag-mt`

if this becomes popular maybe there should be a less janky way to do it actually

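for reference, a hypothetical event-handler in the same style as the bundled `-mtp` parsers (like `bin/mtag/cksum.py` below): copyparty runs the program with the uploaded file's path as the only argument, and whatever it prints to stdout becomes the value of the tag it was registered for (`a1` in the example above):

```python
#!/usr/bin/env python3
# hypothetical upload-event handler; name and tag value are illustrative only
import sys

path = sys.argv[1]  # absolute path of the freshly uploaded file
# ...do something expensive here, or fire off a notification...
print("ok")  # indexed as the value of tag a1
```
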
## complete examples

* read-only music server with bpm and key scanning

@@ -769,6 +791,14 @@ interact with copyparty using non-browser clients
* `chunk(){ curl -b cppwd=wark -T- http://127.0.0.1:3923/;}`
  `chunk <movie.mkv`

* bash: when curl and wget is not available or too boring
  * `(printf 'PUT /junk?pw=wark HTTP/1.1\r\n\r\n'; cat movie.mkv) | nc 127.0.0.1 3923`
  * `(printf 'PUT / HTTP/1.1\r\n\r\n'; cat movie.mkv) >/dev/tcp/127.0.0.1/3923`

* python: [up2k.py](https://github.com/9001/copyparty/blob/hovudstraum/bin/up2k.py) is a command-line up2k client [(webm)](https://ocv.me/stuff/u2cli.webm)
  * file uploads, file-search, autoresume of aborted/broken uploads
  * see [./bin/README.md#up2kpy](bin/README.md#up2kpy)

* FUSE: mount a copyparty server as a local filesystem
  * cross-platform python client available in [./bin/](bin/)
  * [rclone](https://rclone.org/) as client can give ~5x performance, see [./docs/rclone.md](docs/rclone.md)

@@ -823,7 +853,7 @@ below are some tweaks roughly ordered by usefulness:
* `-q` disables logging and can help a bunch, even when combined with `-lo` to redirect logs to file
* `--http-only` or `--https-only` (unless you want to support both protocols) will reduce the delay before a new connection is established
* `--hist` pointing to a fast location (ssd) will make directory listings and searches faster when `-e2d` or `-e2t` is set
* `--no-hash` when indexing a network-disk if you don't care about the actual filehashes and only want the names/tags searchable
* `--no-hash .` when indexing a network-disk if you don't care about the actual filehashes and only want the names/tags searchable
* `-j` enables multiprocessing (actual multithreading) and can make copyparty perform better in cpu-intensive workloads, for example:
  * huge amount of short-lived connections
  * really heavy traffic (downloads/uploads)

bin/README.md

@@ -1,3 +1,11 @@
# [`up2k.py`](up2k.py)
* command-line up2k client [(webm)](https://ocv.me/stuff/u2cli.webm)
* file uploads, file-search, autoresume of aborted/broken uploads
* faster than browsers
* early beta, if something breaks just restart it



# [`copyparty-fuse.py`](copyparty-fuse.py)
* mount a copyparty server as a local filesystem (read-only)
* **supports Windows!** -- expect `194 MiB/s` sequential read

@@ -47,6 +55,7 @@ you could replace winfsp with [dokan](https://github.com/dokan-dev/dokany/releas
* copyparty can Popen programs like these during file indexing to collect additional metadata



# [`dbtool.py`](dbtool.py)
upgrade utility which can show db info and help transfer data between databases, for example when a new version of copyparty is incompatible with the old DB and automatically rebuilds the DB from scratch, but you have some really expensive `-mtp` parsers and want to copy over the tags from the old db

@@ -63,6 +72,7 @@ cd /mnt/nas/music/.hist
```



# [`prisonparty.sh`](prisonparty.sh)
* run copyparty in a chroot, preventing any accidental file access
* creates bindmounts for /bin, /lib, and so on, see `sysdirs=`

bin/mtag/README.md

@@ -10,6 +10,7 @@ some of these rely on libraries which are not MIT-compatible

these do not have any problematic dependencies:

* [cksum.py](./cksum.py) computes various checksums
* [exe.py](./exe.py) grabs metadata from .exe and .dll files (example for retrieving multiple tags with one parser)
* [wget.py](./wget.py) lets you download files by POSTing URLs to copyparty

89  bin/mtag/cksum.py  (new executable file)

@@ -0,0 +1,89 @@
#!/usr/bin/env python3

import sys
import json
import zlib
import struct
import base64
import hashlib

try:
    from copyparty.util import fsenc
except:

    def fsenc(p):
        return p


"""
calculates various checksums for uploads,
usage: -mtp crc32,md5,sha1,sha256b=bin/mtag/cksum.py
"""


def main():
    config = "crc32 md5 md5b sha1 sha1b sha256 sha256b sha512/240 sha512b/240"
    # b suffix = base64 encoded
    # slash = truncate to n bits

    known = {
        "md5": hashlib.md5,
        "sha1": hashlib.sha1,
        "sha256": hashlib.sha256,
        "sha512": hashlib.sha512,
    }
    config = config.split()
    hashers = {
        k: v()
        for k, v in known.items()
        if k in [x.split("/")[0].rstrip("b") for x in config]
    }
    crc32 = 0 if "crc32" in config else None

    with open(fsenc(sys.argv[1]), "rb", 512 * 1024) as f:
        while True:
            buf = f.read(64 * 1024)
            if not buf:
                break

            for x in hashers.values():
                x.update(buf)

            if crc32 is not None:
                crc32 = zlib.crc32(buf, crc32)

    ret = {}
    for s in config:
        alg = s.split("/")[0]
        b64 = alg.endswith("b")
        alg = alg.rstrip("b")
        if alg in hashers:
            v = hashers[alg].digest()
        elif alg == "crc32":
            v = crc32
            if v < 0:
                v &= 2 ** 32 - 1
            v = struct.pack(">L", v)
        else:
            raise Exception("what is {}".format(s))

        if "/" in s:
            v = v[: int(int(s.split("/")[1]) / 8)]

        if b64:
            v = base64.b64encode(v).decode("ascii").rstrip("=")
        else:
            try:
                v = v.hex()
            except:
                import binascii

                v = binascii.hexlify(v)

        ret[s] = v

    print(json.dumps(ret, indent=4))


if __name__ == "__main__":
    main()
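
a hedged sketch (filename is hypothetical) of sanity-checking the parser standalone; it takes the target path as its only argument, same as when copyparty invokes it, and prints one JSON key per token in the config string above:

```python
import json, subprocess

out = subprocess.check_output(["python3", "bin/mtag/cksum.py", "some.bin"])
print(sorted(json.loads(out)))  # crc32, md5, md5b, ... sha512/240, sha512b/240
```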

723  bin/up2k.py  (new executable file)

@@ -0,0 +1,723 @@
#!/usr/bin/env python3
from __future__ import print_function, unicode_literals

"""
up2k.py: upload to copyparty
2021-10-04, v0.7, ed <irc.rizon.net>, MIT-Licensed
https://github.com/9001/copyparty/blob/hovudstraum/bin/up2k.py

- dependencies: requests
- supports python 2.6, 2.7, and 3.3 through 3.10

- almost zero error-handling
- but if something breaks just try again and it'll autoresume
"""

import os
import sys
import stat
import math
import time
import atexit
import signal
import base64
import hashlib
import argparse
import platform
import threading
import requests
import datetime


# from copyparty/__init__.py
PY2 = sys.version_info[0] == 2
if PY2:
    from Queue import Queue

    sys.dont_write_bytecode = True
    bytes = str
else:
    from queue import Queue

    unicode = str

VT100 = platform.system() != "Windows"


req_ses = requests.Session()


class File(object):
    """an up2k upload task; represents a single file"""

    def __init__(self, top, rel, size, lmod):
        self.top = top  # type: bytes
        self.rel = rel.replace(b"\\", b"/")  # type: bytes
        self.size = size  # type: int
        self.lmod = lmod  # type: float

        self.abs = os.path.join(top, rel)  # type: bytes
        self.name = self.rel.split(b"/")[-1].decode("utf-8", "replace")  # type: str

        # set by get_hashlist
        self.cids = []  # type: list[tuple[str, int, int]]  # [ hash, ofs, sz ]
        self.kchunks = {}  # type: dict[str, tuple[int, int]]  # hash: [ ofs, sz ]

        # set by handshake
        self.ucids = []  # type: list[str]  # chunks which need to be uploaded
        self.wark = None  # type: str
        self.url = None  # type: str

        # set by upload
        self.up_b = 0  # type: int
        self.up_c = 0  # type: int

        # m = "size({}) lmod({}) top({}) rel({}) abs({}) name({})\n"
        # eprint(m.format(self.size, self.lmod, self.top, self.rel, self.abs, self.name))


class FileSlice(object):
    """file-like object providing a fixed window into a file"""

    def __init__(self, file, cid):
        # type: (File, str) -> FileSlice

        self.car, self.len = file.kchunks[cid]
        self.cdr = self.car + self.len
        self.ofs = 0  # type: int
        self.f = open(file.abs, "rb", 512 * 1024)
        self.f.seek(self.car)

        # https://stackoverflow.com/questions/4359495/what-is-exactly-a-file-like-object-in-python
        # IOBase, RawIOBase, BufferedIOBase
        funs = "close closed __enter__ __exit__ __iter__ isatty __next__ readable seekable writable"
        try:
            for fun in funs.split():
                setattr(self, fun, getattr(self.f, fun))
        except:
            pass  # py27 probably

    def tell(self):
        return self.ofs

    def seek(self, ofs, wh=0):
        if wh == 1:
            ofs = self.ofs + ofs
        elif wh == 2:
            ofs = self.len + ofs  # provided ofs is negative

        if ofs < 0:
            ofs = 0
        elif ofs >= self.len:
            ofs = self.len - 1

        self.ofs = ofs
        self.f.seek(self.car + ofs)

    def read(self, sz):
        sz = min(sz, self.len - self.ofs)
        ret = self.f.read(sz)
        self.ofs += len(ret)
        return ret


def eprint(*a, **ka):
    ka["file"] = sys.stderr
    ka["end"] = ""
    if not PY2:
        ka["flush"] = True

    print(*a, **ka)
    if PY2:
        sys.stderr.flush()


def termsize():
    import os

    env = os.environ

    def ioctl_GWINSZ(fd):
        try:
            import fcntl, termios, struct, os

            cr = struct.unpack("hh", fcntl.ioctl(fd, termios.TIOCGWINSZ, "1234"))
        except:
            return
        return cr

    cr = ioctl_GWINSZ(0) or ioctl_GWINSZ(1) or ioctl_GWINSZ(2)
    if not cr:
        try:
            fd = os.open(os.ctermid(), os.O_RDONLY)
            cr = ioctl_GWINSZ(fd)
            os.close(fd)
        except:
            pass
    if not cr:
        try:
            cr = (env["LINES"], env["COLUMNS"])
        except:
            cr = (25, 80)
    return int(cr[1]), int(cr[0])


class CTermsize(object):
    def __init__(self):
        self.ev = False
        self.margin = None
        self.g = None
        self.w, self.h = termsize()

        try:
            signal.signal(signal.SIGWINCH, self.ev_sig)
        except:
            return

        thr = threading.Thread(target=self.worker)
        thr.daemon = True
        thr.start()

    def worker(self):
        while True:
            time.sleep(0.5)
            if not self.ev:
                continue

            self.ev = False
            self.w, self.h = termsize()

            if self.margin is not None:
                self.scroll_region(self.margin)

    def ev_sig(self, *a, **ka):
        self.ev = True

    def scroll_region(self, margin):
        self.margin = margin
        if margin is None:
            self.g = None
            eprint("\033[s\033[r\033[u")
        else:
            self.g = 1 + self.h - margin
            m = "{0}\033[{1}A".format("\n" * margin, margin)
            eprint("{0}\033[s\033[1;{1}r\033[u".format(m, self.g - 1))


ss = CTermsize()


def statdir(top):
    """non-recursive listing of directory contents, along with stat() info"""
    if hasattr(os, "scandir"):
        with os.scandir(top) as dh:
            for fh in dh:
                yield [os.path.join(top, fh.name), fh.stat()]
    else:
        for name in os.listdir(top):
            abspath = os.path.join(top, name)
            yield [abspath, os.stat(abspath)]


def walkdir(top):
    """recursive statdir"""
    for ap, inf in sorted(statdir(top)):
        if stat.S_ISDIR(inf.st_mode):
            for x in walkdir(ap):
                yield x
        else:
            yield ap, inf


def walkdirs(tops):
    """recursive statdir for a list of tops, yields [top, relpath, stat]"""
    for top in tops:
        if os.path.isdir(top):
            for ap, inf in walkdir(top):
                yield top, ap[len(top) + 1 :], inf
        else:
            sep = "{0}".format(os.sep).encode("ascii")
            d, n = top.rsplit(sep, 1)
            yield d, n, os.stat(top)


# from copyparty/util.py
def humansize(sz, terse=False):
    """picks a sensible unit for the given extent"""
    for unit in ["B", "KiB", "MiB", "GiB", "TiB"]:
        if sz < 1024:
            break

        sz /= 1024.0

    ret = " ".join([str(sz)[:4].rstrip("."), unit])

    if not terse:
        return ret

    return ret.replace("iB", "").replace(" ", "")


# from copyparty/up2k.py
def up2k_chunksize(filesize):
    """gives the correct chunksize for up2k hashing"""
    chunksize = 1024 * 1024
    stepsize = 512 * 1024
    while True:
        for mul in [1, 2]:
            nchunks = math.ceil(filesize * 1.0 / chunksize)
            if nchunks <= 256 or chunksize >= 32 * 1024 * 1024:
                return chunksize

            chunksize += stepsize
            stepsize *= mul
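
# (worked example, not from the original file: per the rules above, a 100 MiB
#  file gets 1 MiB chunks, a 1 GiB file gets 4 MiB chunks (256 chunks total),
#  and anything past 6 GiB caps out at the 32 MiB maximum chunksize)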


# mostly from copyparty/up2k.py
def get_hashlist(file, pcb):
    # type: (File, any) -> None
    """generates the up2k hashlist from file contents, inserts it into `file`"""

    chunk_sz = up2k_chunksize(file.size)
    file_rem = file.size
    file_ofs = 0
    ret = []
    with open(file.abs, "rb", 512 * 1024) as f:
        while file_rem > 0:
            hashobj = hashlib.sha512()
            chunk_sz = chunk_rem = min(chunk_sz, file_rem)
            while chunk_rem > 0:
                buf = f.read(min(chunk_rem, 64 * 1024))
                if not buf:
                    raise Exception("EOF at " + str(f.tell()))

                hashobj.update(buf)
                chunk_rem -= len(buf)

            digest = hashobj.digest()[:33]
            digest = base64.urlsafe_b64encode(digest).decode("utf-8")

            ret.append([digest, file_ofs, chunk_sz])
            file_ofs += chunk_sz
            file_rem -= chunk_sz

            if pcb:
                pcb(file, file_ofs)

    file.cids = ret
    file.kchunks = {}
    for k, v1, v2 in ret:
        file.kchunks[k] = [v1, v2]


def handshake(req_ses, url, file, pw, search):
    # type: (requests.Session, str, File, any, bool) -> List[str]
    """
    performs a handshake with the server; reply is:
        if search, a list of search results
        otherwise, a list of chunks to upload
    """

    req = {
        "hash": [x[0] for x in file.cids],
        "name": file.name,
        "lmod": file.lmod,
        "size": file.size,
    }
    if search:
        req["srch"] = 1

    headers = {"Content-Type": "text/plain"}  # wtf ed
    if pw:
        headers["Cookie"] = "=".join(["cppwd", pw])

    if file.url:
        url = file.url
    elif b"/" in file.rel:
        url += file.rel.rsplit(b"/", 1)[0].decode("utf-8", "replace")

    while True:
        try:
            r = req_ses.post(url, headers=headers, json=req)
            break
        except:
            eprint("handshake failed, retry...\n")
            time.sleep(1)

    try:
        r = r.json()
    except:
        raise Exception(r.text)

    if search:
        return r["hits"]

    try:
        pre, url = url.split("://")
        pre += "://"
    except:
        pre = ""

    file.url = pre + url.split("/")[0] + r["purl"]
    file.name = r["name"]
    file.wark = r["wark"]

    return r["hash"]
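
# (illustrative shapes, not from the original file -- inferred from the fields
#  handshake() reads and writes above:
#    request:  {"hash": [...chunk hashes...], "name": ..., "lmod": ..., "size": ..., "srch": 1 if searching}
#    reply:    {"purl": ..., "name": ..., "wark": ..., "hash": [...chunks the server still needs...]}
#    search reply: {"hits": [{"rp": ...}, ...]})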

def upload(req_ses, file, cid, pw):
    # type: (requests.Session, File, str, any) -> None
    """upload one specific chunk, `cid` (a chunk-hash)"""

    headers = {
        "X-Up2k-Hash": cid,
        "X-Up2k-Wark": file.wark,
        "Content-Type": "application/octet-stream",
    }
    if pw:
        headers["Cookie"] = "=".join(["cppwd", pw])

    f = FileSlice(file, cid)
    try:
        r = req_ses.post(file.url, headers=headers, data=f)
        if not r:
            raise Exception(repr(r))

        _ = r.content
    finally:
        f.f.close()


class Daemon(threading.Thread):
    def __init__(self, *a, **ka):
        threading.Thread.__init__(self, *a, **ka)
        self.daemon = True


class Ctl(object):
    """
    this will be the coordinator which runs everything in parallel
    (hashing, handshakes, uploads) but right now it's p dumb
    """

    def __init__(self, ar):
        self.ar = ar
        ar.files = [
            os.path.abspath(os.path.realpath(x.encode("utf-8"))) for x in ar.files
        ]
        ar.url = ar.url.rstrip("/") + "/"
        if "://" not in ar.url:
            ar.url = "http://" + ar.url

        eprint("\nscanning {0} locations\n".format(len(ar.files)))

        nfiles = 0
        nbytes = 0
        for _, _, inf in walkdirs(ar.files):
            nfiles += 1
            nbytes += inf.st_size

        eprint("found {0} files, {1}\n\n".format(nfiles, humansize(nbytes)))
        self.nfiles = nfiles
        self.nbytes = nbytes

        if ar.td:
            req_ses.verify = False
        if ar.te:
            req_ses.verify = ar.te

        self.filegen = walkdirs(ar.files)
        if ar.safe:
            self.safe()
        else:
            self.fancy()

    def safe(self):
        """minimal basic slow boring fallback codepath"""
        search = self.ar.s
        for nf, (top, rel, inf) in enumerate(self.filegen):
            file = File(top, rel, inf.st_size, inf.st_mtime)
            upath = file.abs.decode("utf-8", "replace")

            print("{0} {1}\n hash...".format(self.nfiles - nf, upath))
            get_hashlist(file, None)

            while True:
                print(" hs...")
                hs = handshake(req_ses, self.ar.url, file, self.ar.a, search)
                if search:
                    if hs:
                        for hit in hs:
                            print(" found: {0}{1}".format(self.ar.url, hit["rp"]))
                    else:
                        print(" NOT found")
                    break

                file.ucids = hs
                if not hs:
                    break

                print("{0} {1}".format(self.nfiles - nf, upath))
                ncs = len(hs)
                for nc, cid in enumerate(hs):
                    print(" {0} up {1}".format(ncs - nc, cid))
                    upload(req_ses, file, cid, self.ar.a)

            print(" ok!")

    def fancy(self):
        self.hash_f = 0
        self.hash_c = 0
        self.hash_b = 0
        self.up_f = 0
        self.up_c = 0
        self.up_b = 0
        self.up_br = 0
        self.hasher_busy = 1
        self.handshaker_busy = 0
        self.uploader_busy = 0

        self.t0 = time.time()
        self.t0_up = None
        self.spd = None

        self.mutex = threading.Lock()
        self.q_handshake = Queue()  # type: Queue[File]
        self.q_recheck = Queue()  # type: Queue[File]  # partial upload exists [...]
        self.q_upload = Queue()  # type: Queue[tuple[File, str]]

        self.st_hash = [None, "(idle, starting...)"]  # type: tuple[File, int]
        self.st_up = [None, "(idle, starting...)"]  # type: tuple[File, int]
        if VT100:
            atexit.register(self.cleanup_vt100)
            ss.scroll_region(3)

        Daemon(target=self.hasher).start()
        for _ in range(self.ar.j):
            Daemon(target=self.handshaker).start()
            Daemon(target=self.uploader).start()

        idles = 0
        while idles < 3:
            time.sleep(0.07)
            with self.mutex:
                if (
                    self.q_handshake.empty()
                    and self.q_upload.empty()
                    and not self.hasher_busy
                    and not self.handshaker_busy
                    and not self.uploader_busy
                ):
                    idles += 1
                else:
                    idles = 0

            if VT100:
                maxlen = ss.w - len(str(self.nfiles)) - 14
                txt = "\033[s\033[{0}H".format(ss.g)
                for y, k, st, f in [
                    [0, "hash", self.st_hash, self.hash_f],
                    [1, "send", self.st_up, self.up_f],
                ]:
                    txt += "\033[{0}H{1}:".format(ss.g + y, k)
                    file, arg = st
                    if not file:
                        txt += " {0}\033[K".format(arg)
                    else:
                        if y:
                            p = 100 * file.up_b / file.size
                        else:
                            p = 100 * arg / file.size

                        name = file.abs.decode("utf-8", "replace")[-maxlen:]
                        if "/" in name:
                            name = "\033[36m{0}\033[0m/{1}".format(*name.rsplit("/", 1))

                        m = "{0:6.1f}% {1} {2}\033[K"
                        txt += m.format(p, self.nfiles - f, name)

                txt += "\033[{0}H ".format(ss.g + 2)
            else:
                txt = " "

            if not self.up_br:
                spd = self.hash_b / (time.time() - self.t0)
                eta = (self.nbytes - self.hash_b) / (spd + 1)
            else:
                spd = self.up_br / (time.time() - self.t0_up)
                spd = self.spd = (self.spd or spd) * 0.9 + spd * 0.1
                eta = (self.nbytes - self.up_b) / (spd + 1)

            spd = humansize(spd)
            eta = str(datetime.timedelta(seconds=int(eta)))
            left = humansize(self.nbytes - self.up_b)
            tail = "\033[K\033[u" if VT100 else "\r"

            m = "eta: {0} @ {1}/s, {2} left".format(eta, spd, left)
            eprint(txt + "\033]0;{0}\033\\\r{1}{2}".format(m, m, tail))

    def cleanup_vt100(self):
        ss.scroll_region(None)
        eprint("\033[J\033]0;\033\\")

    def cb_hasher(self, file, ofs):
        self.st_hash = [file, ofs]

    def hasher(self):
        for top, rel, inf in self.filegen:
            file = File(top, rel, inf.st_size, inf.st_mtime)
            while True:
                with self.mutex:
                    if (
                        self.hash_b - self.up_b < 1024 * 1024 * 128
                        and self.hash_c - self.up_c < 64
                        and (
                            not self.ar.nh
                            or (
                                self.q_upload.empty()
                                and self.q_handshake.empty()
                                and not self.uploader_busy
                            )
                        )
                    ):
                        break

                time.sleep(0.05)

            get_hashlist(file, self.cb_hasher)
            with self.mutex:
                self.hash_f += 1
                self.hash_c += len(file.cids)
                self.hash_b += file.size

            self.q_handshake.put(file)

        self.hasher_busy = 0
        self.st_hash = [None, "(finished)"]

    def handshaker(self):
        search = self.ar.s
        q = self.q_handshake
        while True:
            file = q.get()
            if not file:
                if q == self.q_handshake:
                    q = self.q_recheck
                    q.put(None)
                    continue

                self.q_upload.put(None)
                break

            with self.mutex:
                self.handshaker_busy += 1

            upath = file.abs.decode("utf-8", "replace")

            try:
                hs = handshake(req_ses, self.ar.url, file, self.ar.a, search)
            except Exception as ex:
                if q == self.q_handshake and "<pre>partial upload exists" in str(ex):
                    self.q_recheck.put(file)
                    hs = []
                else:
                    raise

            if search:
                if hs:
                    for hit in hs:
                        m = "found: {0}\n {1}{2}\n"
                        print(m.format(upath, self.ar.url, hit["rp"]), end="")
                else:
                    print("NOT found: {0}\n".format(upath), end="")

                with self.mutex:
                    self.up_f += 1
                    self.up_c += len(file.cids)
                    self.up_b += file.size
                    self.handshaker_busy -= 1

                continue

            with self.mutex:
                if not hs:
                    # all chunks done
                    self.up_f += 1
                    self.up_c += len(file.cids) - file.up_c
                    self.up_b += file.size - file.up_b

                if hs and file.up_c:
                    # some chunks failed
                    self.up_c -= len(hs)
                    file.up_c -= len(hs)
                    for cid in hs:
                        sz = file.kchunks[cid][1]
                        self.up_b -= sz
                        file.up_b -= sz

                file.ucids = hs
                self.handshaker_busy -= 1

            if not hs:
                print("uploaded {0}".format(upath))
            for cid in hs:
                self.q_upload.put([file, cid])

    def uploader(self):
        while True:
            task = self.q_upload.get()
            if not task:
                self.st_up = [None, "(finished)"]
                break

            with self.mutex:
                self.uploader_busy += 1
                self.t0_up = self.t0_up or time.time()

            file, cid = task
            try:
                upload(req_ses, file, cid, self.ar.a)
            except:
                eprint("upload failed, retry...\n")
                pass  # handshake will fix it

            with self.mutex:
                sz = file.kchunks[cid][1]
                file.ucids = [x for x in file.ucids if x != cid]
                if not file.ucids:
                    self.q_handshake.put(file)

                self.st_up = [file, cid]
                file.up_b += sz
                self.up_b += sz
                self.up_br += sz
                file.up_c += 1
                self.up_c += 1
                self.uploader_busy -= 1


def main():
    time.strptime("19970815", "%Y%m%d")  # python#7980
    if not VT100:
        os.system("rem")  # enables colors

    # fmt: off
    ap = app = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    ap.add_argument("url", type=unicode, help="server url, including destination folder")
    ap.add_argument("files", type=unicode, nargs="+", help="files and/or folders to process")
    ap.add_argument("-a", metavar="PASSWORD", help="password")
    ap.add_argument("-s", action="store_true", help="file-search (disables upload)")
    ap = app.add_argument_group("performance tweaks")
    ap.add_argument("-j", type=int, metavar="THREADS", default=4, help="parallel connections")
    ap.add_argument("-nh", action="store_true", help="disable hashing while uploading")
    ap.add_argument("--safe", action="store_true", help="use simple fallback approach")
    ap = app.add_argument_group("tls")
    ap.add_argument("-te", metavar="PEM_FILE", help="certificate to expect/verify")
    ap.add_argument("-td", action="store_true", help="disable certificate check")
    # fmt: on

    Ctl(app.parse_args())


if __name__ == "__main__":
    main()
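
going by the argparse definitions at the end of the file, a typical invocation would look something like this (server url, password and paths are hypothetical):

```python
import subprocess

# upload two folders over 8 connections, then file-search a single iso
subprocess.run(["python3", "bin/up2k.py", "-a", "hunter2", "-j", "8",
                "http://127.0.0.1:3923/inc/", "./movies", "./music"])
subprocess.run(["python3", "bin/up2k.py", "-s", "http://127.0.0.1:3923/", "some.iso"])
```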

0  bin/up2k.sh  (Executable file → Normal file)

copyparty/__main__.py

@@ -276,7 +276,8 @@ def run_argparse(argv, formatter):
\033[36me2d\033[35m sets -e2d (all -e2* args can be set using ce2* volflags)
\033[36md2t\033[35m disables metadata collection, overrides -e2t*
\033[36md2d\033[35m disables all database stuff, overrides -e2*
\033[36mdhash\033[35m disables file hashing on initial scans, also ehash
\033[36mnohash=\.iso$\033[35m skips hashing file contents if path matches *.iso
\033[36mnoidx=\.iso$\033[35m fully ignores the contents at paths matching *.iso
\033[36mhist=/tmp/cdb\033[35m puts thumbnails and indexes at that location
\033[36mscan=60\033[35m scan for new files every 60sec, same as --re-maxage

@@ -378,6 +379,7 @@ def run_argparse(argv, formatter):
    ap2.add_argument("--no-dot-ren", action="store_true", help="disallow renaming dotfiles; makes it impossible to make something a dotfile")
    ap2.add_argument("--no-logues", action="store_true", help="disable rendering .prologue/.epilogue.html into directory listings")
    ap2.add_argument("--no-readme", action="store_true", help="disable rendering readme.md into directory listings")
    ap2.add_argument("--vague-403", action="store_true", help="send 404 instead of 403 (security through ambiguity, very enterprise)")

    ap2 = ap.add_argument_group('logging options')
    ap2.add_argument("-q", action="store_true", help="quiet")

@@ -411,7 +413,8 @@
    ap2.add_argument("-e2ds", action="store_true", help="enable up2k db-scanner, sets -e2d")
    ap2.add_argument("-e2dsa", action="store_true", help="scan all folders (for search), sets -e2ds")
    ap2.add_argument("--hist", metavar="PATH", type=u, help="where to store volume data (db, thumbs)")
    ap2.add_argument("--no-hash", action="store_true", help="disable hashing during e2ds folder scans")
    ap2.add_argument("--no-hash", metavar="PTN", type=u, help="regex: disable hashing of matching paths during e2ds folder scans")
    ap2.add_argument("--no-idx", metavar="PTN", type=u, help="regex: disable indexing of matching paths during e2ds folder scans")
    ap2.add_argument("--re-int", metavar="SEC", type=int, default=30, help="disk rescan check interval")
    ap2.add_argument("--re-maxage", metavar="SEC", type=int, default=0, help="disk rescan volume interval, 0=off, can be set per-volume with the 'scan' volflag")
    ap2.add_argument("--srch-time", metavar="SEC", type=int, default=30, help="search deadline")

copyparty/__init__.py

@@ -1,8 +1,8 @@
# coding: utf-8

VERSION = (1, 0, 7)
VERSION = (1, 0, 10)
CODENAME = "sufficient"
BUILD_DT = (2021, 9, 26)
BUILD_DT = (2021, 10, 12)

S_VERSION = ".".join(map(str, VERSION))
S_BUILD_DT = "{0:04d}-{1:02d}-{2:02d}".format(*BUILD_DT)

copyparty/authsrv.py

@@ -356,7 +356,7 @@ class VFS(object):
        if not dbv:
            return self, vrem

        vrem = [self.vpath[len(dbv.vpath) + 1 :], vrem]
        vrem = [self.vpath[len(dbv.vpath) :].lstrip("/"), vrem]
        vrem = "/".join([x for x in vrem if x])
        return dbv, vrem

@@ -865,9 +865,14 @@ class AuthSrv(object):
            if self.args.e2d or "e2ds" in vol.flags:
                vol.flags["e2d"] = True

            if self.args.no_hash:
                if "ehash" not in vol.flags:
                    vol.flags["dhash"] = True
            for ga, vf in [["no_hash", "nohash"], ["no_idx", "noidx"]]:
                if vf in vol.flags:
                    ptn = vol.flags.pop(vf)
                else:
                    ptn = getattr(self.args, ga)

                if ptn:
                    vol.flags[vf] = re.compile(ptn)

            for k in ["e2t", "e2ts", "e2tsr"]:
                if getattr(self.args, k):

@@ -880,6 +885,10 @@ class AuthSrv(object):
            # default tag cfgs if unset
            if "mte" not in vol.flags:
                vol.flags["mte"] = self.args.mte
            elif vol.flags["mte"].startswith("+"):
                vol.flags["mte"] = ",".join(
                    x for x in [self.args.mte, vol.flags["mte"][1:]] if x
                )
            if "mth" not in vol.flags:
                vol.flags["mth"] = self.args.mth

copyparty/bos/bos.py

@@ -25,14 +25,14 @@ def lstat(p):
def makedirs(name, mode=0o755, exist_ok=True):
    bname = fsenc(name)
    try:
        os.makedirs(bname, mode=mode)
        os.makedirs(bname, mode)
    except:
        if not exist_ok or not os.path.isdir(bname):
            raise


def mkdir(p, mode=0o755):
    return os.mkdir(fsenc(p), mode=mode)
    return os.mkdir(fsenc(p), mode)


def rename(src, dst):

copyparty/httpcli.py

@@ -10,7 +10,6 @@ import json
import base64
import string
import socket
import ctypes
from datetime import datetime
from operator import itemgetter
import calendar

@@ -20,6 +19,11 @@ try:
except:
    pass

try:
    import ctypes
except:
    pass

from .__init__ import E, PY2, WINDOWS, ANYWIN, unicode
from .util import *  # noqa # pylint: disable=unused-wildcard-import
from .bos import bos

@@ -94,6 +98,7 @@ class HttpCli(object):
    def run(self):
        """returns true if connection can be reused"""
        self.keepalive = False
        self.is_https = False
        self.headers = {}
        self.hint = None
        try:

@@ -131,6 +136,7 @@ class HttpCli(object):

        v = self.headers.get("connection", "").lower()
        self.keepalive = not v.startswith("close") and self.http_ver != "HTTP/1.0"
        self.is_https = (self.headers.get("x-forwarded-proto", "").lower() == "https" or self.tls)

        n = self.args.rproxy
        if n:

@@ -389,7 +395,7 @@ class HttpCli(object):
        if not self.can_read and not self.can_write and not self.can_get:
            if self.vpath:
                self.log("inaccessible: [{}]".format(self.vpath))
                return self.tx_404()
                return self.tx_404(True)

            self.uparam["h"] = False

@@ -1129,7 +1135,7 @@ class HttpCli(object):
        # using SHA-512/224, optionally SHA-512/256 = :64
        jpart = {
            "url": "{}://{}/{}".format(
                "https" if self.tls else "http",
                "https" if self.is_https else "http",
                self.headers.get("host", "copyparty"),
                vpath + vsuf,
            ),

@@ -1565,7 +1571,7 @@ class HttpCli(object):

        if not self.can_write:
            if "edit" in self.uparam or "edit2" in self.uparam:
                return self.tx_404()
                return self.tx_404(True)

        tpl = "mde" if "edit2" in self.uparam else "md"
        html_path = os.path.join(E.mod, "web", "{}.html".format(tpl))

@@ -1667,8 +1673,14 @@ class HttpCli(object):
        self.reply(html.encode("utf-8"))
        return True

    def tx_404(self):
        m = '<h1>404 not found ┐( ´ -`)┌</h1><p>or maybe you don\'t have access -- try logging in or <a href="/?h">go home</a></p>'
    def tx_404(self, is_403=False):
        if self.args.vague_403:
            m = '<h1>404 not found ┐( ´ -`)┌</h1><p>or maybe you don\'t have access -- try logging in or <a href="/?h">go home</a></p>'
        elif is_403:
            m = '<h1>403 forbiddena ~┻━┻</h1><p>you\'ll have to log in or <a href="/?h">go home</a></p>'
        else:
            m = '<h1>404 not found ┐( ´ -`)┌</h1><p><a href="/?h">go home</a></p>'

        html = self.j2("splash", this=self, qvpath=quotep(self.vpath), msg=m)
        self.reply(html.encode("utf-8"), status=404)
        return True

@@ -1895,7 +1907,7 @@ class HttpCli(object):
            return self.tx_file(abspath)

        elif is_dir and not self.can_read and not self.can_write:
            return self.tx_404()
            return self.tx_404(True)

        srv_info = []

@@ -1909,11 +1921,14 @@ class HttpCli(object):
        # some fuses misbehave
        if not self.args.nid:
            if WINDOWS:
                bfree = ctypes.c_ulonglong(0)
                ctypes.windll.kernel32.GetDiskFreeSpaceExW(
                    ctypes.c_wchar_p(abspath), None, None, ctypes.pointer(bfree)
                )
                srv_info.append(humansize(bfree.value) + " free")
                try:
                    bfree = ctypes.c_ulonglong(0)
                    ctypes.windll.kernel32.GetDiskFreeSpaceExW(
                        ctypes.c_wchar_p(abspath), None, None, ctypes.pointer(bfree)
                    )
                    srv_info.append(humansize(bfree.value) + " free")
                except:
                    pass
            else:
                sv = os.statvfs(fsenc(abspath))
                free = humansize(sv.f_frsize * sv.f_bfree, True)

@@ -2000,7 +2015,7 @@ class HttpCli(object):
            return True

        if not stat.S_ISDIR(st.st_mode):
            return self.tx_404()
            return self.tx_404(True)

        if "zip" in self.uparam or "tar" in self.uparam:
            raise Pebkac(403)

copyparty/mtag.py

@@ -471,7 +471,10 @@ class MTag(object):
        ret = {}
        for tagname, mp in parsers.items():
            try:
                cmd = [sys.executable, mp.bin, abspath]
                cmd = [mp.bin, abspath]
                if mp.bin.endswith(".py"):
                    cmd = [sys.executable] + cmd

                args = {"env": env, "timeout": mp.timeout}

                if WINDOWS:

copyparty/u2idx.py

@@ -6,6 +6,7 @@ import os
import time
import threading
from datetime import datetime
from operator import itemgetter

from .__init__ import ANYWIN, unicode
from .util import absreal, s3dec, Pebkac, min_ex, gen_filekey, quotep

@@ -292,9 +293,13 @@ class U2idx(object):
        # undupe hits from multiple metadata keys
        if len(ret) > 1:
            ret = [ret[0]] + [
                y for x, y in zip(ret[:-1], ret[1:]) if x["rp"] != y["rp"]
                y
                for x, y in zip(ret[:-1], ret[1:])
                if x["rp"].split("?")[0] != y["rp"].split("?")[0]
            ]

        ret.sort(key=itemgetter("rp"))

        return ret, list(taglist.keys())

    def terminator(self, identifier, done_flag):
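
in isolation, the new dedupe logic above behaves like this minimal sketch (sample hits are hypothetical) -- adjacent duplicates are dropped by comparing each hit's path against the previous one while ignoring any `?filekey` suffix:

```python
hits = [{"rp": "a/b.flac?k=xy"}, {"rp": "a/b.flac?k=zw"}, {"rp": "c.flac"}]
ret = [hits[0]] + [
    y
    for x, y in zip(hits[:-1], hits[1:])
    if x["rp"].split("?")[0] != y["rp"].split("?")[0]
]
print([h["rp"] for h in ret])  # ['a/b.flac?k=xy', 'c.flac']
```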

copyparty/up2k.py

@@ -29,6 +29,8 @@ from .util import (
    atomic_move,
    quotep,
    vsplit,
    w8b64enc,
    w8b64dec,
    s3enc,
    s3dec,
    rmdirs,

@@ -464,7 +466,8 @@ class Up2k(object):
    def _build_file_index(self, vol, all_vols):
        do_vac = False
        top = vol.realpath
        nohash = "dhash" in vol.flags
        rei = vol.flags.get("noidx")
        reh = vol.flags.get("nohash")
        with self.mutex:
            cur, _ = self.register_vpath(top, vol.flags)

@@ -479,37 +482,54 @@ class Up2k(object):
        if WINDOWS:
            excl = [x.replace("/", "\\") for x in excl]

        n_add = self._build_dir(dbw, top, set(excl), top, nohash, [])
        n_rm = self._drop_lost(dbw[0], top)
        n_add = n_rm = 0
        try:
            n_add = self._build_dir(dbw, top, set(excl), top, rei, reh, [])
            n_rm = self._drop_lost(dbw[0], top)
        except:
            m = "failed to index volume [{}]:\n{}"
            self.log(m.format(top, min_ex()), c=1)

        if dbw[1]:
            self.log("commit {} new files".format(dbw[1]))
            dbw[0].connection.commit()

        dbw[0].connection.commit()

        return True, n_add or n_rm or do_vac

    def _build_dir(self, dbw, top, excl, cdir, nohash, seen):
    def _build_dir(self, dbw, top, excl, cdir, rei, reh, seen):
        rcdir = absreal(cdir)  # a bit expensive but worth
        if rcdir in seen:
            m = "bailing from symlink loop,\n prev: {}\n curr: {}\n from: {}"
            self.log(m.format(seen[-1], rcdir, cdir), 3)
            return 0

        seen = seen + [cdir]
        seen = seen + [rcdir]
        self.pp.msg = "a{} {}".format(self.pp.n, cdir)
        histpath = self.asrv.vfs.histtab[top]
        ret = 0
        seen_files = {}
        g = statdir(self.log_func, not self.args.no_scandir, False, cdir)
        for iname, inf in sorted(g):
            abspath = os.path.join(cdir, iname)
            if rei and rei.search(abspath):
                continue

            nohash = reh.search(abspath) if reh else False
            lmod = int(inf.st_mtime)
            sz = inf.st_size
            if stat.S_ISDIR(inf.st_mode):
                if abspath in excl or abspath == histpath:
                    continue
                # self.log(" dir: {}".format(abspath))
                ret += self._build_dir(dbw, top, excl, abspath, nohash, seen)
                try:
                    ret += self._build_dir(dbw, top, excl, abspath, rei, reh, seen)
                except:
                    m = "failed to index subdir [{}]:\n{}"
                    self.log(m.format(abspath, min_ex()), c=1)
            else:
                # self.log("file: {}".format(abspath))
                seen_files[iname] = 1
                rp = abspath[len(top) + 1 :]
                if WINDOWS:
                    rp = rp.replace("\\", "/").strip("/")

@@ -568,34 +588,65 @@ class Up2k(object):
                        dbw[0].connection.commit()
                        dbw[1] = 0
                        dbw[2] = time.time()

        # drop missing files
        rd = cdir[len(top) + 1 :].strip("/")
        if WINDOWS:
            rd = rd.replace("\\", "/").strip("/")

        q = "select fn from up where rd = ?"
        try:
            c = dbw[0].execute(q, (rd,))
        except:
            c = dbw[0].execute(q, ("//" + w8b64enc(rd),))

        hits = [w8b64dec(x[2:]) if x.startswith("//") else x for (x,) in c]
        rm_files = [x for x in hits if x not in seen_files]
        n_rm = len(rm_files)
        for fn in rm_files:
            self.db_rm(dbw[0], rd, fn)

        if n_rm:
            self.log("forgot {} deleted files".format(n_rm))

        return ret

    def _drop_lost(self, cur, top):
        rm = []
        n_rm = 0
        nchecked = 0
        nfiles = next(cur.execute("select count(w) from up"))[0]
        c = cur.execute("select rd, fn from up")
        for drd, dfn in c:
        # `_build_dir` did all the files, now do dirs
        ndirs = next(cur.execute("select count(distinct rd) from up"))[0]
        c = cur.execute("select distinct rd from up order by rd desc")
        for (drd,) in c:
            nchecked += 1
            if drd.startswith("//") or dfn.startswith("//"):
                drd, dfn = s3dec(drd, dfn)
            if drd.startswith("//"):
                rd = w8b64dec(drd[2:])
            else:
                rd = drd

            abspath = os.path.join(top, drd, dfn)
            # almost zero overhead dw
            self.pp.msg = "b{} {}".format(nfiles - nchecked, abspath)
            abspath = os.path.join(top, rd)
            self.pp.msg = "b{} {}".format(ndirs - nchecked, abspath)
            try:
                if not bos.path.exists(abspath):
                    rm.append([drd, dfn])
            except Exception as ex:
                self.log("stat-rm: {} @ [{}]".format(repr(ex), abspath))
                if os.path.isdir(abspath):
                    continue
            except:
                pass

        if rm:
            self.log("forgetting {} deleted files".format(len(rm)))
            for rd, fn in rm:
                # self.log("{} / {}".format(rd, fn))
                self.db_rm(cur, rd, fn)
            rm.append(drd)

        return len(rm)
        if not rm:
            return 0

        q = "select count(w) from up where rd = ?"
        for rd in rm:
            n_rm += next(cur.execute(q, (rd,)))[0]

        self.log("forgetting {} deleted dirs, {} files".format(len(rm), n_rm))
        for rd in rm:
            cur.execute("delete from up where rd = ?", (rd,))

        return n_rm

    def _build_tags_index(self, vol):
        ptop = vol.realpath

@@ -1362,53 +1413,56 @@ class Up2k(object):
                # del self.registry[ptop][wark]
                return ret, dst

        # windows cant rename open files
        if not ANYWIN or src == dst:
            self.finish_upload(ptop, wark)
            # windows cant rename open files
            if not ANYWIN or src == dst:
                self._finish_upload(ptop, wark)

        return ret, dst

    def finish_upload(self, ptop, wark):
        with self.mutex:
            try:
                job = self.registry[ptop][wark]
                pdir = os.path.join(job["ptop"], job["prel"])
                src = os.path.join(pdir, job["tnam"])
                dst = os.path.join(pdir, job["name"])
            except Exception as ex:
                return "finish_upload, wark, " + repr(ex)
            self._finish_upload(ptop, wark)

            # self.log("--- " + wark + " " + dst + " finish_upload atomic " + dst, 4)
            atomic_move(src, dst)
    def _finish_upload(self, ptop, wark):
        try:
            job = self.registry[ptop][wark]
            pdir = os.path.join(job["ptop"], job["prel"])
            src = os.path.join(pdir, job["tnam"])
            dst = os.path.join(pdir, job["name"])
        except Exception as ex:
            return "finish_upload, wark, " + repr(ex)

        if ANYWIN:
            a = [dst, job["size"], (int(time.time()), int(job["lmod"]))]
            self.lastmod_q.put(a)
        # self.log("--- " + wark + " " + dst + " finish_upload atomic " + dst, 4)
        atomic_move(src, dst)

        a = [job[x] for x in "ptop wark prel name lmod size addr".split()]
        a += [job.get("at") or time.time()]
        if self.idx_wark(*a):
            # self.log("pop " + wark + " " + dst + " finish_upload idx_wark", 4)
            del self.registry[ptop][wark]
            # in-memory registry is reserved for unfinished uploads
        if ANYWIN:
            a = [dst, job["size"], (int(time.time()), int(job["lmod"]))]
            self.lastmod_q.put(a)

        dupes = self.dupesched.pop(dst, [])
        if not dupes:
            return
        a = [job[x] for x in "ptop wark prel name lmod size addr".split()]
        a += [job.get("at") or time.time()]
        if self.idx_wark(*a):
            # self.log("pop " + wark + " " + dst + " finish_upload idx_wark", 4)
            del self.registry[ptop][wark]
            # in-memory registry is reserved for unfinished uploads

        cur = self.cur.get(ptop)
        for rd, fn in dupes:
            d2 = os.path.join(ptop, rd, fn)
            if os.path.exists(d2):
                continue
        dupes = self.dupesched.pop(dst, [])
        if not dupes:
            return

            self._symlink(dst, d2)
            if cur:
                self.db_rm(cur, rd, fn)
                self.db_add(cur, wark, rd, fn, *a[-4:])
        cur = self.cur.get(ptop)
        for rd, fn in dupes:
            d2 = os.path.join(ptop, rd, fn)
            if os.path.exists(d2):
                continue

            self._symlink(dst, d2)
            if cur:
                cur.connection.commit()
                self.db_rm(cur, rd, fn)
                self.db_add(cur, wark, rd, fn, *a[-4:])

        if cur:
            cur.connection.commit()

    def idx_wark(self, ptop, wark, rd, fn, lmod, sz, ip, at):
        cur = self.cur.get(ptop)

@@ -1768,7 +1822,13 @@ class Up2k(object):
        except:
            cj["lmod"] = int(time.time())

        wark = up2k_wark_from_hashlist(self.salt, cj["size"], cj["hash"])
        if cj["hash"]:
            wark = up2k_wark_from_hashlist(self.salt, cj["size"], cj["hash"])
        else:
            wark = up2k_wark_from_metadata(
                self.salt, cj["size"], cj["lmod"], cj["prel"], cj["name"]
            )

        return wark

    def _hashlist_from_file(self, path):

@@ -1811,6 +1871,8 @@ class Up2k(object):

        if self.args.nw:
            job["tnam"] = tnam
            if not job["hash"]:
                del self.registry[job["ptop"]][job["wark"]]
            return

        suffix = ".{:.6f}-{}".format(job["t0"], job["addr"])

@@ -1827,8 +1889,12 @@ class Up2k(object):
                except:
                    self.log("could not sparse [{}]".format(fp), 3)

            f.seek(job["size"] - 1)
            f.write(b"e")
            if job["hash"]:
                f.seek(job["size"] - 1)
                f.write(b"e")

        if not job["hash"]:
            self._finish_upload(job["ptop"], job["wark"])

    def _lastmodder(self):
        while True:

copyparty/web/browser.css

@@ -165,6 +165,7 @@ a, #files tbody div a:last-child {
.logue {
    padding: .2em 1.5em;
}
.logue.hidden,
.logue:empty {
    display: none;
}

@@ -628,6 +629,9 @@ input.eq_gain {
    margin-top: .5em;
    padding: 1.3em .3em;
}
#ico1 {
    cursor: pointer;
}



@@ -870,8 +874,8 @@ html.light #tree.nowrap #treeul a+a:hover {
.opwide>div {
    display: inline-block;
    vertical-align: top;
    border-left: .2em solid #4c4c4c;
    margin-left: .5em;
    border-left: .4em solid #4c4c4c;
    margin: .7em 0 .7em .5em;
    padding-left: .5em;
}
.opwide>div.fill {

@@ -880,6 +884,10 @@ html.light #tree.nowrap #treeul a+a:hover {
.opwide>div>div>a {
    line-height: 2em;
}
.opwide>div>h3 {
    margin: 0 .4em;
    padding: 0;
}
#op_cfg>div>div>span {
    display: inline-block;
    padding: .2em .4em;

@@ -1071,7 +1079,8 @@ a.btn,
#rui label,
#modal-ok,
#modal-ng,
#ops {
#ops,
#ico1 {
    -webkit-user-select: none;
    -moz-user-select: none;
    -ms-user-select: none;

@@ -1602,7 +1611,7 @@ html.light #bbox-overlay figcaption a {
    border-radius: .5em;
    border-width: 1vw;
    color: #fff;
    transition: all 0.2s;
    transition: all 0.12s;
}
#drops .dropdesc.hl.ok {
    border-color: #fff;

@@ -1623,6 +1632,16 @@ html.light #bbox-overlay figcaption a {
    vertical-align: middle;
    text-align: center;
}
#drops .dropdesc>div>div {
    position: absolute;
    top: 40%;
    top: calc(50% - .5em);
    left: -.8em;
}
#drops .dropdesc>div>div+div {
    left: auto;
    right: -.8em;
}
#drops .dropzone {
    z-index: 80386;
    height: 50%;

copyparty/web/browser.html

@@ -135,6 +135,8 @@
    have_unpost = {{ have_unpost|tojson }},
    have_zip = {{ have_zip|tojson }},
    readme = {{ readme|tojson }};

document.documentElement.setAttribute("class", localStorage.lightmode == 1 ? "light" : "dark");
</script>
<script src="/.cpr/util.js?_={{ ts }}"></script>
<script src="/.cpr/browser.js?_={{ ts }}"></script>
@@ -133,8 +133,8 @@ ebi('op_up2k').innerHTML = (
|
||||
var o = mknod('div');
|
||||
o.innerHTML = (
|
||||
'<div id="drops">\n' +
|
||||
' <div class="dropdesc" id="up_zd"><div>🚀 Upload<br /><span></span></div></div>\n' +
|
||||
' <div class="dropdesc" id="srch_zd"><div>🔎 Search<br /><span></span></div></div>\n' +
|
||||
' <div class="dropdesc" id="up_zd"><div>🚀 Upload<br /><span></span><div>🚀</div><div>🚀</div></div></div>\n' +
|
||||
' <div class="dropdesc" id="srch_zd"><div>🔎 Search<br /><span></span><div>🔎</div><div>🔎</div></div></div>\n' +
|
||||
' <div class="dropzone" id="up_dz" v="up_zd"></div>\n' +
|
||||
' <div class="dropzone" id="srch_dz" v="srch_zd"></div>\n' +
|
||||
'</div>'
|
||||
@@ -168,6 +168,15 @@ ebi('op_cfg').innerHTML = (
|
||||
' </td>\n' +
|
||||
' </div>\n' +
|
||||
'</div>\n' +
|
||||
'<div>\n' +
|
||||
' <h3>favicon <span id="ico1">🎉</span></h3>\n' +
|
||||
' <div>\n' +
|
||||
' <input type="text" id="icot" style="width:1.3em" value="" tt="favicon text (blank and refresh to disable)" />' +
|
||||
' <input type="text" id="icof" style="width:2em" value="" tt="foreground color" />' +
|
||||
' <input type="text" id="icob" style="width:2em" value="" tt="background color" />' +
|
||||
' </td>\n' +
|
||||
' </div>\n' +
|
||||
'</div>\n' +
|
||||
'<div><h3>key notation</h3><div id="key_notation"></div></div>\n' +
|
||||
'<div class="fill"><h3>hidden columns</h3><div id="hcols"></div></div>'
|
||||
);
|
||||
@@ -731,6 +740,12 @@ var pbar = (function () {
|
||||
for (var p = 1, mins = adur / 60; p <= mins; p++)
|
||||
pctx.fillRect(Math.floor(sm * p * 60), 0, 2, pc.h);
|
||||
|
||||
pctx.font = '.5em sans-serif';
|
||||
pctx.fillStyle = light ? 'rgba(0,64,0,0.9)' : 'rgba(192,255,96,1)';
|
||||
for (var p = 1, mins = adur / 60; p <= mins; p++) {
|
||||
pctx.fillText(p, Math.floor(sm * p * 60 + 3), pc.h / 3);
|
||||
}
|
||||
|
||||
pctx.fillStyle = light ? 'rgba(0,0,0,1)' : 'rgba(255,255,255,1)';
|
||||
for (var p = 1, mins = adur / 600; p <= mins; p++)
|
||||
pctx.fillRect(Math.floor(sm * p * 600), 0, 2, pc.h);
|
||||
@@ -1347,7 +1362,7 @@ function play(tid, is_ev, seek, call_depth) {
|
||||
mp.au = mp.au_ogvjs = new OGVPlayer();
|
||||
}
|
||||
catch (ex) {
|
||||
return toast.err(30, 'your browser cannot play ogg/vorbis/opus\n\n' + ex +
|
||||
return toast.err(30, 'your browser cannot play ogg/vorbis/opus\n\n' + basenames(ex) +
|
||||
'\n\n<a href="#" onclick="new OGVPlayer();">click here</a> for a full crash report');
|
||||
}
|
||||
attempt_play = is_ev;
|
||||
@@ -1439,7 +1454,7 @@ function play(tid, is_ev, seek, call_depth) {
|
||||
return true;
|
||||
}
|
||||
catch (ex) {
|
||||
toast.err(0, esc('playback failed: ' + ex));
|
||||
toast.err(0, esc('playback failed: ' + basenames(ex)));
|
||||
}
|
||||
setclass(oid, 'play');
|
||||
setTimeout(next_song, 500);
|
||||
@@ -1473,7 +1488,7 @@ function evau_error(e) {
|
||||
|
||||
err += '\n\nFile: «' + uricom_dec(eplaya.src.split('/').slice(-1)[0])[0] + '»';
|
||||
|
||||
toast.warn(15, esc(err + ''));
|
||||
toast.warn(15, esc(basenames(err)));
|
||||
}
|
||||
|
||||
|
||||
@@ -3148,7 +3163,7 @@ var treectl = (function () {
|
||||
|
||||
treectl.goto = function (url, push) {
|
||||
get_tree("", url, true);
|
||||
reqls(url, push);
|
||||
reqls(url, push, true);
|
||||
}
|
||||
|
||||
function get_tree(top, dst, rst) {
|
||||
@@ -3228,12 +3243,12 @@ var treectl = (function () {
|
||||
}
|
||||
|
||||
function reload_tree() {
|
||||
var cdir = get_evpath(),
|
||||
var cdir = get_vpath(),
|
||||
links = QSA('#treeul a+a'),
|
||||
nowrap = QS('#tree.nowrap') && QS('#hovertree.on');
|
||||
|
||||
for (var a = 0, aa = links.length; a < aa; a++) {
|
||||
var href = links[a].getAttribute('href');
|
||||
var href = uricom_dec(links[a].getAttribute('href'))[0];
|
||||
links[a].setAttribute('class', href == cdir ? 'hl' : '');
|
||||
links[a].onclick = treego;
|
||||
links[a].onmouseenter = nowrap ? menter : null;
|
||||
@@ -3276,7 +3291,7 @@ var treectl = (function () {
|
||||
reqls(this.getAttribute('href'), true);
|
||||
}
|
||||
|
||||
function reqls(url, hpush) {
|
||||
function reqls(url, hpush, no_tree) {
|
||||
var xhr = new XMLHttpRequest();
|
||||
xhr.top = url;
|
||||
xhr.hpush = hpush;
|
||||
@@ -3284,7 +3299,7 @@ var treectl = (function () {
         xhr.open('GET', xhr.top + '?ls' + (treectl.dots ? '&dots' : ''), true);
         xhr.onreadystatechange = recvls;
         xhr.send();
-        if (hpush)
+        if (hpush && !no_tree)
             get_tree('.', xhr.top);
 
         enspin(thegrid.en ? '#gfiles' : '#files');
@@ -4452,6 +4467,9 @@ function reload_browser(not_mp) {
         makeSortable(ebi('files'), mp.read_order.bind(mp));
     }
 
+    for (var a = 0; a < 2; a++)
+        clmod(ebi(a ? 'pro' : 'epi'), 'hidden', ebi('unsearch'));
+
     if (window['up2k'])
         up2k.set_fsearch();
@@ -135,13 +135,13 @@ var md_opt = {
 
 (function () {
     var l = localStorage,
-        drk = l.getItem('lightmode') != 1,
+        drk = l.lightmode != 1,
         btn = document.getElementById("lightswitch"),
         f = function (e) {
            if (e) { e.preventDefault(); drk = !drk; }
            document.documentElement.setAttribute("class", drk? "dark":"light");
            btn.innerHTML = "go " + (drk ? "light":"dark");
-           l.setItem('lightmode', drk? 0:1);
+           l.lightmode = drk? 0:1;
         };
 
     btn.onclick = f;
@@ -33,11 +33,11 @@ var md_opt = {
 
 var lightswitch = (function () {
     var l = localStorage,
-        drk = l.getItem('lightmode') != 1,
+        drk = l.lightmode != 1,
         f = function (e) {
            if (e) drk = !drk;
            document.documentElement.setAttribute("class", drk? "dark":"light");
-           l.setItem('lightmode', drk? 0:1);
+           l.lightmode = drk? 0:1;
         };
     f();
     return f;
@@ -80,7 +80,7 @@
 <a href="#" id="repl">π</a>
 <script>
 
-if (localStorage.getItem('lightmode') != 1)
+if (localStorage.lightmode != 1)
     document.documentElement.setAttribute("class", "dark");
 
 </script>
@@ -258,6 +258,16 @@ html.light #pctl *:focus,
 html.light .btn:focus {
     box-shadow: 0 .1em .2em #037 inset;
 }
+input[type="text"]:focus,
+input:not([type]):focus,
+textarea:focus {
+    box-shadow: 0 .1em .3em #fc0, 0 -.1em .3em #fc0;
+}
+html.light input[type="text"]:focus,
+html.light input:not([type]):focus,
+html.light textarea:focus {
+    box-shadow: 0 .1em .3em #037, 0 -.1em .3em #037;
+}
@@ -512,9 +512,13 @@ function up2k_init(subtle) {
         // chrome<37 firefox<34 edge<12 opera<24 safari<7
         shame = 'your browser is impressively ancient';
 
-    var got_deps = false;
+    function got_deps() {
+        return subtle || window.asmCrypto || window.hashwasm;
+    }
+
+    var loading_deps = false;
     function init_deps() {
-        if (!got_deps && !subtle && !window.asmCrypto) {
+        if (!loading_deps && !got_deps()) {
             var fn = 'sha512.' + sha_js + '.js';
             showmodal('<h1>loading ' + fn + '</h1><h2>since ' + shame + '</h2><h4>thanks chrome</h4>');
             import_js('/.cpr/deps/' + fn, unmodal);
@@ -525,7 +529,7 @@ function up2k_init(subtle) {
             ebi('u2foot').innerHTML = 'seems like ' + shame + ' so do that if you want more performance <span style="color:#' +
                 (sha_js == 'ac' ? 'c84">(expecting 20' : '8a5">(but dont worry too much, expect 100') + ' MiB/s)</span>';
         }
-        got_deps = true;
+        loading_deps = true;
     }
 
     if (perms.length && !has(perms, 'read') && has(perms, 'write'))
@@ -744,11 +748,14 @@ function up2k_init(subtle) {
 
         more_one_file();
         var bad_files = [],
+            nil_files = [],
             good_files = [],
             dirs = [];
 
         for (var a = 0; a < files.length; a++) {
-            var fobj = files[a];
+            var fobj = files[a],
+                dst = good_files;
 
             if (is_itemlist) {
                 if (fobj.kind !== 'file')
                     continue;
@@ -765,16 +772,15 @@ function up2k_init(subtle) {
             }
             try {
                 if (fobj.size < 1)
-                    throw 1;
+                    dst = nil_files;
             }
             catch (ex) {
-                bad_files.push(fobj.name);
-                continue;
+                dst = bad_files;
             }
-            good_files.push([fobj, fobj.name]);
+            dst.push([fobj, fobj.name]);
         }
         if (dirs) {
-            return read_dirs(null, [], dirs, good_files, bad_files);
+            return read_dirs(null, [], dirs, good_files, nil_files, bad_files);
         }
     }
@@ -788,7 +794,7 @@ function up2k_init(subtle) {
     }
 
     var rd_missing_ref = [];
-    function read_dirs(rd, pf, dirs, good, bad, spins) {
+    function read_dirs(rd, pf, dirs, good, nil, bad, spins) {
         spins = spins || 0;
         if (++spins == 5)
             rd_missing_ref = rd_flatten(pf, dirs);
@@ -809,7 +815,7 @@ function up2k_init(subtle) {
             msg.push('<li>' + esc(missing[a]) + '</li>');
 
         return modal.alert(msg.join('') + '</ul>', function () {
-            read_dirs(rd, [], [], good, bad, spins);
+            read_dirs(rd, [], [], good, nil, bad, spins);
         });
     }
     spins = 0;
@@ -817,11 +823,11 @@ function up2k_init(subtle) {
 
         if (!dirs.length) {
             if (!pf.length)
-                return gotallfiles(good, bad);
+                return gotallfiles(good, nil, bad);
 
             console.log("retry pf, " + pf.length);
             setTimeout(function () {
-                read_dirs(rd, pf, dirs, good, bad, spins);
+                read_dirs(rd, pf, dirs, good, nil, bad, spins);
             }, 50);
             return;
         }
@@ -843,14 +849,15 @@ function up2k_init(subtle) {
             pf.push(name);
             dn.file(function (fobj) {
                 apop(pf, name);
+                var dst = good;
                 try {
-                    if (fobj.size > 0) {
-                        good.push([fobj, name]);
-                        return;
-                    }
+                    if (fobj.size < 1)
+                        dst = nil;
                 }
-                catch (ex) { }
-                bad.push(name);
+                catch (ex) {
+                    dst = bad;
+                }
+                dst.push([fobj, name]);
             });
         }
         ngot += 1;
@@ -859,23 +866,33 @@ function up2k_init(subtle) {
                 dirs.shift();
                 rd = null;
             }
-            return read_dirs(rd, pf, dirs, good, bad, spins);
+            return read_dirs(rd, pf, dirs, good, nil, bad, spins);
         });
     }
 
-    function gotallfiles(good_files, bad_files) {
+    function gotallfiles(good_files, nil_files, bad_files) {
+        var ntot = good_files.concat(nil_files, bad_files).length;
         if (bad_files.length) {
-            var ntot = bad_files.length + good_files.length,
-                msg = 'These {0} files (of {1} total) were skipped because they are empty:\n'.format(bad_files.length, ntot);
-
+            var msg = 'These {0} files (of {1} total) were skipped, possibly due to filesystem permissions:\n'.format(bad_files.length, ntot);
             for (var a = 0, aa = Math.min(20, bad_files.length); a < aa; a++)
-                msg += '-- ' + bad_files[a] + '\n';
-
-            if (good_files.length - bad_files.length <= 1 && ANDROID)
-                msg += '\nFirefox-Android has a bug which prevents selecting multiple files. Try selecting one file at a time. For more info, see firefox bug 1456557';
+                msg += '-- ' + bad_files[a][1] + '\n';
 
+            msg += '\nMaybe it works better if you select just one file';
             return modal.alert(msg, function () {
-                gotallfiles(good_files, []);
+                gotallfiles(good_files, nil_files, []);
             });
         }
 
+        if (nil_files.length) {
+            var msg = 'These {0} files (of {1} total) are blank/empty; upload them anyways?\n'.format(nil_files.length, ntot);
+            for (var a = 0, aa = Math.min(20, nil_files.length); a < aa; a++)
+                msg += '-- ' + nil_files[a][1] + '\n';
+
+            msg += '\nMaybe it works better if you select just one file';
+            return modal.confirm(msg, function () {
+                gotallfiles(good_files.concat(nil_files), [], []);
+            }, function () {
+                gotallfiles(good_files, [], []);
+            });
+        }
+
@@ -921,7 +938,7 @@ function up2k_init(subtle) {
                 "t0": now,
                 "fobj": fobj,
                 "name": name,
-                "size": fobj.size,
+                "size": fobj.size || 0,
                 "lmod": lmod / 1000,
                 "purl": fdir,
                 "done": false,
@@ -946,7 +963,9 @@ function up2k_init(subtle) {
 
             st.bytes.total += fobj.size;
             st.files.push(entry);
-            if (uc.turbo)
+            if (!entry.size)
+                push_t(st.todo.handshake, entry);
+            else if (uc.turbo)
                 push_t(st.todo.head, entry);
             else
                 push_t(st.todo.hash, entry);
@@ -1117,7 +1136,7 @@ function up2k_init(subtle) {
         if (running)
             return;
 
-        if (crashed)
+        if (crashed || !got_deps())
             return defer();
 
         running = true;
@@ -1545,15 +1564,18 @@ function up2k_init(subtle) {
             }
             else {
                 smsg = 'found';
-                var hit = response.hits[0],
-                    msg = linksplit(hit.rp).join(''),
-                    tr = unix2iso(hit.ts),
-                    tu = unix2iso(t.lmod),
-                    diff = parseInt(t.lmod) - parseInt(hit.ts),
-                    cdiff = (Math.abs(diff) <= 2) ? '3c0' : 'f0b',
-                    sdiff = '<span style="color:#' + cdiff + '">diff ' + diff;
+                var msg = [];
+                for (var a = 0, aa = Math.min(20, response.hits.length); a < aa; a++) {
+                    var hit = response.hits[a],
+                        tr = unix2iso(hit.ts),
+                        tu = unix2iso(t.lmod),
+                        diff = parseInt(t.lmod) - parseInt(hit.ts),
+                        cdiff = (Math.abs(diff) <= 2) ? '3c0' : 'f0b',
+                        sdiff = '<span style="color:#' + cdiff + '">diff ' + diff;
 
-                msg += '<br /><small>' + tr + ' (srv), ' + tu + ' (You), ' + sdiff + '</span></span>';
+                    msg.push(linksplit(hit.rp).join('') + '<br /><small>' + tr + ' (srv), ' + tu + ' (You), ' + sdiff + '</small></span>');
+                }
+                msg = msg.join('<br />\n');
             }
             pvis.seth(t.n, 2, msg);
             pvis.seth(t.n, 1, smsg);
@@ -1938,7 +1960,7 @@ function up2k_init(subtle) {
             flag = up2k_flagbus();
         }
         catch (ex) {
-            toast.err(5, "not supported on your browser:\n" + ex);
+            toast.err(5, "not supported on your browser:\n" + esc(basenames(ex)));
             bcfg_set('flag_en', false);
         }
     }
@@ -1990,6 +2012,15 @@ function warn_uploader_busy(e) {
 
 
 tt.init();
+favico.init();
+ebi('ico1').onclick = function () {
+    var a = favico.txt == this.textContent;
+    swrite('icot', a ? 'c' : this.textContent);
+    swrite('icof', a ? null : '000');
+    swrite('icob', a ? null : '');
+    favico.init();
+};
+
 
 if (QS('#op_up2k.act'))
     goto_up2k();
@@ -29,18 +29,24 @@ function esc(txt) {
         }[c];
     });
 }
-window.onunhandledrejection = function (e) {
-    var err = e.reason;
-    try {
-        err += '\n' + e.reason.stack;
-    }
-    catch (e) { }
-    console.log("REJ: " + err);
-    try {
-        toast.warn(30, err);
-    }
-    catch (e) { }
-};
+function basenames(txt) {
+    return (txt + '').replace(/https?:\/\/[^ \/]+\//g, '/').replace(/js\?_=[a-zA-Z]{4}/g, 'js');
+}
+if ((document.location + '').indexOf(',rej,') + 1)
+    window.onunhandledrejection = function (e) {
+        var err = e.reason;
+        try {
+            err += '\n' + e.reason.stack;
+        }
+        catch (e) { }
+        err = basenames(err);
+        console.log("REJ: " + err);
+        try {
+            toast.warn(30, err);
+        }
+        catch (e) { }
+    };
 
 try {
     console.hist = [];
     var hook = function (t) {
@@ -151,7 +157,7 @@ function vis_exh(msg, url, lineNo, columnNo, error) {
         );
         document.head.appendChild(s);
     }
-    exbox.innerHTML = html.join('\n').replace(/https?:\/\/[^ \/]+\//g, '/').replace(/js\?_=[a-zA-Z]{4}/g, 'js').replace(/<ghi>/, 'https://github.com/9001/copyparty/issues/new?labels=bug&template=bug_report.md');
+    exbox.innerHTML = basenames(html.join('\n')).replace(/<ghi>/, 'https://github.com/9001/copyparty/issues/new?labels=bug&template=bug_report.md');
     exbox.style.display = 'block';
 }
 catch (e) {
@@ -241,7 +247,9 @@ function import_js(url, cb) {
     script.src = url;
     script.onload = cb;
     script.onerror = function () {
-        toast.err(0, 'Failed to load module:\n' + url);
+        var m = 'Failed to load module:\n' + url;
+        console.log(m);
+        toast.err(0, m);
     };
     head.appendChild(script);
 }
@@ -575,14 +583,22 @@ function jcp(obj) {
 
 
 function sread(key) {
-    return localStorage.getItem(key);
+    try {
+        return localStorage.getItem(key);
+    }
+    catch (e) {
+        return null;
+    }
 }
 
 function swrite(key, val) {
-    if (val === undefined || val === null)
-        localStorage.removeItem(key);
-    else
-        localStorage.setItem(key, val);
+    try {
+        if (val === undefined || val === null)
+            localStorage.removeItem(key);
+        else
+            localStorage.setItem(key, val);
+    }
+    catch (e) { }
 }
 
 function jread(key, fb) {
@@ -605,9 +621,9 @@ function icfg_get(name, defval) {
 }
 
 function fcfg_get(name, defval) {
-    var o = ebi(name);
+    var o = ebi(name),
+        val = parseFloat(sread(name));
 
-    var val = parseFloat(sread(name));
     if (isNaN(val))
         return parseFloat(o ? o.value : defval);
@@ -617,6 +633,19 @@ function fcfg_get(name, defval) {
     return val;
 }
 
+function scfg_get(name, defval) {
+    var o = ebi(name),
+        val = sread(name);
+
+    if (val === null)
+        val = defval;
+
+    if (o)
+        o.value = val;
+
+    return val;
+}
+
 function bcfg_get(name, defval) {
     var o = ebi(name);
     if (!o)
@@ -668,6 +697,21 @@ function bcfg_bind(obj, oname, cname, defval, cb, un_ev) {
     return v;
 }
 
+function scfg_bind(obj, oname, cname, defval, cb) {
+    var v = scfg_get(cname, defval),
+        el = ebi(cname);
+
+    obj[oname] = v;
+    if (el)
+        el.oninput = function (e) {
+            swrite(cname, obj[oname] = this.value);
+            if (cb)
+                cb(obj[oname]);
+        };
+
+    return v;
+}
+
 
 function hist_push(url) {
     console.log("h-push " + url);
@@ -833,16 +877,7 @@ var tt = (function () {
     }
 
     r.init = function () {
-        var ttb = ebi('tooltips');
-        if (ttb) {
-            ttb.onclick = function (e) {
-                ev(e);
-                r.en = !r.en;
-                bcfg_set('tooltips', r.en);
-                r.init();
-            };
-            r.en = bcfg_get('tooltips', true)
-        }
+        bcfg_bind(r, 'en', 'tooltips', r.en, r.init);
         r.att(document);
     };
@@ -905,6 +940,9 @@ var toast = (function () {
         if (sec)
             te = setTimeout(r.hide, sec * 1000);
 
+        if (txt.indexOf('<body>') + 1)
+            txt = txt.slice(0, txt.indexOf('<')) + ' [...]';
+
         obj.innerHTML = '<a href="#" id="toastc">x</a><div id="toastb">' + lf2br(txt) + '</div>';
         obj.className = cl;
         sec += obj.offsetWidth;
@@ -1046,7 +1084,7 @@ var modal = (function () {
     }
     function _confirm(html, cok, cng, fun) {
         cb_ok = cok;
-        cb_ng = cng === undefined ? cok : null;
+        cb_ng = cng === undefined ? cok : cng;
         cb_up = fun;
         html += '<div id="modalb">' + ok_cancel + '</div>';
         r.show(html);
@@ -1162,3 +1200,54 @@ function repl(e) {
 }
 if (ebi('repl'))
     ebi('repl').onclick = repl;
+
+
+var favico = (function () {
+    var r = {};
+    r.en = true;
+
+    function gx(txt) {
+        return (
+            '<?xml version="1.0" encoding="UTF-8"?>\n' +
+            '<svg version="1.1" viewBox="0 0 64 64" xmlns="http://www.w3.org/2000/svg"><g>\n' +
+            (r.bg ? '<rect width="100%" height="100%" rx="16" fill="#' + r.bg + '" />\n' : '') +
+            '<text x="50%" y="55%" dominant-baseline="middle" text-anchor="middle"' +
+            ' font-family="sans-serif" font-weight="bold" font-size="64px"' +
+            ' fill="#' + r.fg + '">' + txt + '</text></g></svg>'
+        );
+    }
+
+    r.upd = function () {
+        var i = QS('link[rel="icon"]'), b64;
+        if (!r.txt)
+            return;
+
+        try {
+            b64 = btoa(gx(r.txt));
+        }
+        catch (ex) {
+            b64 = encodeURIComponent(r.txt).replace(/%([0-9A-F]{2})/g,
+                function x(m, v) { return String.fromCharCode('0x' + v); });
+
+            b64 = btoa(gx(unescape(encodeURIComponent(r.txt))));
+        }
+
+        if (!i) {
+            i = mknod('link');
+            i.rel = 'icon';
+            document.head.appendChild(i);
+        }
+        i.href = 'data:image/svg+xml;base64,' + b64;
+    };
+
+    r.init = function () {
+        clearTimeout(r.to);
+        scfg_bind(r, 'txt', 'icot', '', r.upd);
+        scfg_bind(r, 'fg', 'icof', 'fc5', r.upd);
+        scfg_bind(r, 'bg', 'icob', '333', r.upd);
+        r.upd();
+    };
+
+    r.to = setTimeout(r.init, 100);
+    return r;
+})();
@@ -162,7 +162,7 @@ brew install python@2
 pip install virtualenv
 
 # readme toc
-cat README.md | awk 'function pr() { if (!h) {return}; if (/^ *[*!#]/||!s) {printf "%s\n",h;h=0;return}; if (/.../) {printf "%s - %s\n",h,$0;h=0}; }; /^#/{s=1;pr()} /^#* *(file indexing|install on android|dev env setup|just the sfx|complete release|optional gpl stuff)|`$/{s=0} /^#/{lv=length($1);sub(/[^ ]+ /,"");bab=$0;gsub(/ /,"-",bab); h=sprintf("%" ((lv-1)*4+1) "s [%s](#%s)", "*",$0,bab);next} !h{next} {sub(/ .*/,"");sub(/[:,]$/,"")} {pr()}' > toc; grep -E '^## readme toc' -B1000 -A2 <README.md >p1; grep -E '^## quickstart' -B2 -A999999 <README.md >p2; (cat p1; grep quickstart -A1000 <toc; cat p2) >README.md
+cat README.md | awk 'function pr() { if (!h) {return}; if (/^ *[*!#]/||!s) {printf "%s\n",h;h=0;return}; if (/.../) {printf "%s - %s\n",h,$0;h=0}; }; /^#/{s=1;pr()} /^#* *(file indexing|install on android|dev env setup|just the sfx|complete release|optional gpl stuff)|`$/{s=0} /^#/{lv=length($1);sub(/[^ ]+ /,"");bab=$0;gsub(/ /,"-",bab); h=sprintf("%" ((lv-1)*4+1) "s [%s](#%s)", "*",$0,bab);next} !h{next} {sub(/ .*/,"");sub(/[:,]$/,"")} {pr()}' > toc; grep -E '^## readme toc' -B1000 -A2 <README.md >p1; grep -E '^## quickstart' -B2 -A999999 <README.md >p2; (cat p1; grep quickstart -A1000 <toc; cat p2) >README.md; rm p1 p2 toc
 
 # fix firefox phantom breakpoints,
 # suggestions from bugtracker, doesnt work (debugger is not attachable)
@@ -238,7 +238,7 @@ rm have
 rm -rf copyparty/web/dd
 f=copyparty/web/browser.css
 gzip -d "$f.gz" || true
-sed -r 's/(cursor: ?)url\([^)]+\), ?(pointer)/\1\2/; /[0-9]+% \{cursor:/d; /animation: ?cursor/d' <$f >t
+sed -r 's/(cursor: ?)url\([^)]+\), ?(pointer)/\1\2/; s/[0-9]+% \{cursor:[^}]+\}//; s/animation: ?cursor[^};]+//' <$f >t
 tmv "$f"
 }
@@ -271,7 +271,7 @@ find | grep -E '\.css$' | while IFS= read -r f; do
     }
     !/\}$/ {printf "%s",$0;next}
     1
-    ' <$f | sed 's/;\}$/}/' >t
+    ' <$f | sed -r 's/;\}$/}/; /\{\}$/d' >t
     tmv "$f"
 done
 unexpand -h 2>/dev/null &&
@@ -9,7 +9,7 @@ import subprocess as sp
 to edit this file, use HxD or "vim -b"
 (there is compressed stuff at the end)
 
-run me with any version of python, i will unpack and run copyparty
+run me with python 2.7 or 3.3+ to unpack and run copyparty
 
 there's zero binaries! just plaintext python scripts all the way down
 so you can easily unpack the archive and inspect it for shady stuff
2
setup.py
@@ -114,7 +114,7 @@ args = {
     "install_requires": ["jinja2"],
     "extras_require": {"thumbnails": ["Pillow"], "audiotags": ["mutagen"]},
     "entry_points": {"console_scripts": ["copyparty = copyparty.__main__:main"]},
-    "scripts": ["bin/copyparty-fuse.py"],
+    "scripts": ["bin/copyparty-fuse.py", "bin/up2k.py"],
     "cmdclass": {"clean2": clean2},
 }
@@ -48,7 +48,8 @@ class Cfg(Namespace):
         mte="a",
         mth="",
         hist=None,
-        no_hash=False,
+        no_idx=None,
+        no_hash=None,
         css_browser=None,
         **{k: False for k in "e2d e2ds e2dsa e2t e2ts e2tsr".split()}
     )
@@ -23,7 +23,8 @@ class Cfg(Namespace):
     "mte": "a",
     "mth": "",
     "hist": None,
-    "no_hash": False,
+    "no_idx": None,
+    "no_hash": None,
     "css_browser": None,
     "no_voldump": True,
     "no_logues": False,
@@ -3,6 +3,7 @@ import sys
 import time
 import shutil
+import jinja2
 import threading
 import tempfile
 import platform
 import subprocess as sp
@@ -28,7 +29,7 @@ if MACOS:
 # 25% faster; until any tests do symlink stuff
 
 
-from copyparty.util import Unrecv
+from copyparty.util import Unrecv, FHC
 
 
 def runcmd(argv):
@@ -132,8 +133,10 @@ class VHttpConn(object):
         self.log_src = "a"
         self.lf_url = None
         self.hsrv = VHttpSrv()
+        self.u2fh = FHC()
+        self.mutex = threading.Lock()
         self.nreq = 0
         self.nbyte = 0
         self.ico = None
         self.thumbcli = None
         self.t0 = time.time()