But ain't it easier to use tools like screen or tmux?
Not necessarily - it depends on how familiar you are with those tools and how complex the task is. If you just want to make sure conky isn't killed or something like that, the shell solution is more appropriate IMO.
...elevator in the Brain Hotel, broken down but just as well...
( a boring Japan blog (currently paused), now on Bluesky, there's also some GitStuff )
Offline
Nice tips there johnraff, I have used disown for quite a while when testing things out, especially conky.
Offline
Job Control is not enabled in scripts by default - it's mainly intended for interactive shells. But, you can enable it by putting
set -m
at the top of your script, or - maybe safer - just around the part where you launch the background process:
set -m
# ...launch-in-background stuff...
set +m
Now disown and friends will work the same way they do in a terminal.
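For example, a minimal sketch (conky as the background program is purely an illustration):
#!/bin/bash
# enable job control only around the background launch
set -m
conky &        # start the background process
disown %+      # detach the job so it won't get SIGHUP when the script exits
set +m
# ...rest of the script...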
Thank you so much!
I, too, stumbled over that.
Next time I need it, I know what to do.
Offline
Make that $i local
This bit me last week. There was a loop, like
for (( i=1; i<10; i++ ))
do
    something with $i
    myfunc somefile
done
And somefile was being sent to a function like:
myfunc(){
    for i in "$@"
    do
        some file thing with "$i"
    done
}
So after the first call to myfunc() the loop got all messed up, because $i had been changed from an integer to a filepath.
Easy fix - at the top of myfunc() put
local i
so the i inside myfunc is kept separate from the outside loop. If the loop is inside a function too, best to do the same there.
Variables inside functions are global by default in Bash, so any you don't want to share elsewhere should be declared local. Well known, but it's easy to forget those little i's inside loops.
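A runnable illustration of the same trap (names made up, myfunc standing in for the real function):
#!/bin/bash
myfunc(){
    local i                 # remove this line to reproduce the bug
    for i in "$@"; do
        echo "  handling $i"
    done
}

for (( i=1; i<4; i++ )); do
    echo "outer i=$i"
    myfunc somefile
done
With local i the outer loop prints 1, 2 and 3; without it, myfunc leaves $i set to "somefile", the arithmetic i++ evaluates that (an unset variable name) as 0 and sets i back to 1, so the loop never gets past 1.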
...elevator in the Brain Hotel, broken down but just as well...
( a boring Japan blog (currently paused), now on Bluesky, there's also some GitStuff )
Offline
Wouldn't it be easier to (always try to) use unique variable names?
for file in "$@"
etc.
Offline
^I think it's always advisable to make variables local inside functions when they won't be referred to anywhere else. "file" is a pretty generic name too, and might well be used somewhere else. In a short script where you can easily remember all the variable names it's less important of course, but in anything big, where maybe other snippets are being sourced from files, it's good to keep the environment as clean as possible.
...elevator in the Brain Hotel, broken down but just as well...
( a boring Japan blog (currently paused), now on Bluesky, there's also some GitStuff )
Offline
^That's very interesting.
I'm curious, though, what would be the advantage of using a gdbus call to copy, move or rename a file, rather than cp or mv?
...elevator in the Brain Hotel, broken down but just as well...
( a boring Japan blog (currently paused), now on Bluesky, there's also some GitStuff )
Offline
Transcoding videos to x265, a.k.a. HEVC. The time ratio for a very decent 2-pass transcode is ~12 to 1 on my mediocre 4-thread Intel CPU, meaning 10 minutes of video take roughly 2 hours to transcode and come out at ~100 MB at 1280x720. There's no discernible quality loss compared to an up-to-5-times-larger x264-encoded video.
So I'm putting a few in the queue before going to work...
After days of trawling the net and trial and error I came up with these commands:
ffmpeg_cmd="ffmpeg -hide_banner -analyzeduration 2147483647 -probesize 2147483647"
endoptions="-max_muxing_queue_size 9999"
# about -max_muxing_queue_size:
# stackoverflow.com/questions/49686244/ffmpeg-too-many-packets-buffered-for-output-stream-01
vcodec=x265
# all bitrates in kbps !!!
bitrate=1500
preset=medium # the default
file="$1"
outfile="${file%.*}.${bitrate}k.$vcodec.$preset.mkv"
params="ctu=32:max-tu-size=16:crf=20.0:tu-intra-depth=2:tu-inter-depth=2:rdpenalty=2:me=3:subme=5:\
merange=44:b-intra=1:amp=0:ref=5:weightb=1:keyint=360:min-keyint=1:bframes=8:aq-mode=1:aq-strength=1.0:\
rd=5:psy-rd=1.5:psy-rdoq=5.0:rdoq-level=1:sao=0:open-gop=0:rc-lookahead=80:scenecut=40:max-merge=4:\
qcomp=0.8:strong-intra-smoothing=0:deblock=-2:qg-size=16:pbratio=1.2:vbv-bufsize=1000:vbv-maxrate=$bitrate"
# 1st pass
$ffmpeg_cmd -y -i "$file" -an -c:v libx265 -preset "$preset" -x265-params "pass=1:$params" $endoptions -f matroska /dev/null
# 2nd pass
$ffmpeg_cmd -i "$file" -c:a copy -c:v libx265 -preset "$preset" -x265-params "pass=2:$params" $endoptions "$outfile"
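If the above is saved as a small script that takes the input file as $1 (transcode265.sh is just an example name), queueing a few files before leaving could be as simple as:
for f in *.mp4; do
    bash transcode265.sh "$f"
done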
PS: don't play supertuxkart at the same time! Crashed my machine twice.
Otherwise the system remains usable! Transcoding right now.
Offline
Thanks, I will try it.
/Martin
"Problems worthy of attack
prove their worth by hitting back."
Piet Hein
Offline
^^ addition to my previous post:
The bitrate is not an actual fixed bitrate when doing 2-pass transcoding - and x265 arguably makes fixed bitrates obsolete anyway, or so I half understand from what I've read online. It's more of a maximum: the algorithm tries to stay below it at all times.
Even so, it makes sense to lower the bitrate for lower-resolution video. I believe 1000 would be the next step down; I have had good results with that even at 1280x720.
Yesterday I also transcoded a 1920x1080 video - counting the pixels, that's a bit more than twice as many as 1280x720. It didn't take twice as long to transcode, but almost. Nevertheless, bitrate=1500 was sufficient.
Anyhow, I made it into a script that can take several files, lets the CPU cool down between files, and notifies you when it's finished.
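A rough sketch of what such a wrapper could look like (not the actual script - the cool-down time, the notification and the per-file script name transcode265.sh are placeholders):
#!/bin/bash
# process each file given on the command line, one after the other
for f in "$@"; do
    bash transcode265.sh "$f"     # the 2-pass x265 commands from the post above
    sleep 300                     # let the CPU cool down between files
done
notify-send "Transcoding finished" "$# file(s) processed"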
Last edited by ohnonot (2021-07-08 09:15:13)
Offline
Today I finally tested your script, and as far as I can see it works. But should it take this long to transcode? Silly me started to test on "Dr. Mabuse, der Spieler", which is 4.5 hours long. The transcoding has gone on for ten hours now and has only covered ~11% of the distance, with four cores working full time. That means it would take between 3.5 and 4 days of full CPU load to transcode 4.5 hours of film. Am I doing something wrong?
/Martin
"Problems worthy of attack
prove their worth by hitting back."
Piet Hein
Offline
^ x265?
Yes. A 4.5 h film in high resolution on a consumer-grade Intel CPU - not good. Days ahead.
I didn't get back to this anymore because I realised that it's impossible to create a one-size-fits-all script.
I later went back to transcoding my stuff to x264 instead, which gives extremely good results with carefully chosen parameters and 2-pass encoding, with maybe only a slightly larger file size, and is much faster.
I think only professionals with dedicated machines can afford to fully reap the benefits of encoding x265.
Here's the x264 version of the script:
#!/bin/bash
# this was only the starting point:
# videoblerg.wordpress.com/2017/11/10/ffmpeg-and-how-to-use-it-wrong/comment-page-1/
# script dependencies
for cmd in ffmpeg ffprobe; do
which "$cmd" >/dev/null || exit 1
done
file="$1"
vcodec=x264
pix_fmt=yuv420p
read -r -p "Enter video bitrate: " vbitrate
#~ vbitrate=1500
abitrate=128
preset=veryslow
# presets: ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow
tune=film
# tunes: film, animation, grain, stillimage, psnr, ssim, fastdecode, zerolatency
bufsize=$((vbitrate+abitrate))
maxrate=$((vbitrate + vbitrate/3))
ffmpeg_cmd="ffmpeg -nostdin -hide_banner -analyzeduration 2147483647 -probesize 2147483647"
endopts="-max_muxing_queue_size 9999"
#~ -g 48 maximum keyframe interval? good for streaming, not needed in our use case.
#~ -x264opts no-scenecut see: video.stackexchange.com/a/24684 - despite additional (minimal) overhead, scenecut seems to be important
outfile="${file%.*}.$vcodec.$pix_fmt.v$vbitrate.a$abitrate.$preset.$tune.mp4"
# 1st pass
echo $ffmpeg_cmd -i "$file" -pix_fmt "$pix_fmt" -vsync 1 -vcodec lib$vcodec -b:v ${vbitrate}k -bufsize ${bufsize}k -maxrate ${maxrate}k -preset $preset -profile:v high -tune $tune -pass 1 -acodec aac -b:a ${abitrate}k -ac 2 -ar 48000 -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" $endopts -f mp4 -y /dev/null
$ffmpeg_cmd -i "$file" -pix_fmt "$pix_fmt" -vsync 1 -vcodec lib$vcodec -b:v ${vbitrate}k -bufsize ${bufsize}k -maxrate ${maxrate}k -preset $preset -profile:v high -tune $tune -pass 1 -acodec aac -b:a ${abitrate}k -ac 2 -ar 48000 -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" $endopts -f mp4 -y /dev/null
# 2nd pass
echo $ffmpeg_cmd -i "$file" -pix_fmt "$pix_fmt" -vsync 1 -vcodec lib$vcodec -b:v ${vbitrate}k -bufsize ${bufsize}k -maxrate ${maxrate}k -preset $preset -profile:v high -tune $tune -pass 2 -acodec aac -b:a ${abitrate}k -ac 2 -ar 48000 -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" $endopts -f mp4 "$outfile"
$ffmpeg_cmd -i "$file" -pix_fmt "$pix_fmt" -vsync 1 -vcodec lib$vcodec -b:v ${vbitrate}k -bufsize ${bufsize}k -maxrate ${maxrate}k -preset $preset -profile:v high -tune $tune -pass 2 -acodec aac -b:a ${abitrate}k -ac 2 -ar 48000 -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" $endopts -f mp4 "$outfile"
I cut it down, removing some pointless stuff. It should work, but please tell me if it doesn't.
The biggest question is the target bitrate. It will take some fiddling to find out what preserves quality while still reducing filesize (if that is your goal).
You might save some time by leaving the audio as it is (-c:a copy), if it's in a lossy format already.
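In the script above that means replacing the AAC options with a stream copy; e.g. the second pass would become something like:
$ffmpeg_cmd -i "$file" -pix_fmt "$pix_fmt" -vsync 1 -vcodec lib$vcodec -b:v ${vbitrate}k -bufsize ${bufsize}k -maxrate ${maxrate}k -preset $preset -profile:v high -tune $tune -pass 2 -c:a copy $endopts -f mp4 "$outfile"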
Last edited by ohnonot (2020-06-15 19:09:14)
Offline
The biggest question is the target bitrate. It will take some fiddling to find out what preserves quality while still reducing filesize (if that is your goal).
There is also 'crf' mode.
https://slhck.info/video/2017/02/24/crf-guide.html
They say, they say, that the way to correctly expand a command from variables is array expansion
https://github.com/brontosaurusrex/sing … X264crfArr
"${encarr[@]}" # actual command constructed here
Last edited by brontosaurusrex (2020-07-13 10:29:38)
Offline
ohnonot wrote: The biggest question is the target bitrate. It will take some fiddling to find out what preserves quality while still reducing filesize (if that is your goal).
There is also 'crf' mode.
https://slhck.info/video/2017/02/24/crf-guide.html
The only way to really compress file size while preserving near-original quality even during high-motion scenes is 2-pass encoding. CRF seems to attempt "on the fly" what the first pass does, AFAIU - trying to achieve a compromise between quality and speed, with a stress on speed? I think I tried various CRF levels, but the quality/size ratio was pretty bad.
Where are you using this and what are your results?
I was able to reduce "HD" video (1280x720 or some such @ 24 or 25 fps) that was already x264 encoded to less than half its size without visible quality loss.
They say, they say, that the way to correctly expand a command from variables is array expansion
https://github.com/brontosaurusrex/sing … X264crfArr
"${encarr[@]}" # actual command constructed here
This is very good to know.
I have been fighting with this myself; getting variables into command-line options can fail in numerous ways, and it comes up often in other forums too.
Yay!
Offline
CRF seems to attempt "on the fly" what the first pass does, AFAIU - trying to achieve a compromise between quality and speed, with a stress on speed? I think I tried various CRF levels, but the quality/size ratio was pretty bad.
You can do a CRF encode, see what average bitrate it produces, then do a 2-pass encode at that same bitrate - the quality difference should be small or zero.
edit: Short and sweet, and from an actual developer: https://forum.doom9.org/showthread.php?t=143904
edit2: I guess I'm not being clear: different material requires a different average bitrate because of its different complexity. It follows that, over many inputs, using the same 2-pass bitrate, the output quality will differ from clip to clip... So 2-pass doesn't make sense in most offline situations.
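A sketch of that comparison (file names made up; ffprobe is used to read back the average video bitrate of the CRF result):
#!/bin/bash
# 1) CRF encode
ffmpeg -i input.mp4 -c:v libx264 -crf 23 -preset slow -c:a copy crf23.mp4
# 2) average video bitrate of the CRF result, in bit/s (may show as N/A with some containers)
vbr=$(ffprobe -v error -select_streams v:0 -show_entries stream=bit_rate -of csv=p=0 crf23.mp4)
# 3) 2-pass encode aiming at that same average bitrate
ffmpeg -y -i input.mp4 -c:v libx264 -b:v "$vbr" -preset slow -pass 1 -an -f null -
ffmpeg -i input.mp4 -c:v libx264 -b:v "$vbr" -preset slow -pass 2 -c:a copy twopass.mp4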
Where are you using this and what are your results?
I was able to reduce "HD" video (1280x720 or some such @ 24 or 25 fps) that was already x264 encoded to less than half its size without visible quality loss.
My HD 10 bpc CineForm masters can be 200 Mbit/s (part of my profession is dealing with video editing); obviously, for long-term storage I prefer a ~20 Mbit/s 8 bpc H.264 version. The linked script is used for that scenario (quality-wise a 10-bit version would be better, but I prefer to keep compatibility with browsers). The results are probably not completely transparent.
Mediainfo example
Bit rate : 13.3 Mbps
Width : 1 920 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 25.000 fps
Another clone of that script uses CRF 23 and is mostly used for proxies for intranet use.
Then another one scales down and uses low-bitrate HE-AAC audio (these are previews sent to coworkers or for approval, over secret channels (wetransfer)); its mediainfo looks like
Video
Bit rate : 690 Kbps
Width : 576 pixels
Height : 320 pixels
Audio
ID : 2
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : HE-AACv2 / HE-AAC / LC
Codec ID : 40
Duration : 38s 16ms
Bit rate mode : Constant
Bit rate : 33.0 Kbps
Using fdkaac for HE-AAC:
fdkaac -p29 -m1 "$tmpdir/$base.wav" -o "$tmpdir/$base.m4a" # -p29 denotes HEAAC V2 type of encoding - SBR+PS
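My guess at the surrounding steps (file names made up): decode the audio to WAV, encode it with fdkaac, then mux it back next to the already-encoded video, roughly:
# 1) decode the audio track to WAV
ffmpeg -i preview_src.mp4 -vn -ac 2 "$tmpdir/$base.wav"
# 2) HE-AACv2 encode (SBR+PS) with fdkaac
fdkaac -p29 -m1 "$tmpdir/$base.wav" -o "$tmpdir/$base.m4a"
# 3) mux the new audio with the scaled-down video
ffmpeg -i preview_video.mp4 -i "$tmpdir/$base.m4a" -map 0:v -map 1:a -c copy preview_out.mp4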
Last edited by brontosaurusrex (2020-07-16 11:16:39)
Offline
^ thanks, I will look into it next time the need arises.
I was able to reduce "HD" video (1280x720 or some such @ 24 or 25 fps) that was already x264 encoded to less than half its size without visible quality loss.
Correction:
from >3000kbps to <1500kbps without visible quality loss.
Offline
^Media manipulation is way outside my comfort zone, but by chance I ran into this post today. Probably all stuff you already know: https://superuser.com/questions/1556953 … e-compared
...elevator in the Brain Hotel, broken down but just as well...
( a boring Japan blog (currently paused), now on Bluesky, there's also some GitStuff )
Offline
Yeah, ^ that is somewhat surprising to most people as well: CRF is not comparable across different speed presets. In other words, going from preset fast to slow puts you on another 'quality' scale altogether.
edit: I'm acting like I actually deeply understand all this - I don't. I've just been doing my share of reading/googling over the years (and using x264 daily for a decade), and I feel obliged to spread the word of x264.
CRF mode is not true constant-quality (SSIM, PSNR, or any other metric).
SSIM and PSNR are not true human quality metrics either, so I don't know what that statement means. AFAIK only eyes are true quality metrics so far. AFAIK x264's CRF rate-control mechanism is as smart/dumb as its 2-pass.
edit2: Netflix quality metrics (using machine learning as well, so it must be good ...)
https://github.com/Netflix/vmaf
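For reference, with an ffmpeg build that includes libvmaf, a VMAF score can be computed roughly like this (file names made up; the first input is the distorted clip, the second the reference):
ffmpeg -i encoded.mp4 -i original.mp4 -lavfi libvmaf -f null -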
Last edited by brontosaurusrex (2020-07-16 10:58:21)
Offline
^bookmarked.
Thanks!
...elevator in the Brain Hotel, broken down but just as well...
( a boring Japan blog (currently paused), now on Bluesky, there's also some GitStuff )
Offline
You can do a CRF encode, see what average bitrate it produces, then do a 2-pass encode at that same bitrate - the quality difference should be small or zero.
I now seem to remember reading a blog post that recommended exactly that, but it didn't work for me - using the obtained values gave no reduction in file size.
I suspect that's because I re-encoded already compressed video; using (gigantic) raw video files, this might work better.
So there's a difference between re-encoding sub-optimal input material, and compress-encoding raw material for the first time.
My biggest deciding factor was reduction of file size, while minimizing visible quality loss.
Most use cases are probably the other way round: biggest factor is avoiding visible quality loss, while minimizing file size.
Last edited by ohnonot (2020-07-17 07:57:38)
Offline