I'm running Plex Media Server 1.10.1.4602 on Ubuntu 16.04.
Summary:
Issue 1: Should I be seeing the 1.5x fudge factor in the logs if Deep Analysis has run? Is there anything else I can check there?
Issue 2: Should the 1.5x fudge factor be applied to constant-rate (CBR) audio? The mediainfo tool reports that my optimized file (from video_transcoding, not Plex's built-in Optimize) has a "Bit rate mode" of "Constant" for audio. There are no peaks in CBR, so it seems to me that the factor should not apply here. But from tests comparing the same encoding settings with and without audio, the 1.5x appears to be applied to the audio track's bit rate too.
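In case anyone wants to reproduce that check, here's the query I used, wrapped in a small Python sketch (mediainfo must be installed; Clip.mp4 is a placeholder file name):

import subprocess

# Ask mediainfo for the audio track's bit rate mode.
# %BitRate_Mode% should print "CBR" for constant-rate audio, "VBR" otherwise.
mode = subprocess.run(
    ["mediainfo", "--Inform=Audio;%BitRate_Mode%", "Clip.mp4"],
    capture_output=True, text=True,
).stdout.strip()
print(mode)  # my optimized file reports CBR here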
Issue 3: Why is the required bandwidth being inflated by 1.4x on top of the 1.5x fudge factor?
If this is all legitimate... then with 160 kbps audio, I need to keep my video's average bit rate to 4000÷1.4÷1.5−160 ≈ 1744 kbps. If it's not, and these bugs could be fixed, my video's average bit rate could be (4000−160)÷1.5 = 2560 kbps. That's a significant difference in bit rate, and thus in potential quality.
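To make that arithmetic explicit, here's a quick Python sketch (the 1.5x, 1.4x, and 4 Mbps figures come from my testing in the Details below, not from any Plex documentation):

# Video bit rate budget (kbps) under a 4000 kbps setting with 160 kbps CBR audio.
# As currently observed: 1.4x and 1.5x both applied, audio included in the 1.5x.
print(4000 / 1.4 / 1.5 - 160)   # ~1744.76 kbps
# As I'd expect with the 1.4x gone and CBR audio exempt from the fudge factor:
print((4000 - 160) / 1.5)       # 2560.0 kbps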
Details:
I would like to pre-transcode my videos to allow for Direct Play to mobile devices. This will allow me to handle more simultaneous streams, and I should be able to get better transcoding quality for a given bit rate by using a slower preset.
I initially used the Optimize feature to optimize a couple of videos for mobile. Specifically, I picked "Optimize for Mobile". I expected them to be optimized to 720p @ 4 Mbps, which they were. I set my iPhone 7 (running iOS 11) to a quality level of 720p @ 4 Mbps. I expected this combination to result in Direct Play, or at least Direct Stream.
Unfortunately, it didn't. From what I can see, this is because the bandwidth of the videos is too high. I did some tinkering using donmelton's video_transcoding tool, which you can find here if you're curious: https://github.com/donmelton/video_transcoding
I used a 1-minute clip from the start of a movie. This allowed me to make slight adjustments and test rapidly in order to reverse-engineer the bandwidth calculation. It seems that the calculation is:
average_bitrate_of_file * 1.5 * 1.4
If this is greater than 4000000 bps (my client's 4 Mbps quality setting), Direct Play is rejected. Looking at the big picture, that makes total sense. This leads to the question: where are the 1.5x and 1.4x coming from?
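For reference, here's the decision rule I believe I'm seeing, as a small Python sketch; this is my reconstruction from black-box testing, not actual Plex code, and the constant names are my own:

# Hypothetical reconstruction of the Direct Play bandwidth check.
FUDGE_FACTOR = 1.5             # the VBR "fudge factor" named in the logs
MYSTERY_FACTOR = 1.4           # the unexplained extra multiplier
QUALITY_LIMIT_BPS = 4000000    # my client's 720p @ 4 Mbps setting

def required_bandwidth_bps(avg_bitrate_bps):
    return avg_bitrate_bps * FUDGE_FACTOR * MYSTERY_FACTOR

def can_direct_play(avg_bitrate_bps):
    return required_bandwidth_bps(avg_bitrate_bps) <= QUALITY_LIMIT_BPS

print(can_direct_play(2024468))  # False -- requires ~4251 kbps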
The 1.5x is the "fudge factor", which I understand is intended to account for bursts in bit rate in VBR files. From some references, this may have been 2x in the past? Some references also seem to indicate that the fudge factor should only be applied until Deep Analysis has run on the files. I believe Deep Analysis has run on the Plex-optimized files (though maybe not on my test files while I was doing this), because I get requiredBandwidth values when I look at the XML file info with &includeBandwidths=1 added to the URL. Yet I was still seeing the 1.5x fudge factor in the logs.
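For the record, this is how I'm pulling those values (a sketch; the host, port, token, and metadata ID below are placeholders for your own):

import urllib.request
import xml.etree.ElementTree as ET

# Fetch an item's XML file info with bandwidth values included.
url = ("http://127.0.0.1:32400/library/metadata/12345"
       "?includeBandwidths=1&X-Plex-Token=YOUR_TOKEN")
tree = ET.parse(urllib.request.urlopen(url))

# Print any requiredBandwidth attributes in the response.
for elem in tree.iter():
    if "requiredBandwidth" in elem.attrib:
        print(elem.tag, elem.attrib["requiredBandwidth"])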
The 1.5x fudge factor, while not logged as such, also seems to apply to audio. This doesn't make sense for CBR audio.
So, if we have 4000000 bps to work with, then from the above I should expect that a file with a total bit rate of 4000000÷1.5 = 2666666 bps would Direct Play. However, files below that still do not, and the error message in the logs indicates that the required bandwidth is 1.4 times the 1.5 times the actual bit rate. In other words, I can only Direct Play a file if its average bit rate is less than 4000000÷1.5÷1.4 = 1904761 bps. Where is this 1.4x coming from?
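Inverting the formula makes the gap concrete (same assumed constants as above):

# Max average bit rate (bps) that should Direct Play at a 4 Mbps setting:
print(4000000 / 1.5)        # ~2666666 -- if only the 1.5x fudge factor applied
print(4000000 / 1.5 / 1.4)  # ~1904761 -- what I actually observe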
Attached is a log snippet showing an example of the 1.4x calculation. This test clip has no audio track at all. The overall bit rate (mediainfo --Inform="General;%BitRate%" Clip3.mp4) is 2024468 bps. 2024468 × 1.5 = 3036702 bps, i.e. about 3036 kbps. I'm not sure why that's slightly off from the log, which shows "3031Kbps based on 1.500000x fudge factor." From there, 3031 × 1.4 ≈ 4243, which is approximately the 4245kbps from the error "Required bandwidth is 4245kbps" in the log.
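For anyone checking along, here's that arithmetic as run (input from mediainfo, outputs compared against the log):

# Reproducing the log arithmetic.
overall_bps = 2024468             # mediainfo's overall bit rate for Clip3.mp4
print(overall_bps * 1.5 / 1000)   # 3036.702 kbps; the log says 3031Kbps
print(3031 * 1.4)                 # 4243.4 kbps; the log says 4245kbps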