
Video Capture & Edit Guide


Capture and Editing Methods:

(Original, Current, Alternate)


Original Method - ShadowPlay and Freemake Video Converter:


This is going to be a very short section because there is not much to cover. Originally I used ShadowPlay to capture video into ten-minute files at a 50 Mbps bitrate. I would then load these into Freemake Video Converter and edit them down. You do this by pressing the Edit button on the right side of the UI, which brings up the editor window. Here you can play the video, step through frames, and mark where you want a cut to start and end. When you perform the cut, it removes the section you selected.

Once I had the videos cut the way I wanted, I would encode them to mp4. The options here were a 1920x1080 frame size, the h.264 codec with the original framerate, and a bitrate of 15 Mbps (15000 Kbps). For audio, I kept the original channels and sample rate with the AAC codec.

Not much to it, except that if I wanted to capture a screenshot from a video, I would open it in VLC Media Player and use its Take Snapshot option under the Video menu. Not an ideal method, but it worked.




Current Method - OBS and FFmpeg:


I already went through how I capture video with OBS, so this will just look at my process for editing it. First I open the video in Freemake to take advantage of its editing window, even though I no longer do any of the editing with Freemake; any other piece of software that can open the original videos will also work. Once I know where I want to cut a video, I turn to the Cut.bat batch file I made for FFmpeg.


There are a few ways to tell FFmpeg how to cut video, so I have this file set up to use whichever method is appropriate, based on what I tell it. This is why there are so many IF statements in it. At the top of the file is where I set the name of the output subfolder: Cut. Past the variables and some comments is an 'if NOT EXIST' statement, which checks if the subfolder exists, and makes it if necessary. This part is present in many of the batch files, as I said in the earlier section.

After setting the folder name, I have the variables to and length set to 00:00:00. This is because of the IF statements later on that change how FFmpeg cuts the video: they check whether those values are still 00:00:00, and if they are, a different cutting method is used. The next variable set, start, is the time when FFmpeg should begin the cut. In the image it says 00:04:06, so it would cut from four minutes and six seconds onward, removing the first four minutes and six seconds.

The next two variables are both commented out because I wanted to cut the video I was editing all the way to its end. If I did not want that, I would uncomment whichever variable I wished to use and set its value accordingly.

The first commented variable is to and it is set up to tell FFmpeg to cut to that time in the video. Currently it says 00:04:59, so if it were not commented out FFmpeg would cut from four minutes and six seconds to four minutes and 59 seconds, creating a 53 second video.

The second commented variable is length, which tells FFmpeg the desired duration of the output video. This is useful for the Mod Slide videos I have made, because I want the videos that go into them to be the same length.

Both of these variables can be given decimals at the end, if you want to cut to half seconds, which I have noted in a comment a few lines below the variables.

The third variable here is frames. Though I rarely use it, it is possible to tell FFmpeg to cut based on a number of frames. I have it set to 0 here for the same reason to and length are set to 00:00:00 at the top.
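Putting the pieces above together, the top of a Cut.bat along these lines might look like this. This is a sketch reconstructed from the description, not the author's exact file; the variable names match the text, but the comments and times are illustrative:

```batch
:: Cut.bat - drop a video file onto this batch file to cut it with FFmpeg.
:: (Sketch reconstructed from the description above; values are examples.)

:: Name of the output subfolder.
set folder=Cut

:: Leave these at 00:00:00 to skip their cut methods (checked by the IF
:: statements further down).
set to=00:00:00
set length=00:00:00

:: Time to start the cut at (everything before this is removed).
set start=00:04:06

:: Uncomment one of these to stop the cut before the end of the video.
:: Decimals are allowed, e.g. 00:04:59.5 to cut to a half second.
:: set to=00:04:59
:: set length=00:00:53

:: Cut by a number of frames instead of a timestamp (rarely used).
set frames=0

:: Create the output subfolder next to the input file if it does not exist.
if NOT EXIST "%~dp1%folder%" mkdir "%~dp1%folder%"
```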

Now we come to the actual FFmpeg commands, which are contained within IF statements because of the different ways FFmpeg can cut video. The first method is the one I use most and reads:

ffmpeg -i "%~1" -ss %start% -to %to% -c copy -map 0 "%~dp1%folder%\%~nx1"


That first part identifies the input file with the input flag -i. Remember that %~1 is the batch parameter variable for the first file dropped onto the batch file. It is in quotes because there may be spaces in the path and filename.

The second part with the -ss flag tells FFmpeg to seek to the time identified with the start variable. The -to flag tells it when to stop, as I explained above.

The final part of the command tells FFmpeg to copy the codecs (-c copy) into the output, and -map 0 tells it to include every stream from the first input (without it, FFmpeg keeps only one stream of each type). I am not sure if -map 0 is actually needed here, but it does not hurt, and I think it fixed problems I was having when I first wrote the file. If I wanted to copy only one codec, like the audio, the flag would be -c:a copy. With %~dp1%folder%\%~nx1 I am telling it the full output path, including the Cut subfolder, and the desired name and extension, which are the same as the original file's. If I wanted to, I could set the extension to .mp4 and FFmpeg would copy the video into that container, but that is not necessary at this point.
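To make the output path concrete, here is how those batch parameter variables would expand for a hypothetical input file (the path is made up purely for illustration):

```batch
:: Suppose Cut.bat is given "C:\Videos\My Game.mkv" and folder is set to Cut:
::   %~1   -> C:\Videos\My Game.mkv   (full path, surrounding quotes stripped)
::   %~dp1 -> C:\Videos\              (drive letter and path)
::   %~nx1 -> My Game.mkv             (file name and extension)
:: so "%~dp1%folder%\%~nx1" becomes:
::   "C:\Videos\Cut\My Game.mkv"
```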

By using -c copy, the output video will be identical to the original, just shorter, matching the cuts I wanted to make.

You may notice that in two of the cut commands the -ss flag comes before the input file, but in one it comes after. The placement matters: with -ss ahead of the input, FFmpeg seeks quickly by jumping to the nearest keyframe, and when stream copying the cut effectively lands on a keyframe even though the requested start time is recorded in the metadata. With -ss after the input, FFmpeg instead decodes from the beginning and discards frames until the requested timestamp, which is slower but frame-accurate. Keyframe cuts can cause syncing issues with the audio, but thankfully the OBS recordings do not exhibit this. The reason one command has -ss after the input is that the -to flag only works immediately following the -ss flag and cannot go ahead of the input.
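Based on the description above, the three IF-guarded cut commands could look something like this. This is a sketch, not the author's exact file; in particular, -t and -frames:v are my guesses at how the length and frames variables would be passed to FFmpeg (both are real FFmpeg options, but the original file may use something else):

```batch
:: Method 1: cut from %start% to an absolute time. The -to flag has to
:: follow -ss, so here -ss comes after the input.
if NOT "%to%"=="00:00:00" ffmpeg -i "%~1" -ss %start% -to %to% -c copy -map 0 "%~dp1%folder%\%~nx1"

:: Method 2: cut a fixed duration starting at %start% (-ss ahead of the input).
if NOT "%length%"=="00:00:00" ffmpeg -ss %start% -i "%~1" -t %length% -c copy -map 0 "%~dp1%folder%\%~nx1"

:: Method 3: cut a number of video frames starting at %start%.
if NOT "%frames%"=="0" ffmpeg -ss %start% -i "%~1" -frames:v %frames% -c copy -map 0 "%~dp1%folder%\%~nx1"
```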

Now that I have the video cut, I can re-encode it with FFmpeg, and for that I use the Re-encode - Max Rate.bat file I have. This is a more complicated file than I had been using, and it exists because of an amusing issue I encountered a few times: I capture at 30 Mbps with OBS, but there have been some videos FFmpeg wanted to encode at higher bitrates than that, so I made this file to keep a cap of 30 Mbps on the output.


Like the Cut.bat file, I set the name of the subfolder first, as Re-encode. The only other variable I set is the (bit)rate, which is 30000k, the format FFmpeg wants for 30 Mbps. (FFmpeg's k suffix means 1000, so 30000k is exactly 30 megabits per second; only if you count a megabit as 1024 kilobits is there a small difference, and it is not enough to matter.)

After the variables are set and the subfolder is made we come to the FFmpeg command which has quite a bit going on in it:

ffmpeg -i "%~1" -vf "scale=-1:'min(ih,1080)'" -crf 18 -maxrate %rate% -bufsize %rate% -preset slower -movflags faststart "%~dp1%folder%\%~n1.mp4" -n


Like in the Cut file, we identify the input video at the beginning. The next part calls up the video filter scale (with the -vf flag). I have this here because I record at 2048x1152, which not everything likes to work with. This filter scales the video down to a height of 1080 lines, if the original video is taller than that, while maintaining the original aspect ratio. The min(ih,1080) expression compares the input height (ih) of the video with 1080 and returns whichever is smaller. The first part of the scale filter is the width, and by setting it to -1 FFmpeg picks whatever width maintains the original aspect ratio at the new height. (So if I recorded a 4:3 game at 1536x1152, it would be scaled down to 1440x1080, but a 1920x1080 video will not be affected.)

The -crf flag sets the Constant Rate Factor for the output videos. There are a few ways to set the quality of an h.264 video. Freemake used a constant bitrate, but FFmpeg can intelligently vary the bitrate, and CRF is one way to do that. It works by varying the quantization parameter (QP), which sets how much information can be thrown away (which is why lower values result in higher quality, and vice versa). On its own, using the -q flag to set a constant QP would still produce a variable bitrate, because not all scenes require as much data to maintain quality.

With CRF, though, the QP value is varied from sequence to sequence based on the action it contains. When a sequence is not very active, the QP is lowered and less information is thrown away, because this is when the eye is most likely to notice any issues. When the sequence is very active, the QP is raised and more information is thrown away, as this is when the eye is least likely to catch compression artifacts. The effect is to intelligently vary the quality based on the action, without throwing away too much data. I am not certain if this is accurate, but here is a way to think of it: a constant QP means that mathematically all scenes are the same quality, while CRF (varied QP) means that visually all scenes are the same quality. This allows for a greater variance in bitrate, and thus smaller files, without compromising what you see.

I use a CRF of 18 for the review videos, which is considered to be visually lossless, and I am happy with it.

The next two flags, -maxrate and -bufsize, set the maximum bitrate of the output video, capping what the CRF setting would otherwise allow. The -maxrate flag does exactly what you expect, but it only works if a -bufsize is also set. You can use the same value for both flags, as I do.

The next flag, -preset, is very important. When encoding an h.264 video, you can tell the encoder how quickly it should run. If you have the encoder go faster, the resulting file may look worse and be larger (if you are using a variable system like CRF) than if the encoder were told to go slower. In other words, telling the encoder to go slower lets it encode more efficiently. Appropriately, I am using the slower preset here, but there are other presets such as veryfast, fast, medium, slow, veryslow, and more. I have seen some places state that using a slower preset will result in smaller file sizes, but when I was first experimenting with these presets I did not always find that to be true. I did consistently find the slower presets to produce better quality video, even at the same CRF, which is why I describe them as more efficient. Of course, slower presets take longer to encode.

The -movflags faststart flag is not necessary, but it is useful when creating an mp4 file. The mp4 container normally places the metadata of the video, the moov atom, at the end of the file. What this flag does is move that information to the beginning of the file, so playback can begin before the whole file has downloaded. This matters for streaming video, which is probably why YouTube recommends it for uploading.
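Assembled from the pieces above, the whole Re-encode - Max Rate.bat could look roughly like this. This is a sketch based on the description, not the author's exact file:

```batch
:: Re-encode - Max Rate.bat - drop a video onto this file to re-encode it
:: at CRF 18, capped at 30 Mbps. (Sketch based on the description above.)

set folder=Re-encode
set rate=30000k

:: Create the output subfolder next to the input file if needed.
if NOT EXIST "%~dp1%folder%" mkdir "%~dp1%folder%"

:: Scale down to at most 1080 lines, encode with CRF 18 capped at %rate%,
:: and move the moov atom to the front; -n refuses to overwrite.
ffmpeg -i "%~1" -vf "scale=-1:'min(ih,1080)'" -crf 18 -maxrate %rate% -bufsize %rate% -preset slower -movflags faststart "%~dp1%folder%\%~n1.mp4" -n
```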



Alternate Method - ShadowPlay and FFmpeg:


I did not originally intend to write this section, but before finishing this guide I encountered a game that both OBS and ShadowPlay struggled to capture. ShadowPlay was able to produce two usable videos, but because of how it records I had to adapt my normal FFmpeg practices. I am not sure of the exact cause, but cutting the ShadowPlay videos as I would the OBS videos causes the audio to go out of sync. There are at least three possible explanations I can think of, but all that matters is the solution.

I said when talking about the Cut.bat file that FFmpeg will cut to the nearest keyframe in some situations. Technically, when you are copying codecs it is always cutting at the nearest keyframe; it just records in the file's metadata that playback should start and stop between keyframes. (If you want to cut at 3 seconds and the keyframes are at 0, 2, and 4 seconds, it will cut out the keyframe at 0 seconds but keep the one at 2 seconds, because that keyframe is needed to describe the video at 3 seconds.) Codec copying can only keep or drop existing keyframes; only re-encoding creates new ones, so to fix the syncing issue we want to cut and re-encode the video in one step.


Achieving this is really simple, as it just requires adding the flags for cutting to the re-encode command I described earlier. There is a little more to what I did than that, though. When I am cutting OBS recordings, I also keep a text file listing the original file name, the final file name (what you see on YouTube), and the cuts made in it. This way I can return to the original video and make a new cut if I wish to, and if the re-encoding goes badly I can re-run it from the cut version instead of starting over from the original. Cutting and re-encoding at the same time, however, removes these intermediary videos from the process. To simplify things for me, I created individual batch files for each ShadowPlay video, so I can easily re-run them if something goes wrong.

As you can see in this batch file for an Assassin's Creed Syndicate video, I set file and name variables. The file variable is the original file name ShadowPlay gave the video, and the name variable is the final name I give it. By keeping this batch file in the same folder as the original video, I can just run the file and it will cut and re-encode the video for me, as it infers that the batch file's path is the same as the input and output. This is not quite as efficient as the drag-and-drop system I use with the OBS recordings, but it does what I want. (It is much more efficient for me to cut the videos, then drag a bunch onto a re-encode batch file and leave it overnight, than to run a batch file for each video and wait before moving on to the next.) The -y flag at the end of the command tells it to overwrite without asking me if there is already a file in the folder with the same name as the output. An -n flag would tell it not to overwrite.
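As a sketch of what one of these per-video batch files might contain, here is the cut-and-re-encode combined into a single command. The file name, output name, and times below are made up for illustration; the real files name an actual ShadowPlay recording and the cut times for it:

```batch
:: One-off batch file kept next to the original ShadowPlay recording.
:: (Illustrative sketch; file name, output name, and times are invented.)

set file=Assassin's Creed Syndicate 2018.01.01 - 12.00.00.00.mp4
set name=Assassin's Creed Syndicate - Example Clip

:: Cut (-ss/-to) and re-encode in one pass, so new keyframes are created at
:: the cut points and the audio stays in sync; -y overwrites without asking.
ffmpeg -i "%file%" -ss 00:00:30 -to 00:05:30 -vf "scale=-1:'min(ih,1080)'" -crf 18 -maxrate 30000k -bufsize 30000k -preset slower -movflags faststart "%name%.mp4" -y
```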

  1. Video Capture & Edit Guide - Introduction and Software
  2. Video Capture & Edit Guide - Capture and Editing Methods
  3. Video Capture & Edit Guide - Miscellaneous FFmpeg Uses
© 2001-2018 Overclockers Club ® Privacy Policy