Video Capture & Edit Guide
Guest_Jim_*
I have mentioned in a few of the reviews I have written that I have changed the process by which I capture and edit gameplay video. Since it may be a topic some of you are interested in or curious about, I thought it would be a good idea to put something together about it.
Originally I used NVIDIA ShadowPlay (external link) to capture video. It is a fairly useful utility included with the GeForce Experience software suite from NVIDIA, but it does have some drawbacks. For actual editing I had been using the Freemake Video Converter (external link), which had been able to use CUDA to accelerate h.264 encoding. It has since lost that capability, but is still useful, if slower.
Due to some of the limitations of ShadowPlay I have switched over to using Open Broadcaster Software (external link) for video capture. It is a bit more advanced and complicated than ShadowPlay, but it is not too difficult to get set up. For editing I use FFmpeg (external link), which is actually a command line tool for working with video, so I have put together a number of batch files to make things easier on me. Like OBS, it is a bit trickier to work with, but it is very powerful and I have found it worth learning.
All of this software is free, by the way. The only technical exception would be NVENC, the hardware encoder powering ShadowPlay and what I use in OBS. It requires a supporting NVIDIA GPU, but OBS offers other encoders as well, though I am less familiar with them.
I think that is enough for an introduction, so time to get to the details.
ShadowPlay is a part of the GeForce Experience utility NVIDIA put together to auto–configure and recommend settings for games, as well as provide automatic driver updating. Not really a big fan of either of those features, but ShadowPlay serves its purpose and serves it well. The only issues ShadowPlay really has, based on my experience, all stem from its simple/easy–to–use design.
To activate ShadowPlay you have to open up the GeForce Experience menu and hit the ShadowPlay button in the upper right. This brings up a separate window with a toggle switch on the left for actually turning it on and off. Back in GeForce Experience there are a number of settings, such as hotkeys and the save location for videos, but the more pertinent ones for this guide are all in this new window. (Curiously, there is no direct way provided to open this window. If you want one, make a shortcut to GeForce Experience and add the flag "-shadowplay" to the end of the Target field. That shortcut will then open the ShadowPlay window directly.)
There are four large buttons for the settings we are interested in here. The first button is for setting the recording mode. You can set it to manual record only, where you have to start and stop it yourself; shadow record, where it keeps a buffer that can be saved with a hotkey; both of these together; or streaming to Twitch. (YouTube streaming is supposed to be coming, but is not available in the version I have installed currently.)
The second button is for setting the length of time for the Shadow buffer. I used 10 minutes because most things seemed to take less time than that.
The third button is for quality, which has Low, Medium, and High presets, and a Custom option. The Custom option allows you to set bitrate from 10 Mbps to 50Mbps, framerate of 30 or 60, and resolution.
The final button is for audio settings, which are no audio, in-game audio (which is just your computer's output, so it will capture more than just the game), and in-game audio with your microphone.
One useful and annoying feature of ShadowPlay is that it will start recording when you open a fullscreen application, like a game. This is useful because it will not constantly keep a Shadow buffer, unless you have Allow Desktop Capture enabled in GeForce Experience. The catch is that non–fullscreen games, like those in borderless windows, will not be captured by ShadowPlay (without Desktop Capture). For most games, this is not that big of an issue, though I do personally prefer borderless window in case the game crashes. Some games I have do not offer a fullscreen option, but that is pretty rare.
Another issue with this fullscreen activation is that if a game crashes, you will not be able to save the Shadow buffer, because it gets wiped when ShadowPlay deactivates. This I chalk up to the simple/easy–to–use design over a more resilient design.
Something else worth mentioning is that the Shadow buffer and manually recording systems do not play together, and even the Shadow buffer does not connect up very well. What I mean by the former comment is that you cannot, with ShadowPlay, join the Shadow buffer and manual recording together. If you activate manual recording and save the buffer, you will have two separate videos with overlapping content. The latter comment is that every time you save the Shadow buffer, it creates a new one, and starting a new buffer takes a little time, so if I saved a 10 minute buffer and then hit the hotkey again a minute later, I would have a 10 minute video and a 1 minute video with a brief gap between them. These behaviors do not represent issues in most cases, but are worth being aware of.
Powering ShadowPlay is NVENC, an h.264 encoder NVIDIA has built directly into its Kepler and newer GPUs. Some newer GPUs offer newer and more capable versions of NVENC, but all the versions will get the job done for most people. NVENC can be accessed by other pieces of software, including OBS, so I still use it even though I have moved away from ShadowPlay.
While NVENC is very useful for capturing real–time gameplay, it does come at the cost of compression efficiency. When I was using ShadowPlay I would record at 50 Mbps, and I have OBS set to record at 30 Mbps because it struggles at 50 Mbps. These high bitrates are necessary because NVENC sacrifices efficiency for speed, so if you do not throw more data at the videos, the quality will noticeably degrade. (Not something I want to accept for reviews.) Re-encoding the videos with Freemake or FFmpeg can significantly reduce the bitrate without sacrificing quality, because they are more efficient encoders.
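As a sketch of what that re-encode can look like with FFmpeg (the CRF value, preset, and file names here are assumptions for illustration, not my exact settings):

```batch
:: Re-encode a high-bitrate NVENC capture with the more efficient libx264
:: encoder; lower CRF means higher quality and larger files
ffmpeg -i "capture.flv" -c:v libx264 -crf 18 -preset slow -c:a copy "capture_small.mp4"
```

The -c:a copy part leaves the audio stream untouched, so only the video gets re-encoded.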
Freemake Video Converter I first found years ago when I was looking for a video converter that would take advantage of CUDA acceleration. It has since lost that capability, but it is still an effective editor with a decent UI. (As I recall, NVIDIA has chosen to deprecate CUDA accelerated encoding in favor of NVENC, and Freemake does not support NVENC at this time.) I do still use Freemake today, but only for finding the points in a video I want to cut at; it is able to accept and export many formats and codecs, and can even rip DVDs and some online videos. For all of the actual video editing, I now exclusively use FFmpeg.
FFmpeg is a command line video editing utility that is very powerful, somewhat common, and can be tricky to learn. The actual commands I use for it I will cover later. This section is just for discussing the software.
After you download a build of FFmpeg, or compile it yourself, you will need to add an Environment Variable pointing to it. If you download a Windows build, there should be a batch file in the folder labeled 'ff–prompt.bat' that will do this for you. If you want to or have to do this manually though, go to your System Properties and select 'Advanced system settings' on the left side. The window it opens should have an Environment Variables button near the bottom. Edit the Path variable in either the User or System variables and add the path to the FFmpeg/bin folder. (For me this is C:\Program Files\ffmpeg\bin.)
You may want to check the Path variable for any other FFmpeg references. For a while I was confused because my computer kept using an older version of FFmpeg than I had installed. It turned out another piece of software (PCMark I think, but I might be mis–remembering) installed a version of FFmpeg for it to use, and added it to the Path variable. As this was listed earlier in the variable list, that version kept getting called instead of the version I installed and wanted. Easy enough to fix, once you figure it out.
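If you suspect the same thing is happening to you, a quick check from a Command Prompt will show which copy Windows finds first:

```batch
:: Lists every ffmpeg.exe found on the Path, in search order; the first one wins
where ffmpeg
:: Shows the version banner of whichever build actually runs
ffmpeg -version
```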
There is a lot you can do with FFmpeg, and I have done more with it than I am going to cover in this guide, so I do encourage you to seek out other resources for more information.
To save myself some time later on, I want to talk about two things consistent across the batch files I have made for FFmpeg commands. First are Batch Parameter Variables, which are amazingly useful. These are what allow me to drag and drop files onto the batch files to execute them. The variable %~1 references the first file dropped, with %~2 referencing the second and so on. You can move down through the list with the SHIFT command, as it assigns %~2 to %~1 and %~3 to %~2, and you get the idea. This allows all of the uses of %~1 to still work while pointing to a new file. If you SHIFT through the last file, %~1 will be empty, so a quick IF statement can check that and exit the batch file.
By modifying the parameter variable you can get different pieces of information out of them. The main ones I use are d, for drive, p for path (without the drive), n for file name, and x for file extension (including the period). You can combine these modifiers too, so if you want the full path to the file, you use %~dp1 or the full filename you want %~nx1.
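A minimal test batch file shows how the parameter variables and SHIFT fit together; the echo lines are purely for illustration:

```batch
@echo off
:: Echo information about every file dropped onto this batch file
:loop
if "%~1"=="" goto :eof

echo Drive and path: %~dp1
echo Name and ext:   %~nx1

:: Move %~2 into %~1, %~3 into %~2, and so on
shift
goto loop
```

Drop a few files onto it and it will print the location and name of each one in turn, exiting once %~1 comes up empty.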
Two other things I do fairly often in the batch files are setting useful variables near the top, so I do not have to hunt through the FFmpeg commands to change them, and having the files create subfolders to save the output files to. This prevents me from altering the original video file until I am done with it. Something else I do that is probably not the best form is that I do not specify the output codec or the encoder. To do this explicitly I would add -c:v libx264 to the command, setting the encoder, but FFmpeg already assumes this, which is why it still works without that flag. (I believe it makes this assumption because FFmpeg picks a default encoder based on the output container, and for mp4 that default is h.264 via libx264.)
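To illustrate those habits, here is a rough sketch of the shape one of these batch files can take; the folder name and quality settings are hypothetical stand-ins, not taken from an actual script of mine:

```batch
@echo off
:: Variables up top so they are easy to find and change
set "OUTDIR=%~dp1edited"
set "QUALITY=-crf 18 -preset slow"

:: Save output into a subfolder so the original file is left untouched
if not exist "%OUTDIR%" mkdir "%OUTDIR%"

:: No -c:v flag here; FFmpeg falls back to its default encoder for the container
ffmpeg -i "%~1" %QUALITY% "%OUTDIR%\%~n1.mp4"
```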
There are also some quirky things I have done within the batch files, but I should leave those until I actually cover the files themselves.
Open Broadcaster Software (OBS) is a much more powerful solution for capturing video than ShadowPlay, but comes at the cost of being more complex. It takes far more than just a few button presses to record video with OBS. For starters you have to create a scene, which is as easy as right–clicking in the Scenes box of the main UI and selecting Add Scene. Next you need to add a source to the scene, and here things get a bit more complicated.
You have several options for the type of source you can add, including text, images, windows, and Game Capture. Naturally this last option is the one we are most interestedted in, but if it does not work, use Window Capture or Monitor Capture. Game Capture is the preferred source for games because it actually hooks into the graphics API to facilitate more efficient capture. It does have the limitation of needing the game to use DirectX 8 or above (though I am not sure about DirectX 12) or OpenGL. I cannot think of a game in my library that would not be supported, but in case you have one or have stability issues, use Window or Monitor Capture.
When you add a new Game Capture source you will have a number of options. At the top of the window is a dropdown list of the applications OBS can hook into, so if the game you want to capture is already open, it should be listed here. If you cannot get to this window while the game is open, say because it is in fullscreen and really hates losing focus, you can select the Use Hotkey option to select the active window. This way you can set up OBS for the new source, open the game, and then hit the hotkey to have OBS hook into it. I have little experience with this method, in part because I do not care if the game crashes when I Alt-Tab at the main menu. So long as the window is still up, OBS can identify it as a source.
Beneath the source selection you will see several more options, most of which I have never bothered with. The one I do use is 'Stretch image to screen,' and by screen it means the capture window. You can set OBS to record at any given resolution, from standards like 1280x720 and 1920x1080 to something completely arbitrary. The stretch option will stretch the source to fill that resolution, and I use it mainly so I do not have to edit the scene as much.
On the main UI there is a button that says Edit Scene, which will be grayed out unless you are recording or have a preview stream going. With Edit Scene you can change the size, placement, and even the crop of the source. If you are just capturing gameplay, you will not need to mess with this, except to make sure the source fills the window (which is why I use the stretch option I mentioned above).
That is how you set up OBS to capture a source, but we still need to look at the settings, such as encoder, bitrate, and more. As I said, OBS is powerful but more complicated.
One useful feature of OBS is the ability to have multiple settings profiles that you can swap between. I actually have several, because sometimes I neither need nor want very high quality recordings, and some profiles are for streaming. They are simple to create by changing the name in the Settings window and pressing Add, or by using New or Duplicate in the Profiles menu from the main UI.
The meaty settings you want to mess with are on the Encoding, Broadcast Settings, Video, and possibly Advanced pages. To keep this guide from exploding in size, I am going to just focus on the most important options. You can always visit the OBS website or forums for more information.
The Encoding page is where you set the encoder, and the options are x264, Quick Sync, and NVIDIA NVENC. The x264 option is a software encoder so you will want a powerful CPU that can handle live encoding while you play a game, but it can be more efficient at a given bitrate. Quick Sync and NVENC are both hardware encoding solutions from Intel and NVIDIA, respectively.
Beneath the Encoder options are the settings for quality, including Max Bitrate. As I said earlier, I use 30 Mbps (30000 Kbps in OBS) so that is the option there, and I also have it set to use CBR, or Constant Bit Rate. I am not sure if that setting has much impact with NVENC, but I use it anyway because I would rather the original recording use too much data than too little.
Below the Video Encoding settings are the Audio Encoding settings and all that really matters there is that I use AAC. The other options you can have set as you wish, or as is recommended elsewhere. (YouTube recommends using AAC and a bitrate of 384 Kbps (192 Kbps x2) for stereo audio.)
The next page is the Broadcast Settings page, and for reviews I have the Mode set to File Output Only. This results in just three options being listed: File Path, Replay Buffer length, and Replay Buffer File Path. The file path options should be self–explanatory (and if you mouseover the fields you will get a pop–up of the variables you can use). The Replay Buffer is OBS's equivalent to Shadow recording in ShadowPlay. You set the number of seconds you want it to keep buffered, and OBS tells you about how much RAM that will take, since it keeps the buffer in your system's memory. I use a buffer length of 10 minutes, but you can use less, especially if you have less RAM than I do. (Ten minutes takes over 2 GB of memory; at 30 Mbps, 600 seconds of video alone works out to roughly 2.25 GB. I have 32 GB, so I have plenty to spare.)
The Replay Buffer in OBS is never dumped, unlike in ShadowPlay, until you actually stop the buffer. If you hit the hotkey to save the Replay Buffer (which you set on the Hotkeys page) multiple times, you will get multiple files and each one will be the full length of the buffer, so they can overlap. You can also set up a hotkey to start recording from the Replay Buffer, which will save the buffer to a file and then keep recording more onto it. I have not used this capability much, but it is useful to have.
You may notice I have the file outputs set to use the FLV container instead of mp4. This is because many months ago I had issues with OBS crashing, and if it crashed while recording, an mp4 would be corrupted and I would not be able to recover it. The FLV container is much more resilient, and if OBS crashes the file can still be used. The video is still encoded using h.264, just as if I had saved it as an mp4, and you can save to whichever format you wish. Just keep in mind that FLV is a less common container and not everything will work with it. (Freemake and FFmpeg both do though.)
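When something does need an mp4, FFmpeg can swap the container without re-encoding, since the streams inside are already h.264 and AAC. Something along these lines should work (the file names are placeholders):

```batch
:: Stream copy: move the streams from the FLV container into mp4 untouched,
:: which is nearly instant since nothing gets re-encoded
ffmpeg -i "recording.flv" -c copy "recording.mp4"
```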
The Video page is where you set the resolution of the output video, can apply downscaling to it, and even set the FPS. I use my monitor's native resolution of 2048x1152 without any downscaling and an FPS of 60. That is all I have for that page, as I have little reason to touch the other settings. (The Video Adapter option is important if your system has multiple GPUs it switches between, like many laptops, and the option should match the GPU actually rendering the game.)
The next page I want to talk about a little is the Advanced page. There are a lot of options here that you do not need to worry about and that I am not going to talk about. All I want to mention is that I do have Multithreaded Optimizations enabled, the NVENC preset is High Quality with a High encoding profile, the Keyframe Interval is set to Auto or 0, and I have it set to Constant Frame Rate (CFR). Having CFR enabled is useful for editing videos, in case the editor does not work well with variable framerate videos.
As for every other setting, I either do not use it, it is pretty easy to figure out on your own, or it is something you may want to look at separately anyway.