Required packages
NSR - Screen Recorder contains all the files you need to work. You do not need additional packages or tools to use the plugin.
Features
High-Speed:
Engineered and extensively optimized for superior performance.
Capture Everything:
Capture any visual content that can be transformed into a texture, whether it's a gaming interface, user interface, camera feed, or texture.
Tailored Resolutions:
Capture videos with resolutions as high as Full HD (1920x1080) or even higher if your device supports it.
Augmented Reality Support:
The package provides complete compatibility with ARFoundation, ARCore, ARKit, and Vuforia.
Concurrent Recording:
The package ensures thread safety, enabling recording in worker threads to enhance performance even further.
Lightweight Integration:
The API is purposefully designed to minimize unnecessary additions or extra burden on your project.
Components:
Universal Video Recorder | The main component that creates a connection between the Unity Engine and the screen recording plugins (Android & iOS & macOS & Windows).
Screen Recorder | The main component that creates a connection between the Unity engine and the screen recording framework. (Only iOS!)
Graphic Provider | The main system that manages subsystems, parameters, and the way images are created and saved.
Microphone Audio Recorder | A component that creates a connection between the Unity Engine and audio recording plugins and provides the ability to create audio files (Android & iOS & macOS & Windows).
Platform support
NSR - Screen Recorder is a cross-platform solution that supports most popular platforms.
iOS/iPadOS version:
Android version:
macOS version:
Windows version:
Note
To record video in Unity Editor, we use native Unity tools that have been available since Unity 2021.2+.
Limitations
NSR - Screen Recorder itself imposes no limitations; any restrictions (such as the maximum resolution of the recorded video) depend on the system on which you plan to run the project.
Samples
NSR - Screen Recorder_Universal | An example scene that demonstrates how the plugin works with the Universal Video Recorder and basic settings.
NSR - Screen Recorder_Dynamic | An example scene that demonstrates how the plugin works with the Universal Video Recorder in a dynamic example scene.
NSR - Screen Recorder_iOS_Internal | An example scene that demonstrates how the plugin works with the iOS Screen Recorder and basic settings.
NSR - Screen Recorder_SimpleCamera | An example scene that demonstrates how the plugin works with the Universal Video Recorder and a simple implementation of the device's camera image.
NSR - Screen Recorder_Audio | An example scene that demonstrates how the plugin works with the Universal Video Recorder and audio sources.
NSR - Screen Recorder_RenderTexture | An example scene that demonstrates how the plugin works with the Universal Video Recorder and a RenderTexture.
NSR - Screen Recorder_FileManager (Share_SaveToGallery) | An example scene that demonstrates how the plugin works with a custom file manager and the share & save-to-gallery utilities.
NSR - Screen Recorder_Record UI | An example scene that demonstrates how the plugin records the screen with or without the UI.
NSR - Screen Recorder_Transparent video (alpha channel) | An example scene that demonstrates how the plugin records transparent (alpha channel) video in the Unity Editor.
NSR - Screen Recorder_Audio_Only | An example scene that demonstrates how the plugin records an audio file in a built app.
NSR - Screen Recorder_Dynamic Camera Changes | An example scene that demonstrates how the plugin dynamically switches cameras while recording video.
NSR - Screen Recorder_MultiScene | An example scene that demonstrates how the plugin records video across multiple scenes.
NSR - Screen Recorder_Watermark | An example scene that demonstrates how the plugin adds a watermark.
Install NSR - Screen Recorder
To install NSR - Screen Recorder, simply import the package into your project.
Note
Make sure that the "Add to embedded binaries" option is enabled for the StcCorder.framework, NSR.framework and the StcCorder.bundle library in the editor inspector. (iOS/iPadOS/macOS)
Scene setup
Each scene in your application must have one mandatory GameObject: Universal Video Recorder (Cross-platform) or Screen Recorder (iOS/iPadOS).
The Universal Video Recorder (Cross-platform) or Screen Recorder (iOS/iPadOS) GameObject initializes and controls the plugin's actions on the target platform. If one of these GameObjects is missing from the scene, video recording will not work properly.
To create a Universal Video Recorder (Cross-platform) or Screen Recorder (iOS/iPadOS), right-click in the Hierarchy window and choose one of the following options from the shortcut menu.
Silver Tau > NSR > Universal Video Recorder or Screen Recorder (iOS)
After adding the NSR - Screen Recorder to the scene, the hierarchy window will look like the one below.
This is the default scene setup, but you can rename or change the parentage of the GameObjects to suit your project's needs.
To save files on the iOS/iPadOS platform to Photos, you need special permissions. You can edit the description of the permissions or remove them (clear the input fields) as needed.
If you want to record screen content without UI content, you need to (Built-in Render Pipeline):
- The UI Canvas should display its content on top of the camera. To do this, set the Canvas Render Mode to Screen Space - Overlay. This keeps the UI out of the content that your main camera renders in the scene.
- Set the recording layers you need in the Universal Video Recorder script. For example, remove the UI layer.
Everything is now set up to record the device screen without UI content.
Universal Render Pipeline
No special settings are required when using the Universal Render Pipeline. The whole process is automated.
If you want to record screen content without UI content, you need to:
- The UI Canvas should display its content on top of the camera. To do this, set the Render Mode to Screen Space - Overlay in the Canvas settings.
- Add a camera to the main camera stack that renders only the UI layer.
- Set the recording layers you need in the Universal Video Recorder script. For example, remove the UI layer.
Everything is now set up to record the device screen without UI content.
Note
If you plan to use the Universal Render Pipeline and the example scenes, don't forget to change the materials from the "Standard" shader to "Lit".
Post Processing Stack
The NSR - Screen Recorder plugin is fully compatible with the Post Processing Stack Unity Engine for any type of rendering.
If you use an extended range of colors, such as the Bloom effect or Emissive Materials, be sure to enable the "HDR" option for Universal Video Recorder (Cross-platform) or Graphic Provider (Cross-platform).
Note
HDR (high dynamic range) is used for a wide range of colors. Use this option when using HDR colors for materials or post-processing (e.g., Bloom effect, Emissive Materials, etc).
Universal Video Recorder (Cross-platform)
The main component that creates the connection between the Unity engine and the NSR framework (library).
This is a cross-platform script that provides the ability to record video (Android & iOS & macOS & Windows).
Actions
Use this to create some dynamic functionality in your scripts. Unity Actions allow you to dynamically call multiple functions.
onCompleteCapture | An action that is called after a video is successfully created.
onOutputVideoPathUpdated | An action that is called after a video is successfully created and returns the updated output path of the video file.
onFrameRender | An action that is triggered every frame while recording a video. The frame is provided before it is added to the video recording, so it can be modified.
Tip
If you notice a strong drop in frame rate while recording video on the target device, we recommend using the "frameSkip" parameter and setting it to 2.
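A minimal sketch of applying this, in the same fragment style as the examples later in this section (it assumes frameSkip is exposed as an option on UniversalVideoRecorder, like the other recorder options shown below):
{
    private void StartRecordingWithFrameSkip()
    {
        // Assumption: frameSkip is a recorder option; a value of 2 captures every second frame.
        UniversalVideoRecorder.frameSkip = 2;
        UniversalVideoRecorder.StartVideoRecorder();
    }
}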
You can use RenderTexture to record video. When you record video using this method, the video resolution and settings will depend on the render texture.
Tip
If the target recording camera has an active RenderTexture, video recording will automatically start using RenderTexture. This is done to automatically prevent video recording errors.
To record video using RenderTexture, you need to enable the "Use Render Texture" option and specify a render texture to be recorded in Universal Video Recorder.
Note
If you use the extended HDR color range to write to RenderTexture, remember to set the color format you need for rendering the texture, for example, R32G32B32A32_SFLOAT.
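For illustration, here is a small sketch that creates a RenderTexture with a floating-point color format using standard Unity APIs and renders a camera into it; the texture itself is still assigned to the recorder via the "Use Render Texture" option described above:
using UnityEngine;
using UnityEngine.Experimental.Rendering;

public class HdrRenderTextureSetup : MonoBehaviour
{
    [SerializeField] private Camera targetCamera;

    public RenderTexture HdrTexture { get; private set; }

    private void Awake()
    {
        // Create a RenderTexture with an HDR-capable color format (R32G32B32A32_SFloat).
        var descriptor = new RenderTextureDescriptor(1920, 1080, GraphicsFormat.R32G32B32A32_SFloat, 24);
        HdrTexture = new RenderTexture(descriptor);
        HdrTexture.Create();

        // Render the camera into this texture so the recorder can capture HDR colors from it.
        targetCamera.targetTexture = HdrTexture;
    }
}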
You can control the video resolution depending on the device's resolution using the Video Resolution enum.
Tip
If the video is portrait or landscape, the resolution will be calculated automatically.
Original | Device screen resolution
UHD | 2160p (3840 x 2160)
QHD | 1440p (2560 x 1440)
FHD | 1080p (1920 x 1080)
HD | 720p (1280 x 720)
SD | 480p (854 x 480)
LD | 360p (640 x 360)
VHS | 240p (426 x 240)
LR | 144p (256 x 144)
Note
If you use RenderTexture, the video resolution will depend on the resolution of the RenderTexture.
Starting with plugin version 1.7+, you can set custom values for video resolution.
Tip
When using a custom video resolution, the orientation of the device will not matter. You can shoot horizontal video in portrait mode or vice versa.
In graphics terminology, “alpha” is another way of saying “transparency”. Alpha is a continuous value, not something that can be switched on or off.
The lowest alpha value means an image is fully transparent (not visible at all), while the highest alpha value means it is fully opaque (the image is solid and cannot be seen through). Intermediate values make the image partially transparent, allowing you to see both the image and the background behind it simultaneously.
Tip
For best results, make the camera background a solid color; black is recommended.
The .webm file format has a specification refinement that allows it to carry alpha information natively when combined with the VP8 video codec. This means any Editor platform can read videos with transparency with this format.
Note
Alpha channel videos are supported for .webm and .mp4 formats.
If you want to record the screen content along with the UI content, the Canvas must be rendered by the camera.
To do this, do the following:
- Enable the InputUILayer option.
- In the Canvas settings, set the Render Mode to Screen Space - Camera or World Space. This allows the UI to be part of the content that your main camera renders in the scene.
If you want to record the screen content without the UI content, the Canvas must display its content on top of the camera.
To do this, do the following:
- Turn off the InputUILayer option.
- In the Canvas settings in the Inspector, set the Render Mode to Screen Space - Overlay.
Note
If you plan to change the state of the UI layer in real time, use the UniversalVideoRecorder.ConfigInputUILayer parameter and change the Render Mode of the target canvas.
For an example, see the NSR - Screen Recorder_Record UI.
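A minimal sketch of such a runtime switch (it assumes ConfigInputUILayer is a boolean option, and uses a canvas reference you assign yourself):
{
    [SerializeField] private Canvas targetCanvas;

    // Include the UI in the recording: render the canvas through the camera and enable the UI layer input.
    private void EnableUIRecording()
    {
        targetCanvas.renderMode = RenderMode.ScreenSpaceCamera;
        targetCanvas.worldCamera = Camera.main;
        UniversalVideoRecorder.ConfigInputUILayer = true;
    }

    // Exclude the UI from the recording: render the canvas as an overlay and disable the UI layer input.
    private void DisableUIRecording()
    {
        targetCanvas.renderMode = RenderMode.ScreenSpaceOverlay;
        UniversalVideoRecorder.ConfigInputUILayer = false;
    }
}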
If you want to create a separate audio file when recording a video, you can use the "Separate audio file" function.
To do this, do the following:
- Enable the Record Separate Audio File option.
- Customize the path and name.
Saving the video and audio file is fully automated and has options for custom changes. Below are some examples of how a file is saved, its path, name, and how you can customize it.
Note
You can specify the path and name for the output files when you start recording video using the StartVideoRecorder(var path, var name, …) function.
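For example (a sketch; it assumes the remaining parameters indicated by "…" are optional and left at their defaults):
{
    private void StartRecordingToCustomFile()
    {
        // Hypothetical folder name; any writable path can be used.
        var path = Path.Combine(Application.persistentDataPath, "Recordings");
        UniversalVideoRecorder.StartVideoRecorder(path, "my_video");
    }
}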
Automatic saving
Automatic file saving is designed to avoid crashes when the file path and name are empty and to simplify the saving process. The default path for automatic saving is Application.persistentDataPath.
The name of the output file for automatic saving is formed as follows:
video + time + format.
Example: video_yyyy_MM_dd_HH_mm_ss_fff.mp4.
Custom saving
Custom file saving allows you to set any path or name for the output file.
To change the output path of the video file, use the “CustomOutputVideoFilePath” option.
{
private void Run()
{
UniversalVideoRecorder.CustomOutputVideoFilePath = Path.Combine(Application.persistentDataPath, "New folder");
UniversalVideoRecorder.StartVideoRecorder();
}
}
Note
If the output path is empty, automatic path generation will be applied (Application.persistentDataPath).
To change the name of the output video file, use the CustomOutputVideoFileName option.
{
private void Run()
{
UniversalVideoRecorder.CustomOutputVideoFileName = "new name";
UniversalVideoRecorder.StartVideoRecorder();
}
}
Note
If the output file name is left empty, automatic name generation will be applied (video + time + format).
To dynamically change cameras during video recording, add the cameras you need to the “Cameras” list in the Universal Video Recorder before you start recording. Once the list is created, the last camera that was enabled is the one that will be recorded. To switch cameras, simply enable or disable the camera you need.
Note
An example of using dynamic camera changes can be found in the NSR - Screen Recorder_Dynamic Camera Changes scene.
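A minimal sketch of switching cameras while recording (it assumes both cameras have already been added to the “Cameras” list in the Universal Video Recorder inspector):
{
    [SerializeField] private Camera mainCamera;
    [SerializeField] private Camera secondaryCamera;

    // The last enabled camera from the "Cameras" list is the one that gets recorded.
    private void SwitchToSecondaryCamera()
    {
        mainCamera.enabled = false;
        secondaryCamera.enabled = true;
    }

    private void SwitchToMainCamera()
    {
        secondaryCamera.enabled = false;
        mainCamera.enabled = true;
    }
}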
{
private void StartRecording()
{
UniversalVideoRecorder.StartVideoRecorder();
}
private void StopRecording()
{
UniversalVideoRecorder.StopVideoRecorder();
}
}
{
private void PauseRecording()
{
UniversalVideoRecorder.PauseVideoRecorder();
}
private void ResumeRecording()
{
UniversalVideoRecorder.ResumeVideoRecorder();
}
}
Record sound from a microphone? This parameter indicates whether the sound from the microphone will be recorded.
Note
For correct microphone recording on iOS devices, you need to enable the following settings in PlayerSettings -> Player -> Other Settings:
Prepare iOS for Recording - Enable this option to initialize the microphone recording APIs. This lowers recording latency, but it also re-routes iPhone audio output via earphones.
Force iOS Speakers when Recording - Enable this option to send the phone’s audio output through the internal speakers, even when headphones are plugged in and recording.
{
private void RecordMicrophone()
{
UniversalVideoRecorder.recordMicrophone = true;
}
}
Note
Remember, if you use microphone and audio source recording at the same time, you may get echo when recording the microphone.
To solve this, you can use headphones.
{
private void RecordAllAudioSources()
{
//Set the Audio Listener.
UniversalVideoRecorder.audioListener = audioListener;
UniversalVideoRecorder.recordAllAudioSources = true;
}
}
{
private void RecordOnlyOneAudioSource()
{
//Set the Audio Source.
UniversalVideoRecorder.targetAudioSource = targetAudioSource;
UniversalVideoRecorder.recordOnlyOneAudioSource = true;
}
}
This is a system that allows you to manually add any number of audio sources to the recording. You can add any Audio Source or Audio Listener to it.
Note
If you are using Audio Receiver Mixer, the basic options Record Microphone, Record All Audio Sources and Record Only One Audio Source will be ignored. You can disable them. You need to add Audio Receiver to your audio source and add it to the list.
Example of use:
As an example, we will create a system for recording the scene's audio and a custom microphone using these new features. The steps and additional instructions are described below.
1. Let's enable the functionality to use the Audio Receiver Mixer.
To do this, go to the Universal Video Recorder -> Audio Settings -> enable the “Audio Receiver Mixer” option.
2. Now we can add any AudioSource to the scene.
We plan to record the Audio Listener, so it will be responsible for all audio sources in our scene. Find or add an Audio Listener component in your scene and add an “Audio Receiver” component to it.
3. Now let's create a microphone.
It will have the functionality of turning on when you start recording video and turning off when you finish.
using SilverTau.NSR.Recorders.Video;
using UnityEngine;
using UnityEngine.Audio;
namespace SilverTau.NSR.Samples
{
public class MicrophoneMixerInput : MonoBehaviour
{
// AudioSource to handle microphone input
private AudioSource _micSource;
private void Start()
{
// Initialize the AudioSource
_micSource = gameObject.GetComponent<AudioSource>();
_micSource.mute = true;
UniversalVideoRecorder.Instance.onStartCapture += OnStartCapture;
UniversalVideoRecorder.Instance.onStopCapture += OnStopCapture;
}
private void OnStopCapture()
{
Microphone.End(null);
_micSource.mute = true;
}
private void OnStartCapture()
{
Run();
}
// Captures the microphone input, initializes the AudioSource, and starts playback.
private void Run()
{
_micSource.clip = Microphone.Start(null, true, 10, 44100); // or 48000
_micSource.loop = true; // Enable looping for continuous audio playback
// Wait for the microphone to initialize before playing
while (!(Microphone.GetPosition(null) > 0)) {}
_micSource.Play(); // Play the microphone audio
_micSource.mute = false;
}
}
}
4. Now we need to add an Audio Source and an Audio Receiver to the GameObject in the scene that holds the new microphone component.
5. After that, we need to go to the newly created Audio Receiver and enable the “IsMute” option.
This feature will mute the audio in the scene but still record it to the audio track. When enabled, the audio buffer will be cleared, resulting in silence in the scene.
6. Now we need to add Audio Receiver components to the list of settings in Universal Video Recorder.
Great! Now you're ready to record with your custom components and audio settings.
{
private async void CurrentVideoOutputPath()
{
var path = await UniversalVideoRecorder.GetVideoOutputPath();
}
}
{
private void GetVideoOutputPath()
{
//Get the current video output path.
var path = UniversalVideoRecorder.VideoOutputPath;
}
}
Screen Recorder (iOS/iPadOS)
The main component that creates the connection between the Unity engine and the NSR plugin. (Only iOS)
This is a script that allows you to record video using RPScreenRecorder and only for the iOS/iPadOS platform.
Actions
Use this to create some dynamic functionality in your scripts. Unity Actions allow you to dynamically call multiple functions.
recorderStart | An action that signals the start of screen recording.
recorderStop | An action that signals when the screen recording stops.
recorderShare | An action that signals that you have shared a recording.
recorderError | An action that signals an error when recording the screen.
recorderShareError | An action that signals an error when you share a recording.
onRecorderStatus | An action that is called when the screen recording status changes.
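For example, you can subscribe to these actions to update your UI. The sketch below assumes the actions are exposed statically on ScreenRecorder, like the methods in the examples that follow, and that the handlers are parameterless; adjust the signatures to match the actual delegates:
{
    private void OnEnable()
    {
        ScreenRecorder.recorderStart += OnRecorderStart;
        ScreenRecorder.recorderError += OnRecorderError;
    }

    private void OnDisable()
    {
        ScreenRecorder.recorderStart -= OnRecorderStart;
        ScreenRecorder.recorderError -= OnRecorderError;
    }

    private void OnRecorderStart()
    {
        Debug.Log("Screen recording started.");
    }

    private void OnRecorderError()
    {
        Debug.Log("Screen recording error.");
    }
}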
Examples of use:
Initialize/Dispose framework:
{
//A method that initializes the framework.
private void Init()
{
ScreenRecorder.Initialize();
}
//A method that disposes the framework.
private void Dispose()
{
ScreenRecorder.Dispose();
}
}
Start/Stop video recording:
{
//A method that starts recording the screen.
private void StartRecording()
{
ScreenRecorder.StartScreenRecorder();
}
//A method that stops recording the screen.
private void StopRecording()
{
ScreenRecorder.StopScreenRecorder();
}
}
Share:
{
//A method that allows you to share.
//"path" > Path to video. If the value is null, the value of the last recorded video is taken.
private void Share(string path)
{
if (string.IsNullOrEmpty(path)) return;
ScreenRecorder.Share(path);
}
}
Save video to Photos:
{
//A method that allows you to save video file to Photos Album.
//"path" > Path to video. If the value is null, the value of the last recorded video is taken.
private void SaveVideoToPhotos(string path)
{
if (string.IsNullOrEmpty(path)) return;
ScreenRecorder.SaveVideoToPhotosAlbum(path);
}
}
Update settings:
{
//A method that helps you update the settings for recording live video.
//"microphone" > Microphone status.
//"popoverPresentation" > State of the popover presentation.
//"saveVideoToPhotosAfterRecord" > Save video to a photo after recording?
private void UpdateSettings(bool microphone, bool popoverPresentation, bool saveVideoToPhotosAfterRecord)
{
ScreenRecorder.UpdateSettings(microphone, popoverPresentation, saveVideoToPhotosAfterRecord);
}
}
Graphic Provider (Cross-platform)
The main system that manages subsystems, parameters, and the way images are created and saved. The root system is the main and unifying one for all subsystems.
Scenes:
NSR - Screen Recorder_Graphic | An example scene that demonstrates how the plugin works with the Graphic Provider and basic settings.
Examples of use:
Create a Shared Graphic:
{
[SerializeField] private GraphicProvider graphicProvider;
private void CreateImage()
{
graphicProvider.CreateImage();
}
}
Get a Shared Graphic:
{
[SerializeField] private GraphicProvider graphicProvider;
private void GetImage()
{
var sharedGraphic = graphicProvider.GraphicSubsystems.First().sharedGraphic;
}
}
screenshotLayerMasks | Layers that will be displayed when you take a screenshot.
filePath | The part of the original path that will be used to save the image.
applicationDataPath | The main path where the images will be stored.
encodeTo | Image encoding format.
HDR | HDR (high dynamic range) is used for a wide range of colors. Use this option when using HDR colors for materials or post-processing (e.g., Bloom effect).
imageQuality | Image quality. Only for the .jpg format.
addDateTimeToGraphicName | A parameter that appends the date and time to the image name.
dateTimeFormat | Custom time format for screenshots.
graphicName | A parameter that gives a custom name to the image.
autoClearMemory | A parameter that automatically clears memory after the subsystem creates an image.
autoSaveSharedGraphic | A parameter that automatically saves images after they are created by the subsystem.
deleteImageAfterSave | A parameter that automatically deletes images after they are saved.
imageSize | Sets the divider for the output image from the XR camera. 1 is the default value, which represents the original image size.
changeImageSizeIfWidthMore | Parameter to check the maximum output image size by width. If the image is larger than this value, the output image divider "imageSize" will be used.
changeImageSizeIfHeightMore | Parameter to check the maximum output image size by height. If the image is larger than this value, the output image divider "imageSize" will be used.
Get an output path for shared graphics:
public class CustomScript : MonoBehaviour {
private void GraphicProcess(GraphicProvider graphicProvider) {
if(graphicProvider.GraphicSubsystems == null) return;
var graphicSubsystem = graphicProvider.GraphicSubsystems.First();
if (graphicSubsystem == null) {
Debug.Log("Graphic subsystem is missing.");
return;
}
var sharedGraphic = graphicSubsystem.sharedGraphic;
if (sharedGraphic == null) {
Debug.Log("Shared Graphic is missing.");
return;
}
graphicSubsystem.OnSharedGraphicSaved = path =>
{
// Output file path => path
WriteGraphicInfo(sharedGraphic);
};
}
private void WriteGraphicInfo(SharedGraphic sharedGraphic) {
Debug.Log(string.Format("Shared Graphic: {0}", sharedGraphic.id));
Debug.Log(string.Format("Name: {0}", sharedGraphic.name));
Debug.Log(string.Format("Output path: {0}", sharedGraphic.outputPath));
}
}
Graphic Settings
A settings object that holds the general settings of the system. Settings can be changed both at runtime and in the Unity Editor, and the settings object can be swapped out whenever you need.
Graphic Subsystem
An abstract class that is the basis for creating a subsystem that provides the original image. With this class, you can create your own subsystems or modify existing subsystems to meet your needs.
Microphone Audio Recorder (Cross-platform)
A component that creates a connection between the Unity Engine and audio recording plugins and provides the ability to create audio files (Android & iOS & macOS & Windows).
This is a module that allows you to record audio and create an audio output file.
Actions
Use this to create some dynamic functionality in your scripts. Unity Actions allow you to dynamically call multiple functions.
onCompleteCapture | An event indicating that the recording is complete.
onErrorCapture | An event that indicates that an error has occurred.
sampleBufferDelegate | An event that allows you to get the current sample buffer.
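A sketch of subscribing to these events (it assumes they are exposed on the MicrophoneAudioRecorder component instance, as in the recording example later in this section, and that onCompleteCapture is parameterless; adjust to the actual delegate signatures):
{
    [SerializeField] private MicrophoneAudioRecorder microphoneAudioRecorder;

    private void OnEnable()
    {
        microphoneAudioRecorder.onCompleteCapture += OnCompleteCapture;
    }

    private void OnDisable()
    {
        microphoneAudioRecorder.onCompleteCapture -= OnCompleteCapture;
    }

    private void OnCompleteCapture()
    {
        Debug.Log("Audio recording complete.");
    }
}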
Properties
Description of the main parameters and options that will help you manage the module.
audioFormats | The format of the output audio file.
HeaderSize | The header size for the audio file. Be careful when changing it. The default value is 44.
setAutoFrequency | Automatic frequency detection.
frequency | The frequency for the audio file. Be careful when changing it. The default value is 48000.
setAutoChannels | Automatic detection of the number of channels.
channels | The number of channels for the audio file. Be careful when changing it. The default value is 2.
computeRMS | Enable/disable RMS value calculation.
computeDB | Enable/disable decibel value calculation.
bufferWindowLength | The size of the buffer window for calculating RMS and decibel values.
CurrentRMS | Current RMS value.
CurrentDB | Current decibel value.
CurrentAudioInputComponent | The current recording settings of the AudioInputComponent.
GetAudioSource | The Audio Source used to store the microphone input. An AudioSource component is required by default.
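For example, CurrentRMS can drive a simple input level meter. This is a sketch; it assumes computeRMS is enabled on the component and that the RMS value falls in a 0..1 range suitable for a UI slider:
{
    [SerializeField] private MicrophoneAudioRecorder microphoneAudioRecorder;
    [SerializeField] private UnityEngine.UI.Slider levelMeter;

    private void Update()
    {
        // Map the current RMS value to a UI slider while recording.
        levelMeter.value = microphoneAudioRecorder.CurrentRMS;
    }
}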
Examples of use:
Start/Stop audio recording:
{
[SerializeField] private MicrophoneAudioRecorder microphoneAudioRecorder;
private string _currentMicrophoneDevice;
private bool _isRecording;
// The function that starts recording audio.
private void StartRecording()
{
// Get a list of available microphone devices.
var microphoneDevices = microphoneAudioRecorder.GetMicrophoneDevices();
// For recording, we'll use the first microphone in the list.
_currentMicrophoneDevice = microphoneDevices.Length > 0 ? microphoneDevices[0] : null;
// Start the audio recording process.
microphoneAudioRecorder.StartRecording(_currentMicrophoneDevice);
_isRecording = true;
}
// The function that stops recording audio.
private void StopRecording()
{
if (!_isRecording) { return; }
// Create and verify the output path for the audio file.
var audioFilePath = Path.Combine(Application.persistentDataPath, "NSR - Screen Recorder", "Audio");
if (!Directory.Exists(audioFilePath))
{
Directory.CreateDirectory(audioFilePath);
}
// Create a name for the audio file (you don't need to enter a file extension).
var audioFileName = DateTime.UtcNow.ToString("yyyy_MM_dd_HH_mm_ss_fff");
// Stop the process of creating an audio file.
// The output audio file will be automatically saved to your path.
microphoneAudioRecorder.StopRecording(audioFilePath, audioFileName);
_isRecording = false;
}
}
Device permissions
Starting with version 1.6.0 of the plugin, the ability to limit camera and microphone usage permissions has been added. You can now use the plugin without requesting permission to use the microphone and camera, for example, if you only want to record video.
In order to remove the forced permissions for your application, you need to add the symbols you need to Scripting Define Symbols in Player Settings -> Other Settings.
Note
Scripting Define Symbols:
NSR_MICROPHONE_DISABLE - allows you to completely disable the device's microphone (for the plugin).
NSR_CAMERA_DISABLE - allows you to completely disable the device's camera (for the plugin).
Watermark
Starting with version 1.7.2 of the plugin, the ability to process frames before sending them to the recorder has been added. The NSR - Watermark component is responsible for applying a watermark to each frame of the video recorded with UniversalVideoRecorder.
This component allows you to customize the appearance of the watermark using a shader, texture, size, position, and transparency. It also supports automatic scaling of the watermark to fit the aspect ratio of the texture.
To add a basic module to a scene, you need to do the following:
- In the scene hierarchy, add a component from the quick menu (or manually).
- In the Component inspector, add the basic elements.
Note
If you encounter black frames when recording video on Android, please check your Graphics API. We recommend setting the Auto Graphics API option.
1. Initialization:
- When the component is enabled (OnEnable), it creates an instance of WatermarkRenderer with the provided shader and parameters.
- It subscribes to the onFrameRender event of the UniversalVideoRecorder.
2. Watermark Rendering:
- On every video frame, OnFrameRender is triggered, applying the watermark texture on top of the video frame.
- If adjustSizeToAspect is enabled, the watermark size is automatically adjusted to preserve the texture's aspect ratio (only done on the first frame).
3. Live Updates:
You can update the watermark’s appearance at runtime using the following methods (a usage sketch follows this list):
- ChangePosition(Vector2) – updates position (normalized)
- ChangeSize(Vector2) – updates size (normalized)
- ChangeOpacity(float) – updates transparency (0–1)
- ChangeWatermarkTexture(Texture) – changes the texture
- ChangeShader(Shader) – applies a new shader
- UpdateSettingsRealtime() – re-applies all current settings
4. Cleanup:
- When the component is disabled (OnDisable), it unsubscribes from the render event and disposes of the watermark renderer.
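As a usage sketch for the live-update methods listed in step 3, the watermark can be repositioned and faded at runtime. The field type name below is a placeholder; use the actual NSR - Watermark component class from the package:
{
    // Placeholder type name for the NSR - Watermark component.
    [SerializeField] private Watermark watermark;

    private void MoveWatermarkToCorner()
    {
        watermark.ChangePosition(new Vector2(0.9f, 0.1f)); // normalized position
        watermark.ChangeSize(new Vector2(0.2f, 0.2f));     // normalized size
        watermark.ChangeOpacity(0.5f);                      // half transparent
        watermark.UpdateSettingsRealtime();                 // re-apply all current settings
    }
}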
Custom Frame Processors
The NSR - Watermark component demonstrates how to inject custom logic into the video rendering pipeline using the UniversalVideoRecorder.onFrameRender event.
This opens the door to building your own custom frame processors - for adding filters, effects, overlays, analysis, or any real-time modification to the captured video frames.
To implement your own custom frame logic:
- Subscribe to the onFrameRender event of the UniversalVideoRecorder.
- Process the frame texture however you need (e.g., apply a shader, copy to another texture, analyze pixel data, etc.).
- Optionally render your result back to the frame or save it elsewhere.
Below is a simple example of a component that applies a custom grayscale shader to each video frame:
using UnityEngine;
using SilverTau.NSR.Recorders.Video;
public class CustomGrayscaleFrameProcessor : MonoBehaviour
{
[SerializeField] private UniversalVideoRecorder recorder;
[SerializeField] private Shader grayscaleShader;
private Material _material;
private void OnEnable()
{
if (recorder == null || grayscaleShader == null) return;
_material = new Material(grayscaleShader);
recorder.onFrameRender += OnFrameRender;
}
private void OnDisable()
{
if (recorder == null) return;
recorder.onFrameRender -= OnFrameRender;
Destroy(_material);
}
private void OnFrameRender(Texture frame)
{
RenderTexture temp = RenderTexture.GetTemporary(frame.width, frame.height);
Graphics.Blit(frame, temp, _material); // Apply grayscale shader
Graphics.Blit(temp, frame); // Overwrite original frame
RenderTexture.ReleaseTemporary(temp);
}
}
Use Cases for Custom Frame Logic
- Apply visual effects (grayscale, blur, sepia, distortion)
- Add dynamic overlays (text, UI, data feeds)
- Perform real-time computer vision or image analysis
- Stream frames to an external server
Tip
Make sure your custom frame logic is efficient. It runs every frame and can easily bottleneck performance or cause dropped frames if:
- You allocate too many temporary textures
- Your shader is complex
- You perform CPU-GPU readbacks
Use RenderTexture.GetTemporary() and Graphics.Blit() wisely, and dispose of everything when done.
Share
Your app can share any files and folders. The Share utility is a simple way to share content from one app to another, even between apps from different developers.
Functions:
ShareItem | A function that allows you to share an item.
public class CustomShare : MonoBehaviour
{
// A function that allows you to share a file, folder, image, video, etc.
public void ShareFile(string path)
{
Share.ShareItem(path);
}
}
public class CustomShare : MonoBehaviour
{
public void ShareTexture(Texture2D texture) {
if (texture == null) return;
texture.Share(TextureEncodeTo.PNG);
// or
if (ShareExtension.TryShareItem(texture, TextureEncodeTo.PNG)) {
Debug.Log("Done!");
}
}
}
public class CustomShare : MonoBehaviour
{
public void ShareBytes(byte[] data) {
if(data == null) return;
data.Share(".fileFormat");
// or
if (ShareExtension.TryShareItem(data, ".fileFormat")) {
Debug.Log("Done!");
}
}
}
Gallery
Your app can save any videos and pictures to your device's gallery. The Gallery utility is a simple and convenient way to save content to your target device.
Functions:
SaveVideoToGallery | A method that allows you to save a video file to Gallery (Photos Album).
SaveImageToGallery | A method that allows you to save an image file to Gallery (Photos Album).
public class CustomGallery : MonoBehaviour
{
// A method that allows you to save a video file to Gallery (Photos Album).
public void SaveVideoFile(string path, string androidFolderPath = "NSR_Video")
{
Gallery.SaveVideoToGallery(path, androidFolderPath);
}
// A method that allows you to save an image file to Gallery (Photos Album).
public void SaveImageFile(string path, string androidFolderPath = "NSR_Video")
{
Gallery.SaveImageToGallery(path, androidFolderPath);
}
}
public class CustomGallery : MonoBehaviour
{
public void SaveToGalleryTexture(Texture2D texture, string androidFolderPath = "NSR_Video")
{
if(texture == null) return;
texture.SaveImageToGallery(TextureEncodeTo.PNG, androidFolderPath);
// or
if (GalleryExtension.TrySaveImageToGallery(texture, TextureEncodeTo.PNG, androidFolderPath))
{
Debug.Log("Done!");
}
}
}
public class CustomGallery : MonoBehaviour
{
public void SaveToGalleryBytes(byte[] data, string androidFolderPath = "NSR_Video")
{
if(data == null) return;
data.SaveVideoToGallery(".mp4", androidFolderPath);
// or
if (GalleryExtension.TrySaveVideoToGallery(data, ".mp4", androidFolderPath))
{
Debug.Log("Done!");
}
}
}
File Manager
A file manager is a prefab that provides a user interface for managing files and folders. The most common operations performed on files or groups of files include opening (e.g., viewing, playing), deleting, and searching for files. Folders and files can be displayed in a hierarchical tree based on their directory structure.
Properties:
storageType | The target storage type determines the location from which files will be parsed.
customStorageType | A custom path used for file analysis and storage. This parameter takes effect if the storage type is set to Custom.
fileExplorer | File Explorer is a script that performs file recognition and search functions for the selected storage type.