Jetson Multimedia API incorrect framerate error

Hello NVIDIA forum,
I’m experiencing an issue while working with the Jetson Multimedia API. Every time I try to use it, an error is thrown, even though the application itself runs without any visible problems.
First, here is the configuration I’m currently using:

  • Device: Jetson AGX Orin 64GB Developer Kit

  • OS: JetPack 6.0 Rev 2
  • Applications Installed:
    • All applications available via SDK Manager, except the Jetson Service Platform
    • OpenCV 4.9.0 built from source with CUDA and opencv_contrib (we removed the preinstalled version)
    • Jetson Multimedia API built on the same board (available under /usr/local)
    • Qt 6.5.3 built from source
    • Additional libraries that are built but not system-installed

We have an application that receives frames from a camera in OpenCV format. These frames must be saved in a compressed format (lossless if required) to ensure data reproducibility.

Here are the steps followed in our application:

  1. Encoder and planes are initialised
  2. A thread is spawned to dequeue from the capture plane
  3. The encoder is warmed up by queueing 30 frames
  4. Initialisation completes
  5. The main thread retrieves a frame
  6. The frame is converted to NV24 format and fed to the encoder
  7. The NV24 frame is placed in a V4L2 buffer and queued in the output plane
  8. If available, the next frame is queued in the capture plane

The encoder is initialised with the following parameters:

#define BITS_PER_PIXEL 24
#define ENCODER_QUEUE_SIZE 15
int width = 1280;
int height = 800;
int framerate = 30;
int intervalIDframe = framerate;
int bitrate = (width * height * framerate * BITS_PER_PIXEL) / 1000;
std::string label = "CamTest";
nvEnc->setCapturePlaneFormat(V4L2_PIX_FMT_H265, width, height, 2 * 1024 * 1024);
nvEnc->setOutputPlaneFormat(V4L2_PIX_FMT_NV24M, width, height);
nvEnc->setInsertSpsPpsAtIdrEnabled(true);
nvEnc->setChromaFactorIDC(3);
nvEnc->setRateControlMode(V4L2_MPEG_VIDEO_BITRATE_MODE_VBR);
nvEnc->setHWPresetType(V4L2_ENC_HW_PRESET_ULTRAFAST);
nvEnc->setMaxPerfMode(1);
nvEnc->setBitrate(bitrate);
nvEnc->setPeakBitrate(bitrate + 500000);
nvEnc->setProfile(V4L2_MPEG_VIDEO_H265_PROFILE_MAIN);
nvEnc->setLossless(false);
nvEnc->setIDRInterval(intervalIDframe);
nvEnc->setIFrameInterval(intervalIDframe);
nvEnc->setFrameRate(framerate, 1);
nvEnc->output_plane.setupPlane(V4L2_MEMORY_MMAP, ENCODER_QUEUE_SIZE, true, false);
nvEnc->capture_plane.setupPlane(V4L2_MEMORY_MMAP, ENCODER_QUEUE_SIZE, true, false);
nvEnc->subscribeEvent(V4L2_EVENT_EOS, 0, 0);

All the initialisation steps completed successfully (return code = 0). The application correctly saves all H.265 frames, which can be opened with VLC without any issues.

However, each time I enqueue a frame, I get the following warning:

NVENC_H265: Unsupported frameRate (Supported: 1.0 - 60.0), setting to deault value 30.00

The source code will be available in the coming days, as I am currently cleaning it up to isolate only the NVIDIA-related components.

But before that, what can I check to ensure the encoder receives the framerate correctly? And what could be causing the warning message?

Thanks

*** Please note that this reply is generated by LLM automatically ***

Based on the provided context, I’ll attempt to provide a comprehensive answer to the question.

The issue you’re experiencing with the Jetson Multimedia API on the Jetson AGX Orin 64GB Developer Kit, running JetPack 6.0 Rev 2, seems to be related to the encoder receiving frames at the correct framerate. Despite the application running without visible problems and saving H.265 frames correctly, you’re getting a warning message each time you enqueue a frame.

To troubleshoot this issue, here are some potential areas to investigate:

  1. Encoder initialization: Double-check the encoder initialization parameters to ensure they match the expected framerate. Verify that the return code of 0 indicates successful initialization.
  2. Frame conversion and queuing: Confirm that the frame conversion to NV24 format and queuing in the output plane are done correctly. Ensure that the V4L2 buffer is properly allocated and configured.
  3. Capture plane and dequeue: Verify that the capture plane is properly configured and that the dequeuing process is not blocking or causing issues.
  4. Framerate configuration: Check the framerate configuration in the application and ensure it matches the expected framerate. You can use tools like jetson_clocks to max out the SOC clocks and speed up execution, but be aware that this is a workaround and not a guaranteed solution.
  5. Multimedia API version: Ensure the Jetson Multimedia API version you build against matches your JetPack release; mixing headers or libraries from different releases can produce unexpected warnings.
  6. GStreamer framework: The Jetson Multimedia API is a low-level V4L2-based interface and does not sit on top of GStreamer, but if any other part of your pipeline uses GStreamer, check for known issues in the version you’re running.
  7. System configuration: Verify that the system configuration, including the OS and installed applications, is not causing any conflicts or issues with the Jetson Multimedia API.

Comparing your setup to the context, I notice that:

  • You’re using JetPack 6.0 Rev 2, which is a relatively recent version.
  • You’ve built OpenCV from source with CUDA and opencv_contrib, which might be relevant if there are any issues with the preinstalled version.
  • You’re using Qt 6.5.3, which is a relatively recent version.

To further assist you, it would be helpful to have more information about the warning message you’re receiving and the source code of your application. Once you’ve cleaned up the code and isolated the NVIDIA-related components, sharing it might help identify the root cause of the issue.

In summary, to ensure the encoder receives the framerate correctly, you should verify the encoder initialization parameters, frame conversion and queuing, capture plane and dequeue, framerate configuration, Multimedia API version, GStreamer framework, and system configuration. If you’re still experiencing issues, sharing more information about the warning message and your application’s source code might help identify the root cause.


Hi,
Please try 01_video_encode sample and see if you observe the issue. The samples are installed to

/usr/src/jetson_multimedia_api

I’m not sure why your samples are under /usr/local/. That path does not look correct.


Hi DaneLLL,

Thanks for the reply. Regarding the API path, the one you mentioned is correct; I wrote mine from memory.
To use the Jetson Multimedia API, I always copy /usr/src/jetson_multimedia_api into a folder under /home/user/src, so I can build the libs without sudo and use them locally.

I tried the video encode sample; this is the command I launched and its output:

./video_encode /home/user/my_video-1.avi 1280 800 H265 my_video-11.265
Creating Encoder in blocking mode
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 8
===== NvVideo: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 8
892744264
842091865
NvVideo: H265 : Profile : 1
NVMEDIA: Need to set EMC bandwidth : 846000
NvVideo: bBlitMode is set to TRUE
Could not read complete frame from input file
File read complete.
Got 0 size buffer in capture
App run was successful

But when I try to open the file with a viewer, the output looks broken (screenshot attached in the original post).

When I use my own application, the output frames are correct and viewable in both VLC and the Videos app on Ubuntu.

Hi,
The input file has to be raw YUV data; my_video-1.avi seems to be an AVI container.

Hi,

My mistake — I tested using an online YUV video, and the example works as expected without any errors.
However, when I run my own example, the application initialises the encoder correctly, with all return values equal to zero.
The error message appears when I attempt to encode each frame.
I’ve stripped out all unrelated code from my project and isolated the encoder logic. I’m using a standard webcam for input. (I’m aware there are some memory leaks; this version is purely for debugging NVENC-related issues.)

#include <atomic>
#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <fstream>
#include <iostream>
#include <string>

#include <pthread.h>

#include <opencv2/opencv.hpp>

#include "NvBufSurface.h"
#include "NvVideoDecoder.h"
#include "NvVideoEncoder.h"



#define MAX_PLANES 3
#define BITS_PER_PIXEL 24  // For the YUV444 or NV24

#define CHUNK_SIZE 4000000

#define ENCODER_QUEUE_SIZE 15
#define RINGBUFFER_SIZE (ENCODER_QUEUE_SIZE * 2)

static thread_local bool hasEncodedFrame = false;

constexpr size_t MAX_ENCODER_THREAD_QUEUE_SIZE = 32;
constexpr size_t MAX_DECODER_THREAD_QUEUE_SIZE = 32;


struct encoded_frame_t {
	const void* data;
	size_t size;
	uint32_t id;
	uint32_t flags;
};

struct encoder_t {
	uint16_t encWidth        = 848;
	uint16_t encHeight       = 480;
	uint16_t framerate       = 30;
	uint32_t bitrate         = (encWidth * encHeight * framerate * BITS_PER_PIXEL) / 1000;
	uint16_t intervalIDFrame = 30;
	bool lossless            = false;

	NvVideoEncoder* nvEnc   = 0;
	int framesInOutputQueue = 0;

	pthread_t pollingThread;
	std::atomic<bool> runThread = true;
	// captured frame, this is the output of the encoder
	bool hasCompressedFrameToEnqueue;

	// is the struct with the v4l2 buffer with the compressed frame
	struct __attribute__((aligned(64))) queue_entry {
		struct v4l2_buffer v4l2_buf;
		struct v4l2_plane planes[MAX_PLANES];
		NvBuffer* buffer = NULL;
	};

	queue_entry compressed_queue[MAX_ENCODER_THREAD_QUEUE_SIZE];

	std::atomic<size_t> writeHead;
	std::atomic<size_t> readHead;

	uint32_t appendSequenceID = 0;
	uint32_t readSequenceID   = 0;
};

void rgbToNv24(uint8_t* srcImg, uint8_t* srcY, uint8_t* srcUV, int width, int height) {
	// Note: OpenCV Mats are BGR by default, so the r/g/b read below are
	// effectively b/g/r; swap the channel order if colour accuracy matters.
	const unsigned char* src = (const unsigned char*)srcImg;
	unsigned char* dstY      = (unsigned char*)srcY;
	unsigned char* dstUV     = (unsigned char*)srcUV;

	// fill the channels
	for (int y = 0; y < height; ++y) {
		for (int x = 0; x < width; ++x) {
			unsigned char r = *src++;
			unsigned char g = *src++;
			unsigned char b = *src++;

			*dstY = (unsigned char)(((66 * r + 129 * g + 25 * b) >> 8) + 16);
			dstY++;
			*dstUV =
				(unsigned char)(((-38 * r + -74 * g + 112 * b) >> 8) + 128);
			dstUV++;
			*dstUV =
				(unsigned char)(((112 * r + -94 * g + -18 * b) >> 8) + 128);
			dstUV++;
		}
	}
}

void* encoderPollingThread(void* ctx) {
	encoder_t* encoderTest = (encoder_t*)ctx;

	NvVideoEncoder* nvEnc = encoderTest->nvEnc;

	while (encoderTest->runThread.load(std::memory_order_acquire)) {
		nvEnc->SetPollInterrupt();
		// modulo wraps the index, so the compressed_queue slot restarts at zero
		// every MAX_ENCODER_THREAD_QUEUE_SIZE (32) entries
		size_t idx = encoderTest->writeHead % MAX_ENCODER_THREAD_QUEUE_SIZE;

		encoder_t::queue_entry* entry = encoderTest->compressed_queue + idx;  // pointer to the current slot of compressed_queue

		memset(&entry->v4l2_buf, 0, sizeof(entry->v4l2_buf));
		memset(entry->planes, 0, sizeof(entry->planes));
		entry->v4l2_buf.m.planes = entry->planes;
		entry->v4l2_buf.length   = 1;

		int ret = nvEnc->capture_plane.dqBuffer(
			entry->v4l2_buf, &entry->buffer, 0, -1);


		if (ret < 0) continue;

		encoderTest->writeHead++;
	}

	return 0;
}


int main() {
	printf("hello from %s!\n", "NVENC Test");

	int width          = 1280;
	int height         = 720;
	int framerate      = 10;
	std::string label  = "CamTest";
	bool pipeStarted   = false;


	std::cout << "Application Configuration:" << std::endl;
	std::cout << "  Resolution : " << width << "x" << height << std::endl;
	std::cout << "  Framerate  : " << framerate << " FPS" << std::endl;
	std::cout << "  Label      : " << label << std::endl;


	cv::VideoCapture cap(0, cv::CAP_V4L2);
	if (!cap.isOpened()) {
		std::cerr << "Error: unable to open camera" << std::endl;
		return -1;
	}

	cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
	cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);
	cap.set(cv::CAP_PROP_FPS, 10);
	double camWidth  = cap.get(cv::CAP_PROP_FRAME_WIDTH);
	double camHeight = cap.get(cv::CAP_PROP_FRAME_HEIGHT);
	double camFPS    = cap.get(cv::CAP_PROP_FPS);

	std::cout << "Started webcam with config " << camWidth << "x" << camHeight << " @ " << camFPS << " FPS" << std::endl;

	std::cout << "DONE!" << std::endl;

	cv::Mat colorImage = cv::Mat::zeros(cv::Size(width, height), CV_8UC3);
	cv::Mat depthImage = cv::Mat::zeros(cv::Size(width, height), CV_16UC1);


	// ***************************** Nvidia *****************************
	printf(" %s: Creating Encoder\n", label.c_str());
	int intervalIDframe         = framerate;
	std::atomic<bool> runThread = true;
	int NvPipelineWarmup        = intervalIDframe;
	int ret                     = 0;
	int appendSequenceID        = 0;

	encoder_t* encoderTest       = new encoder_t;
	encoderTest->encWidth        = width;
	encoderTest->encHeight       = height;
	encoderTest->framerate       = framerate;
	encoderTest->intervalIDFrame = intervalIDframe;
	encoderTest->bitrate         = (encoderTest->encWidth * encoderTest->encHeight * encoderTest->framerate * BITS_PER_PIXEL) / 1000;

	NvVideoEncoder* nvEnc = nullptr;
	std::ofstream outFile("output.h265", std::ios::binary);
	pthread_t pollingThread;

	uint8_t* srcImg = (uint8_t*)calloc(encoderTest->encWidth, encoderTest->encHeight * 3);
	uint8_t* srcY   = (uint8_t*)calloc(encoderTest->encWidth, encoderTest->encHeight);
	uint8_t* srcUV  = (uint8_t*)calloc(encoderTest->encWidth, encoderTest->encHeight * 2);

	std::string encName = "enc0";
	nvEnc               = NvVideoEncoder::createVideoEncoder(encName.c_str());

	nvEnc->enableProfiling();

	ret = nvEnc->setCapturePlaneFormat(V4L2_PIX_FMT_H265, encoderTest->encWidth, encoderTest->encHeight, 2 * 1024 * 1024);
	if (ret != 0) printf("Error setting capture plane\n");
	ret = nvEnc->setOutputPlaneFormat(V4L2_PIX_FMT_NV24M, encoderTest->encWidth, encoderTest->encHeight);
	if (ret != 0) printf("Error setting output plane\n");
	ret = nvEnc->setChromaFactorIDC(3);
	if (ret != 0) printf("Error setting the H.265 encoder Chroma Format IDC\n");
	ret = nvEnc->setFrameRate(30, 1);
	if (ret != 0) printf("Error setting framerate (passed as a fraction: numerator / denominator)\n");
	ret = nvEnc->setProfile(V4L2_MPEG_VIDEO_H265_PROFILE_MAIN);
	if (ret != 0) printf("Error setting profile\n");
	ret = nvEnc->setInsertSpsPpsAtIdrEnabled(true);
	if (ret != 0) printf("Error setting SPS PPS at every IDR frame flag\n");
	ret = nvEnc->setRateControlMode(V4L2_MPEG_VIDEO_BITRATE_MODE_VBR);
	if (ret != 0) printf("Error setting Rate Control Mode to V4L2_MPEG_VIDEO_BITRATE_MODE_VBR\n");
	ret = nvEnc->setHWPresetType(V4L2_ENC_HW_PRESET_ULTRAFAST);
	if (ret != 0) printf("Error setting HW Preset Type to V4L2_ENC_HW_PRESET_ULTRAFAST\n");
	ret = nvEnc->setMaxPerfMode(1);
	if (ret != 0) printf("Error setting MaxPerfMode\n");
	ret = nvEnc->setBitrate(encoderTest->bitrate);
	if (ret != 0) printf("Error setting bitrate\n");
	ret = nvEnc->setPeakBitrate(encoderTest->bitrate + 500000);
	if (ret != 0) printf("Error setting peak bitrate\n");
	ret = nvEnc->setLossless(false);
	if (ret != 0) printf("Error setting lossless\n");
	ret = nvEnc->setIDRInterval(encoderTest->intervalIDFrame);
	if (ret != 0) printf("Error setting IDR interval\n");
	ret = nvEnc->setIFrameInterval(encoderTest->intervalIDFrame);
	if (ret != 0) printf("Error setting I Frame interval\n");

	ret = nvEnc->output_plane.setupPlane(V4L2_MEMORY_MMAP, ENCODER_QUEUE_SIZE, true, false);
	if (ret != 0) printf("Error setting setup output plane\n");
	ret = nvEnc->capture_plane.setupPlane(V4L2_MEMORY_MMAP, ENCODER_QUEUE_SIZE, true, false);
	if (ret != 0) printf("Error setting setup capture plane\n");
	ret = nvEnc->subscribeEvent(V4L2_EVENT_EOS, 0, 0);
	if (ret != 0) printf("Error setting subscribe to end of stream events\n");
	printf(" %s: Starting Encoder\n", label.c_str());

	std::cout << "Starting NVENC Stream" << std::endl;
	ret = nvEnc->output_plane.setStreamStatus(true);
	if (ret != 0) printf("Error starting the output stream\n");
	ret = nvEnc->capture_plane.setStreamStatus(true);
	if (ret != 0) printf("Error starting the capture stream\n");

	if (nvEnc->output_plane.getNumBuffers() == 0 || nvEnc->capture_plane.getNumBuffers() == 0) {
		printf("Encoder not initialized correctly: check formats and order.\n");
		return -1;
	}

	assert(nvEnc->capture_plane.getNumBuffers() <= ENCODER_QUEUE_SIZE);

	for (uint32_t i = 0; ret == 0 && i < nvEnc->capture_plane.getNumBuffers(); i++) {
		struct v4l2_buffer v4l2_buf;
		struct v4l2_plane planes[MAX_PLANES];
		memset(&v4l2_buf, 0, sizeof(v4l2_buf));
		memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));

		v4l2_buf.index    = i;
		v4l2_buf.m.planes = planes;
		v4l2_buf.type     = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
		v4l2_buf.memory   = V4L2_MEMORY_MMAP;
		v4l2_buf.length   = nvEnc->capture_plane.getNumPlanes();

		ret = nvEnc->capture_plane.qBuffer(v4l2_buf, NULL);
		if (ret != 0) printf("Error enqueueing the capture buffer\n");
	}
	if (ret == 0) {
		encoderTest->nvEnc = nvEnc;
		printf("Creating %s polling thread\n", label.c_str());
		encoderTest->runThread = true;
		pthread_create(&encoderTest->pollingThread, NULL, encoderPollingThread, encoderTest);
	}
	else {
		delete nvEnc;
		nvEnc = 0;
		return 1;
	}


	printf(" %s: Starting Loop\n", label.c_str());

	for (int i = 0; i < 300; i++) {
		cap >> colorImage;
		if (colorImage.empty()) continue;

		srcImg = (uint8_t*)colorImage.data;
		rgbToNv24(srcImg, srcY, srcUV, width, height);
		struct v4l2_buffer v4l2_buf;
		struct v4l2_plane planes[MAX_PLANES];
		// struct v4l2_meta_format metadata_fmt;

		memset(&v4l2_buf, 0, sizeof(v4l2_buf));
		memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));
		v4l2_buf.m.planes = planes;
		v4l2_buf.flags |= V4L2_BUF_FLAG_TIMESTAMP_COPY;
		v4l2_buf.timestamp.tv_sec  = 0;
		v4l2_buf.timestamp.tv_usec = encoderTest->appendSequenceID++;
		NvBuffer* buffer           = 0;
		bool good                  = true;
		// if we still have to allocate new frames to send to the queue
		// initialize them.
		bool newFrame = encoderTest->framesInOutputQueue < ENCODER_QUEUE_SIZE;
		if (newFrame) {
			buffer =
				encoderTest->nvEnc->output_plane.getNthBuffer(encoderTest->framesInOutputQueue);
			v4l2_buf.index = encoderTest->framesInOutputQueue;
			encoderTest->framesInOutputQueue++;
		}
		else {
			ret = encoderTest->nvEnc->output_plane.dqBuffer(v4l2_buf, &buffer, NULL, 0);
			if (ret != 0) printf("Error dequeuing output plane buffer\n");
		}

		if (buffer == NULL) continue;  // dequeue failed or returned no buffer; skip this frame

		size_t dst_y_stride  = buffer->planes[0].fmt.stride;
		size_t dst_uv_stride = buffer->planes[1].fmt.stride;

		buffer->planes[0].bytesused = encoderTest->encHeight * dst_y_stride;
		buffer->planes[1].bytesused = encoderTest->encHeight * dst_uv_stride;
		planes[0].bytesused         = buffer->planes[0].bytesused;
		planes[1].bytesused         = buffer->planes[1].bytesused;

		const uint8_t* src_y  = (const uint8_t*)srcY;
		const uint8_t* src_uv = (const uint8_t*)srcUV;

		uint8_t* dst_y  = (uint8_t*)buffer->planes[0].data;
		uint8_t* dst_uv = (uint8_t*)buffer->planes[1].data;

		for (size_t row = 0; row < encoderTest->encHeight; ++row) {
			memcpy(dst_y, src_y, encoderTest->encWidth);
			memcpy(dst_uv, src_uv, 2 * encoderTest->encWidth);

			src_y += encoderTest->encWidth;
			src_uv += 2 * encoderTest->encWidth;
			dst_y += dst_y_stride;
			dst_uv += dst_uv_stride;
		}
		// ensure the memory consistency with the hardware accelerator
		for (uint32_t j = 0; good && j < buffer->n_planes; j++) {
			NvBufSurface* nvbuf_surf = 0;
			ret                      = NvBufSurfaceFromFd(buffer->planes[j].fd, (void**)(&nvbuf_surf));
			if (ret != 0) printf("Error creating NvBufSurface for plane %d\n", j);
			ret = NvBufSurfaceSyncForDevice(nvbuf_surf, 0, j);
			if (ret != 0) printf("Error syncing NvBufSurface for plane %d\n", j);
		}
		// write the frame to the encoder
		ret = encoderTest->nvEnc->output_plane.qBuffer(v4l2_buf, NULL);


		encoded_frame_t* frame = new encoded_frame_t;
		frame->data            = 0;
		frame->size            = 0;
		frame->id              = 0;
		if (encoderTest->hasCompressedFrameToEnqueue) {
			NvVideoEncoder* nvEnc         = encoderTest->nvEnc;
			size_t idx                    = encoderTest->readHead % MAX_ENCODER_THREAD_QUEUE_SIZE;
			encoder_t::queue_entry* entry = encoderTest->compressed_queue + idx;
			int ret                       = nvEnc->capture_plane.qBuffer(entry->v4l2_buf, 0);
			if (ret >= 0) {
				encoderTest->readHead++;
				encoderTest->hasCompressedFrameToEnqueue = false;
			}
			else {
				return -1;  // re-queueing the capture buffer failed
			}
		}
		if (encoderTest->readHead == encoderTest->writeHead) continue;

		size_t idx                    = encoderTest->readHead % MAX_ENCODER_THREAD_QUEUE_SIZE;
		encoder_t::queue_entry* entry = encoderTest->compressed_queue + idx;

		frame->data                              = entry->buffer->planes[0].data;
		frame->size                              = entry->buffer->planes[0].bytesused;
		frame->flags                             = entry->v4l2_buf.flags;
		encoderTest->hasCompressedFrameToEnqueue = true;

		frame->id = entry->v4l2_buf.timestamp.tv_usec;
		assert(frame->id == encoderTest->readSequenceID);
		encoderTest->readSequenceID++;
		printf("ID: %d\n", frame->id);
		// END CYCLE
	}
	encoderTest->nvEnc->abort();

	encoderTest->nvEnc->output_plane.setStreamStatus(false);
	encoderTest->nvEnc->capture_plane.setStreamStatus(false);


	delete encoderTest->nvEnc;  // calling the destructor directly would not free the object

	return 0;
}

I also tested with other cameras and other resolutions/framerates, but the result is always the same.

Hi,
Would suggest you refer to the samples to debug your application:

/usr/src/jetson_multimedia_api/samples/01_video_encode/
/usr/src/jetson_multimedia_api/samples/unittest_samples/decoder_unit_sample/

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.