
Destruction of Native Objects with Static Storage Duration


Getting the easy questions out of the way first:

A good resource for CLR customization is Steven Pratschner's book "Customizing the Microsoft .NET Framework Common Language Runtime". Beware that it is outdated; the hosting interfaces changed in .NET 4.0. MSDN doesn't say much about the topic, but the hosting interfaces themselves are well documented.

You can make debugging simpler by changing a debugger setting: change the debugger Type from "Auto" to "Managed" or "Mixed".

Do note that your 3000 msec sleep is just on the edge; you should test with 5000 msec. If the C++ class appears in code that's compiled with /clr in effect, even with #pragma unmanaged in effect, then you'll need to override the finalizer thread timeout. Tested on the .NET 3.5 SP1 CLR, the following code worked well to give the destructor sufficient time to run to completion:

    ICLRControl* pControl;
    if (FAILED(hr = pRuntimeHost->GetCLRControl(&pControl))) {
        goto out;
    }
    ICLRPolicyManager* pPolicy;
    if (FAILED(hr = pControl->GetCLRManager(__uuidof(ICLRPolicyManager), (void**)&pPolicy))) {
        goto out;
    }
    // Allow finalizers, and destructors that run from them, up to 60 seconds
    hr = pPolicy->SetTimeout(OPR_FinalizerRun, 60000);
    pPolicy->Release();
    pControl->Release();

I picked a minute as a reasonable time; tweak as necessary. Note that the MSDN documentation has a bug: it doesn't list OPR_FinalizerRun as a permitted value, but it does in fact work properly. Setting the finalizer thread timeout also ensures that a managed finalizer won't time out when it indirectly destructs an unmanaged C++ class, a very common scenario.
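
For reference, this is the shape of that common scenario; a minimal C++/CLI sketch with hypothetical names (NativeResource, Wrapper). The !Wrapper() finalizer runs on the CLR's finalizer thread, so the native destructor it triggers falls under the timeout set above:

    // Hypothetical sketch: a managed finalizer that indirectly destructs
    // an unmanaged C++ class.
    class NativeResource {
    public:
        ~NativeResource() { /* potentially slow native cleanup */ }
    };

    public ref class Wrapper {
        NativeResource* p;
    public:
        Wrapper() : p(new NativeResource) {}
        ~Wrapper() { this->!Wrapper(); }        // destructor, runs on Dispose()
        !Wrapper() { delete p; p = nullptr; }   // finalizer, runs on the finalizer thread
    };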

One thing you'll see when you run the policy code above with CLRHost compiled with /clr is that the call to GetCLRManager() fails with an HOST_E_INVALIDOPERATION return code. The default CLR host that got loaded to execute your CLRHost.exe won't let you override the policy, so you are pretty much stuck with having a dedicated EXE to host the CLR.
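
For illustration, here is a minimal sketch of such a dedicated unmanaged host, assuming the pre-4.0 hosting API from mscoree.h; CLRClient.dll, Demo.Entry and Run are hypothetical placeholder names, and the policy snippet above is assumed to run before Start():

    // Hedged sketch of a dedicated, unmanaged host EXE (pre-4.0 hosting API).
    #include <windows.h>
    #include <mscoree.h>
    #pragma comment(lib, "mscoree.lib")

    int main() {
        ICLRRuntimeHost* pHost = NULL;
        HRESULT hr = CorBindToRuntimeEx(L"v2.0.50727", L"wks", 0,
                                        CLSID_CLRRuntimeHost, IID_ICLRRuntimeHost,
                                        (PVOID*)&pHost);
        if (FAILED(hr)) return 1;

        // Override the finalizer timeout before starting the runtime;
        // the ICLRControl / ICLRPolicyManager snippet shown above goes here.

        hr = pHost->Start();
        if (SUCCEEDED(hr)) {
            DWORD ret;
            // The target method must have the shape: static int Run(String arg)
            hr = pHost->ExecuteInDefaultAppDomain(L"CLRClient.dll",
                                                  L"Demo.Entry", L"Run",
                                                  L"", &ret);
            pHost->Stop();   // shut the runtime down
        }
        pHost->Release();
        return FAILED(hr) ? 1 : 0;
    }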

When I tested this by having CLRHost load a mixed-mode assembly, the call stack looked like this when setting a breakpoint on the destructor:

    CLRClient.dll!Global::~Global()  Line 24  C++
    [Managed to Native Transition]
    CLRClient.dll!<Module>.?A0x789967ab.??__Fg@@YMXXZ() + 0x1b bytes
    CLRClient.dll!_exit_callback()  Line 449  C++
    CLRClient.dll!<CrtImplementationDetails>::LanguageSupport::_UninitializeDefaultDomain(void* cookie = <undefined value>)  Line 753  C++
    CLRClient.dll!<CrtImplementationDetails>::LanguageSupport::UninitializeDefaultDomain()  Line 775 + 0x8 bytes  C++
    CLRClient.dll!<CrtImplementationDetails>::LanguageSupport::DomainUnload(System::Object^ source = 0x027e1274, System::EventArgs^ arguments = <undefined value>)  Line 808  C++
    msvcm90d.dll!<CrtImplementationDetails>.ModuleUninitializer.SingletonDomainUnload(object source = {System.AppDomain}, System.EventArgs arguments = null) + 0xa1 bytes
    // Rest omitted

Do note that this is unlike the observations in your question: the code is triggered by the managed version of the CRT (msvcm90.dll) and runs on a dedicated thread, started by the CLR to unload an appdomain. You can see the source code for this in the vc/crt/src/mstartup.cpp source file.
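
For context, here is a sketch of the kind of code that produces this first scenario (hypothetical, but matching the Global class in the trace above): a native class with static storage duration in a translation unit compiled with /clr.

    // Hypothetical sketch: this file is part of the mixed-mode DLL and is
    // compiled with /clr in effect.
    #include <windows.h>

    #pragma unmanaged                 // per the note above, this does not opt out
    class Global {
    public:
        ~Global() { ::Sleep(5000); }  // exceeds the default 2-second timeout
    };
    #pragma managed

    static Global g;   // destructed from DomainUnload, on the CLR's unload thread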


The second scenario occurs when the C++ class is part of a source code file that is compiled without /clr in effect and gets linked into the mixed-mode assembly. The compiler then uses the normal atexit() handler to call the destructor, just like it does in a purely unmanaged executable. In this case the destructor runs when the DLL gets unloaded by Windows at program termination, as the managed version of the CRT shuts down.
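
As a plain native illustration of that mechanism (a sketch of what the compiler arranges, not its actual generated code):

    // The compiler registers a destructor thunk for a static object roughly
    // like this; atexit() handlers run in LIFO order when exit() is reached.
    #include <cstdio>
    #include <cstdlib>

    static void destroy_g() {        // stand-in for the ~Global() thunk
        std::puts("~Global() equivalent runs here");
    }

    int main() {
        std::atexit(destroy_g);      // done during static initialization
        return 0;                    // exit() -> doexit() -> destroy_g()
    }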

Notable is that this happens after the CLR has shut down and that the destructor runs on the program's startup thread. Accordingly, the CLR timeouts are out of the picture and the destructor can take as long as it wants. The essence of the stack trace is now:

    CLRClient.dll!Global::~Global()  Line 12  C++
    CLRClient.dll!`dynamic atexit destructor for 'g''() + 0xd bytes  C++
    // Confusingly named functions elided
    // ...
    CLRHost.exe!__crtExitProcess(int status=0x00000000)  Line 732  C
    CLRHost.exe!doexit(int code=0x00000000, int quick=0x00000000, int retcaller=0x00000000)  Line 644 + 0x9 bytes  C
    CLRHost.exe!exit(int code=0x00000000)  Line 412 + 0xd bytes  C
    // etc..

This is, however, a corner case that only occurs when the startup EXE is unmanaged. As soon as the EXE is managed, destructors run on AppDomain.Unload, even when they appear in code that was compiled without /clr, so you still have the timeout problem (see the sketch below). Having an unmanaged EXE is not that unusual; it happens, for example, when unmanaged code loads [ComVisible] managed code. But that doesn't sound like your scenario, you are stuck with CLRHost.
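
To make that point concrete, a C++/CLI sketch (hypothetical): with a managed EXE, unloading the appdomain that used the mixed-mode assembly runs its destructors under the CLR timeout, even for classes compiled without /clr.

    // Hypothetical sketch: a managed EXE that unloads a worker appdomain.
    using namespace System;

    int main(array<String^>^ args) {
        AppDomain^ domain = AppDomain::CreateDomain("worker");
        // ... load and use the mixed-mode assembly inside 'domain' ...
        AppDomain::Unload(domain);   // destructors run here, timeout applies
        return 0;
    }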


To answer the "Where is this documented / how can I educate myself more on the topic?" question: you can understand how this works (or at least how it used to work in .NET Framework 2.0) by downloading the Shared Source Common Language Infrastructure (aka SSCLI) from http://www.microsoft.com/en-us/download/details.aspx?id=4917 and checking out the source.

Once you've extracted the files, you will find this in gcEE.cpp ("garbage collection execution engine"):

    #define FINALIZER_TOTAL_WAIT 2000

which defines the famous default value of 2 seconds. In the same file you will also see this:

    BOOL GCHeap::FinalizerThreadWatchDogHelper()
    {
        // code removed for brevity ...
        DWORD totalWaitTimeout;
        totalWaitTimeout = GetEEPolicy()->GetTimeout(OPR_FinalizerRun);
        if (totalWaitTimeout == (DWORD)-1)
        {
            totalWaitTimeout = FINALIZER_TOTAL_WAIT;
        }
        // rest of the function omitted ...
    }

That tells you that the execution engine will obey the OPR_FinalizerRun policy, if it is set, which corresponds to a value in the EClrOperation enumeration. GetEEPolicy is defined in eePolicy.h and eePolicy.cpp.