Blackboard Perspectives and thoughts from Zhenhua Yao

Copy app.config in the project reference

This is a note on how to ensure that the app.config file is copied along with the assemblies of a project reference in a managed project.

Problem: suppose we have two C# projects A and B, where project A is referenced by B.

  • Project A
    • app.config
    • bin
      • A.dll
      • A.dll.config
  • Project B
    • app.config
    • bin
      • B.dll
      • B.dll.config
      • A.dll

When project B is built, project A's assembly is copied to the output path, but its config file (A.dll.config) does not follow.

Solution: build the project with MSBuild using the options “/fl /v:d”, read MSBuild.log, and analyze how the output of project A is copied to the output path of project B. We can see that the following property and task parameters in %windir%\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets control what is copied and how:

    <!--
    These are the extensions that reference resolution will consider when looking for files related
    to resolved references.  Add new extensions here if you want to add new file types to consider.
    -->
    <AllowedReferenceRelatedFileExtensions Condition=" '$(AllowedReferenceRelatedFileExtensions)' == '' ">
        .pdb;
        .xml
    </AllowedReferenceRelatedFileExtensions>

...

    <ResolveAssemblyReference
        Assemblies="@(Reference)"
        AssemblyFiles="@(_ResolvedProjectReferencePaths);@(_ExplicitReference)"
        TargetFrameworkDirectories="@(_ReferenceInstalledAssemblyDirectory)"
        InstalledAssemblyTables="@(InstalledAssemblyTables);@(RedistList)"
        IgnoreDefaultInstalledAssemblyTables="$(IgnoreDefaultInstalledAssemblyTables)"
        IgnoreDefaultInstalledAssemblySubsetTables="$(IgnoreInstalledAssemblySubsetTables)"
        CandidateAssemblyFiles="@(Content);@(None)"
        SearchPaths="$(AssemblySearchPaths)"
        AllowedAssemblyExtensions="$(AllowedReferenceAssemblyFileExtensions)"
        AllowedRelatedFileExtensions="$(AllowedReferenceRelatedFileExtensions)"
...
        <Output TaskParameter="CopyLocalFiles" ItemName="ReferenceCopyLocalPaths"/>
...
    </ResolveAssemblyReference>

Therefore the solution is to put the following lines in project B's csproj file, at the beginning of a <PropertyGroup>:

    ...
    <PropertyGroup>
      <AllowedReferenceRelatedFileExtensions>
          .pdb;
          .xml;
          .exe.config;
          .dll.config
      </AllowedReferenceRelatedFileExtensions>
      ...
    </PropertyGroup>
    ...

With this change, the .config file follows the assembly in both the build output and unit test runs.
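
As a side note, this matters whenever code in the referenced project reads settings from its own .config file rather than from the host's app.config. Below is a minimal sketch of one way to do that; the helper class is hypothetical, assumes an appSettings entry in A.dll.config, and needs a reference to System.Configuration.dll:

    using System.Configuration;
    using System.Reflection;

    static class ReferencedAssemblyConfig
    {
        // Reads an appSettings value from A.dll.config sitting next to A.dll,
        // regardless of which host process loaded the assembly.
        public static string GetSetting(string key)
        {
            string assemblyPath = Assembly.GetExecutingAssembly().Location;
            var map = new ExeConfigurationFileMap { ExeConfigFilename = assemblyPath + ".config" };
            var config = ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);
            var element = config.AppSettings.Settings[key];
            return element == null ? null : element.Value;
        }
    }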

Autologon and run GUI programs remotely in desktop session

This is a note on how to automate the remote execution of a GUI program on a test machine.

Note: you should not use this against a non-test machine, because the autologon may pose a security risk.

Scenario: there is a test machine “ZY” with Windows Server 2008 R2 installed, and I want to automate running a GUI program on it. The program itself is fully automated but cannot run in session 0.

The basic idea is to use a scheduled task to run the program in the interactive logon session. PsExec will not work, because a process started by an NT service runs in session 0 and thus no GUI will show up. To ensure the program can be started after the scheduled task is created, the given user must be logged on to the computer, which can be arranged by automatic logon.
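
As a quick sanity check, the test program (or a small probe) can tell from managed code which session it is running in; the following is just a sketch:

    using System;
    using System.Diagnostics;

    class SessionCheck
    {
        static void Main()
        {
            // Session 0 hosts services and has no interactive desktop;
            // an interactive logon session is 1 or higher.
            int session = Process.GetCurrentProcess().SessionId;
            Console.WriteLine(session == 0
                ? "Running in session 0 - no GUI will be visible."
                : "Running in interactive session " + session + ".");
        }
    }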

Step 1: Turn on automatic logon

Run the following commands to change the registry values on ZY to turn on autologon:

reg add "\\zy\HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /d [MyUserName] /f
reg add "\\zy\HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /d [MyPassword] /f
reg add "\\zy\HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultDomainName /d [MyDomain] /f
reg add "\\zy\HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /d 1 /f

For more information please read http://support.microsoft.com/kb/315231

Step 2: Create scheduled task

Suppose the program we want to run is C:\Windows\System32\notepad.exe. The credentials (which must be identical to those in step 1) are CONTOSO\MyUser with password MyPassword. Then run the following command to create a task:

schtasks /Create /S zy /U CONTOSO\MyUser /P MyPassword /RU CONTOSO\MyUser /RP MyPassword /SC once /TN "notepad" /TR C:\Windows\System32\Notepad.exe /ST 14:50 /IT /F

Note that:

  • /ST must specify a future time. The exact start time does not matter, since we will use “schtasks /run” to start the task.
  • If the current user is the same as the one used on the remote computer where we want to run the command, the /U and /P options can be omitted.
  • It is important to use /IT so that the command runs interactively in the session of the logged-on user.

Step 3: Reboot the machine

There are several ways to do this, for instance:

shutdown /r /m \\zy /t 0

Step 4: Check whether the computer has rebooted and the user has logged on

This can be done by checking whether there are processes running in the console session (session 1) or other user sessions (session > 1) under the given user name:

tasklist /s zy /u CONTOSO\MyUser /p MyPassword /fi "session gt 0" /fi "username eq MyUser"

If explorer.exe and dwm.exe are listed, the login has completed.
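
If polling from managed code is preferred, here is a rough sketch; it assumes the remote process information on ZY is reachable (Process.GetProcessesByName with a machine name relies on the remote performance counters), and unlike the tasklist filter above it does not check which user owns the process:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class WaitForLogon
    {
        static void Main()
        {
            // Poll until explorer.exe shows up on the remote machine,
            // which indicates an interactive logon has completed.
            while (Process.GetProcessesByName("explorer", "zy").Length == 0)
            {
                Console.WriteLine("Waiting for the interactive logon on ZY...");
                Thread.Sleep(5000);
            }
            Console.WriteLine("Logon detected.");
        }
    }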

Step 5: Run the task

We don’t have to wait until the scheduled start time; instead we can run the task immediately:

schtasks /run /S zy /U CONTOSO\MyUser /P MyPassword /I /TN "notepad"

Note that the task name in /TN should be the same as the name in step 2.

Step 6: Delete the task

Run the following command to delete the scheduled task:

schtasks /delete /S zy /U CONTOSO\MyUser /P MyPassword /TN "notepad" /F

Step 7: Disable the autologon

Refer to the commands in step 1: use “reg delete” to remove the DefaultPassword value, and use “reg add” to set AutoAdminLogon back to 0.

Bypass access restrictions in managed code

For some reason I need to access a private field of a class in a system library. .NET provides a way to do it. I think it is cool to share, although it is not a good coding practice to use on a regular basis.

Hypothetically, suppose we want to construct an instance of System.ConfigNode. This class is an internal class in mscorlib, and its constructor is internal too, so we cannot just use it as usual.

Reflection

Fortunately we have System.Reflection. First we load the assembly and find the type:

    using System.Reflection;

    var assembly = Assembly.Load("mscorlib, Version=2.0.0.0");
    var configNodeType = assembly.GetType("System.ConfigNode");

Then we can construct an instance:

    var configNode = configNodeType.InvokeMember( 
                "", 
                BindingFlags.CreateInstance | BindingFlags.NonPublic | BindingFlags.Instance, 
                null, 
                null, 
                new object[] { "SampleConfigNode", null });

or

    var configNode = Activator.CreateInstance( 
        configNodeType, 
        BindingFlags.Instance | BindingFlags.NonPublic, 
        null, 
        new object[] { "SampleConfigNode", null }, 
        null);

Now we can change a private field:

    configNodeType.InvokeMember(
        "m_value",
        BindingFlags.NonPublic | BindingFlags.SetField | BindingFlags.Instance,
        null,
        configNode,
        new object[] { "new value" });

or call a private method:

    configNodeType.InvokeMember( 
        "AddAttribute", 
        BindingFlags.NonPublic | BindingFlags.InvokeMethod | BindingFlags.Instance, 
        null, 
        configNode, 
        new object[] { "some key", "some value" }); 
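
Reading a private field back works the same way, with BindingFlags.GetField:

    var value = configNodeType.InvokeMember(
        "m_value",
        BindingFlags.NonPublic | BindingFlags.GetField | BindingFlags.Instance,
        null,
        configNode,
        null);   // no arguments are needed to read a field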

BTW, the format for the code is horrible. I need to find a better way to handle this.

Thought on managed code injection and interception

Sometimes code injection/interception is useful for legitimate reasons. One scenario is fault injection, where we want to introduce faults into certain code paths, in particular exception-handling paths that might rarely be reached otherwise. Another scenario is changing the behavior of .NET system libraries. Recently I was wondering whether I could use an alternative app.config to override the appSettings section of an application. I have the source code, but it would be cool if there were a non-intrusive way to do this. In both scenarios, code injection is a viable technique to solve the problem quickly.

For native code, Microsoft Detours is a very useful and popular tool for intercepting Win32 APIs. It rewrites the in-memory code of the target function with custom code, and preserves the original code before the instrumentation. By doing this it is possible to extend a function or completely change its behavior.

For managed code, I am not aware of a tool similar to Detours. A few websites provide clues to solving this problem. The most notable one is A More Complete DLL Injection Solution Using CreateRemoteThread on The Code Project, and a blog article by Damian. It is similar to, or in some sense an extension of, Three Ways to Inject Your Code into Another Process. Basically this approach (an essentially native one) works as follows, with a sketch after the list:

  1. Opens the target process and gets a HANDLE (OpenProcess).
  2. Allocates virtual memory in the target process for storing the file name of the DLL to be injected (VirtualAllocEx).
  3. Writes the DLL file name into that memory in the target process (WriteProcessMemory).
  4. Starts a thread in the target process that calls LoadLibrary on the DLL name (CreateRemoteThread).
  5. The DllMain function takes control and does the appropriate work, including attaching to the first .NET application domain in the process and loading a managed assembly.
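
The following is a rough C# sketch of these steps using P/Invoke. It is illustrative only: error handling is omitted, and in the scenario above the injected DLL is a native one whose DllMain then bootstraps the managed code.

    using System;
    using System.Runtime.InteropServices;
    using System.Text;

    static class DllInjector
    {
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr OpenProcess(uint desiredAccess, bool inheritHandle, int processId);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr VirtualAllocEx(IntPtr process, IntPtr address, IntPtr size, uint allocationType, uint protect);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool WriteProcessMemory(IntPtr process, IntPtr baseAddress, byte[] buffer, IntPtr size, out IntPtr bytesWritten);

        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern IntPtr GetModuleHandle(string moduleName);

        [DllImport("kernel32.dll", CharSet = CharSet.Ansi, ExactSpelling = true, SetLastError = true)]
        static extern IntPtr GetProcAddress(IntPtr module, string procName);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr CreateRemoteThread(IntPtr process, IntPtr threadAttributes, IntPtr stackSize, IntPtr startAddress, IntPtr parameter, uint creationFlags, out uint threadId);

        const uint PROCESS_ALL_ACCESS = 0x001F0FFF;
        const uint MEM_COMMIT = 0x1000, MEM_RESERVE = 0x2000, PAGE_READWRITE = 0x04;

        public static void Inject(int processId, string dllPath)
        {
            // 1. Open the target process and get the HANDLE.
            IntPtr process = OpenProcess(PROCESS_ALL_ACCESS, false, processId);

            // 2. Allocate memory in the target process for the DLL file name.
            byte[] path = Encoding.Unicode.GetBytes(dllPath + "\0");
            IntPtr remoteBuffer = VirtualAllocEx(process, IntPtr.Zero, (IntPtr)path.Length,
                                                 MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);

            // 3. Write the DLL file name into the allocated memory.
            IntPtr bytesWritten;
            WriteProcessMemory(process, remoteBuffer, path, (IntPtr)path.Length, out bytesWritten);

            // 4. Start a remote thread whose entry point is LoadLibraryW and whose
            //    argument is the buffer holding the DLL file name.
            IntPtr loadLibrary = GetProcAddress(GetModuleHandle("kernel32.dll"), "LoadLibraryW");
            uint threadId;
            CreateRemoteThread(process, IntPtr.Zero, IntPtr.Zero, loadLibrary, remoteBuffer, 0, out threadId);

            // 5. DllMain of the injected DLL now runs inside the target process and can
            //    bootstrap the managed code (e.g. attach to a domain and load an assembly).
        }
    }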

It is also possible to write the code directly into the process memory and start a thread from there. Nevertheless it is a lot of work, and many things can go wrong. Since WriteProcessMemory and CreateRemoteThread are designed for debugging native applications, there must be a better way to do this. By accident I saw a project on http://www.codeplex.com/ named TestApi, developed by fellow MSFTees. It has a fault injection engine for managed code, and its approach is much more elegant. Essentially it leverages the CLR profiling interface to perform code injection/interception. At a high level the approach is as follows:

  1. Gets the ICLRProfiling COM interface from mscoree.dll and calls its AttachProfiler method to attach a custom profiler to the target process. The profiler is an in-proc COM server implementing the ICorProfilerCallback interface.
  2. In the callback, two methods are interesting: JITCompilationStarted and JITCompilationFinished. The former notifies the profiler that a function is about to be compiled. As the documentation says, at this point it is possible to replace the MSIL (Microsoft intermediate language) code for the method by calling SetILFunctionBody.
  3. The methods for getting and setting the IL function body are on the ICorProfilerInfo interface.
  4. To translate the FunctionID into the parameters required by SetILFunctionBody, one can use GetFunctionInfo to obtain the module and method IDs.

More detailed information on profiler attach and detach is provided on MSDN at this page. There is also a TechNet article: Rewrite MSIL Code on the Fly with the .NET Framework Profiling API. The MSDN Archive also provides sample code to attach a profiler: CLR V4 Profiling API Attach Trigger Sample. At some point, I might give it a try and see how it works.

Scenarios, features, and functional requirements

In the last post, Shift focus to scenario testing, I described what a scenario is. In this post, I would like to think more about features and functional requirements.

The Institute of Electrical and Electronics Engineers (IEEE) has several standards on software documentation. Among these standards, IEEE 829-1998 “IEEE Standard for Software Test Documentation” specifies the form of documents for the various stages of software testing:

  • Test plan
  • Test design spec
  • Test case spec
  • Test procedure spec
  • Test item transmittal report
  • Test log
  • Test incident report
  • Test summary report

In this standard, the term feature is defined as “a distinguishing characteristic of a software item (e.g., performance, portability, or functionality).” Oftentimes, features refer to the functional capabilities of a program and are then also called functions. In the context of a functional requirement, which describes the functions of a software program and its components, a function refers to inputs, behaviors, and outputs in a defined context. Functional requirements describe the functionality that a system is supposed to accomplish, for instance, “this program can do blah.” There are also non-functional requirements, which often specify overall characteristics and impose certain constraints on the system in terms of performance, portability, security, reliability, accessibility, etc.

In software testing, features and functional requirements are the basis of the component level testing. In the software development process, the requirements are gathered from users and stakeholders, the high-level architecture of the product is defined, the end-to-end scenarios are compiled, then the features are proposed to support those scenarios, and the functional requirements to be implemented are derived to ensure those scenarios can be performed by the users.

Obviously a feature or a functional requirement can serve multiple scenarios. Also, it is important to always put them in the context of an end-to-end scenario, understand why they are needed, and track them throughout the development life cycle. This is necessary to properly evaluate the customer experience. In many product groups that I know of, engineers categorize scenarios by their priorities:

  • P0: cannot ship without them
  • P1: must have
  • P2: nice to have

Each scenario is further broken down into features/requirements by priority. The idea is that, approaching the end of the development cycle, if the team lacks the resources to complete all features, low-priority features will be cut and low-priority scenarios may be dropped. Personally I have not seen any team that does not lack resources; as a consequence, many if not all P2 features are cut, and P2 bugs discovered late get postponed to the next release. In other words, the product works, but the bells and whistles are gone. There is also the practice of analyzing features/requirements and adding them to the system in isolation, without the context of a meaningful customer scenario. Engineers are passionate about delivering a “feature-rich” product, while the customers feel confused and show little appreciation.

We must change this. A good tester needs to put things in context and represent customers’ interests at all times. Ultimately, software testing is not just exercising the product in different ways and finding as many bugs as possible. The real purpose is to evaluate the customer experience and make an appropriate tradeoff among quality, cost, and time to market. In the new era of software testing, the test team needs to push quality both upstream and downstream, in order to delight our customers and grow our business.