This is a note on how to ensure that the app.config file is copied along with the assembly of a referenced project in a managed project.
Problem: suppose we have two C# projects, A and B, and project A is referenced by project B.
Project A
  app.config
  …
  bin
    A.dll
    A.dll.config
Project B
  app.config
  …
  bin
    B.dll
    B.dll.config
    A.dll
When project B is built, project A's assembly will be copied to B's output path, but the app.config (A.dll.config) will not follow.
Solution: build the project with MSBuild using the options “/fl /v:d”, read MSBuild.log, and find where the output of project A is copied to the output path of project B. We can see that it is the following parameters and target in the file %windir%\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets that control what is copied and how:
<!--
These are the extensions that reference resolution will consider when looking for files related
to resolved references. Add new extensions here if you want to add new file types to consider.
-->
<AllowedReferenceRelatedFileExtensions Condition=" '$(AllowedReferenceRelatedFileExtensions)' == '' ">
.pdb;
.xml
</AllowedReferenceRelatedFileExtensions>
...
<ResolveAssemblyReference
Assemblies="@(Reference)"
AssemblyFiles="@(_ResolvedProjectReferencePaths);@(_ExplicitReference)"
TargetFrameworkDirectories="@(_ReferenceInstalledAssemblyDirectory)"
InstalledAssemblyTables="@(InstalledAssemblyTables);@(RedistList)"
IgnoreDefaultInstalledAssemblyTables="$(IgnoreDefaultInstalledAssemblyTables)"
IgnoreDefaultInstalledAssemblySubsetTables="$(IgnoreInstalledAssemblySubsetTables)"
CandidateAssemblyFiles="@(Content);@(None)"
SearchPaths="$(AssemblySearchPaths)"
AllowedAssemblyExtensions="$(AllowedReferenceAssemblyFileExtensions)"
AllowedRelatedFileExtensions="$(AllowedReferenceRelatedFileExtensions)"
...
<Output TaskParameter="CopyLocalFiles" ItemName="ReferenceCopyLocalPaths"/>
...
</ResolveAssemblyReference>
Therefore the solution is to put the following lines in the .csproj file of project B, at the beginning of a <PropertyGroup>:
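(A sketch: the override keeps the default .pdb and .xml extensions and adds .dll.config, so that ResolveAssemblyReference treats A.dll.config as a file related to A.dll.)
<AllowedReferenceRelatedFileExtensions>
  .pdb;
  .xml;
  .dll.config
</AllowedReferenceRelatedFileExtensions>
With this in place, building project B should copy A.dll.config next to A.dll in B's output path.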
This is a note on how to automate the remote execution of a GUI program on a test machine.
Note: you should not do this on a non-test machine, because autologon may pose a security risk.
Scenario: there is a test machine “ZY” with Windows Server 2008 R2 installed, and I want to automate the process of running a GUI program on it. The program itself is fully automated but cannot run in session 0.
The basic idea is to use a scheduled task to run the program in the interactive logon session. PsExec will not work, because a process started by an NT service from session 0 will also run in session 0, and thus no GUI will show up. To ensure the program can be started after the scheduled task is created, the given user must be logged on to the computer, which can be done with automatic logon.
Step 1: Turn on automatic logon
Run the following commands to change the registry values on ZY to turn on autologon:
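(A sketch, using the CONTOSO\MyUser credentials from step 2. Run the commands in an elevated prompt on ZY, or prefix the key with \\ZY\ to set the values remotely.)
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultDomainName /t REG_SZ /d CONTOSO /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d MyUser /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d MyPassword /f
Note that the password is stored in the registry in plain text, and a reboot (or logoff) is needed before the autologon takes effect.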
For more information please read http://support.microsoft.com/kb/315231
Step 2: Create scheduled task
Suppose the program we want to run is C:\Windows\System32\notepad.exe. The credentials (which must be identical to those used in step 1) are CONTOSO\MyUser with password MyPassword. Then run the following command to create the task:
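(A sketch; the task name RunNotepad is just an example, and the /it switch makes the task run in the interactive session of the logged-on user.)
schtasks /create /s ZY /tn RunNotepad /tr C:\Windows\System32\notepad.exe /sc once /st 00:00 /ru CONTOSO\MyUser /rp MyPassword /it /f
The task can then be started on demand with:
schtasks /run /s ZY /tn RunNotepad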
For some reason I need to access a private field of a class in the system library, and .NET provides a way to do it. I think it is cool to share, although it is not a good coding practice and not something we should use on a regular basis.
Hypothetically, suppose we want to construct an instance of System.ConfigNode. This class is an internal class in mscorlib, and its constructor is internal too, so we cannot just use it as usual.
Fortunately we have System.Reflection. Firstly we load the assembly and find the type:
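(A minimal sketch; ConfigNode lives in mscorlib, the same assembly that contains System.Object.)
using System;
using System.Reflection;

// mscorlib is already loaded; grab it via a well-known type it contains.
Assembly mscorlib = typeof(object).Assembly;
Type configNodeType = mscorlib.GetType("System.ConfigNode", true);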
Then we can construct an instance, in either of two ways:
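(A sketch of both variants. I am assuming here that ConfigNode has an internal constructor taking a node name and a parent node; the real signature can be inspected with configNodeType.GetConstructors(BindingFlags.Instance | BindingFlags.NonPublic).)
// Variant 1: find the internal constructor explicitly and invoke it.
ConstructorInfo ctor = configNodeType.GetConstructor(
    BindingFlags.Instance | BindingFlags.NonPublic,
    null, new[] { typeof(string), configNodeType }, null);
object node = ctor.Invoke(new object[] { "root", null });

// Variant 2: let Activator.CreateInstance locate the non-public constructor.
object child = Activator.CreateInstance(configNodeType,
    BindingFlags.Instance | BindingFlags.NonPublic,
    null, new object[] { "child", node }, null);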
Now we can change a private field, or call a private method:
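(Another sketch; the member names m_value and AddChild are hypothetical and only stand in for whatever private members the type actually has.)
// Set a private instance field (field name is hypothetical).
FieldInfo field = configNodeType.GetField("m_value",
    BindingFlags.Instance | BindingFlags.NonPublic);
field.SetValue(node, "new value");

// Invoke a private/internal instance method (method name is hypothetical).
MethodInfo method = configNodeType.GetMethod("AddChild",
    BindingFlags.Instance | BindingFlags.NonPublic);
method.Invoke(node, new object[] { child });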
BTW, the format for the code is horrible. I need to find a better way to handle this.
Sometimes code injection/interception can be useful for legitimate reasons. One scenario is fault injection, where we want to introduce faults into certain code paths, in particular exception-handling paths which might rarely be reached otherwise. Another scenario is changing the behavior of the .NET system libraries. Recently I was wondering whether I could use an alternative app.config to override the appSettings section of an application. I have the source code, but it would be cool if there were a non-intrusive way to do this. In both scenarios, code injection is a viable technique to solve the problem quickly.
For native code, Microsoft Detours is a very useful and popular tool for intercepting Win32 APIs. It rewrites the in-memory code of the target function with custom code, and preserves the original code from before the instrumentation. By doing this it is possible to extend a function or completely change its behavior. To get a DLL containing such custom code into a running process, the classic injection sequence is:
Opens the target process and gets a HANDLE to it (OpenProcess).
Allocates virtual memory in the target process to hold the file name of the DLL to be injected (VirtualAllocEx).
Writes the DLL file name into that virtual memory in the target process (WriteProcessMemory).
Starts a thread in the target process which calls LoadLibrary on the DLL name (CreateRemoteThread).
The DllMain function then takes control and does the appropriate things, including attaching to the first .NET domain in the process and loading a managed assembly.
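A minimal sketch of that sequence in C (error handling omitted; pid and dllPath are placeholders supplied by the caller):
#include <windows.h>
#include <string.h>

// Injects the DLL at dllPath into the process identified by pid.
// Minimal sketch: real code must check every return value.
BOOL InjectDll(DWORD pid, const char *dllPath)
{
    HANDLE hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    SIZE_T size = strlen(dllPath) + 1;

    // Reserve memory in the target process and copy the DLL path into it.
    LPVOID remote = VirtualAllocEx(hProcess, NULL, size,
                                   MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    WriteProcessMemory(hProcess, remote, dllPath, size, NULL);

    // LoadLibraryA has the same shape as a thread start routine, so the
    // remote thread simply loads the DLL; its DllMain then takes over.
    HANDLE hThread = CreateRemoteThread(hProcess, NULL, 0,
        (LPTHREAD_START_ROUTINE)GetProcAddress(
            GetModuleHandleA("kernel32.dll"), "LoadLibraryA"),
        remote, 0, NULL);

    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);
    VirtualFreeEx(hProcess, remote, 0, MEM_RELEASE);
    CloseHandle(hProcess);
    return TRUE;
}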
It is also possible to write code directly into the process memory and start a thread from there. Nevertheless it is a lot of work, and many things can go wrong. Since WriteProcessMemory and CreateRemoteThread are designed for debugging native applications, there must be a better way to do this. By accident I saw a project on http://www.codeplex.com/ named TestApi, developed by fellow MSFTees. It has a fault injection engine for managed code, and its approach is much more elegant. Essentially it leverages the CLR profiling interface to perform code injection/interception. At a high level, the approach is:
Gets the ICLRProfiling COM interface from mscoree.dll and calls its AttachProfiler method to attach a custom profiler to the target process. The profiler is an in-proc COM server implementing the ICorProfilerCallback interface.
In the callback, two methods are interesting: JITCompilationStarted and JITCompilationFinished. The former notifies the profiler that a function is about to be compiled. As the documentation says, at this point it is possible to replace the MSIL (Microsoft intermediate language) code of the method by calling SetILFunctionBody.
The functions for getting and setting the IL function body are on the ICorProfilerInfo interface.
To translate the FunctionID into the parameters required by SetILFunctionBody, one can use GetFunctionInfo to get the module ID and the method's metadata token.
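A sketch of what the interesting callback might look like in such a profiler (the MyProfiler class and its pInfo member holding ICorProfilerInfo are assumptions; error handling and the actual IL rewriting are omitted):
// Inside a custom profiler (an in-proc COM server implementing
// ICorProfilerCallback); pInfo is the ICorProfilerInfo pointer obtained
// in Initialize().
HRESULT STDMETHODCALLTYPE MyProfiler::JITCompilationStarted(
    FunctionID functionId, BOOL fIsSafeToBlock)
{
    ClassID  classId;
    ModuleID moduleId;
    mdToken  methodToken;
    pInfo->GetFunctionInfo(functionId, &classId, &moduleId, &methodToken);

    // Fetch the original method body...
    LPCBYTE originalIL = NULL;
    ULONG   originalILSize = 0;
    pInfo->GetILFunctionBody(moduleId, methodToken, &originalIL, &originalILSize);

    // ...build a replacement body (allocated via the IMethodMalloc returned
    // by GetILFunctionBodyAllocator) and hand it back to the CLR:
    // pInfo->SetILFunctionBody(moduleId, methodToken, pNewILMethodHeader);

    return S_OK;
}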
More detailed information on profiler attach and detach is provided on MSDN at this page. There is also an MSDN Magazine article: Rewrite MSIL Code on the Fly with the .NET Framework Profiling API. The MSDN Archive also provides sample code for attaching a profiler: CLR V4 Profiling API Attach Trigger Sample. At some point, I might give it a try and see how it works.
In the last post Shift focus to scenario testing, I described what a scenario is. In this post, I would like to think more on features and functional requirements.
The Institute of Electrical and Electronics Engineers (IEEE) has several standards on software documentation. Among them, IEEE 829-1998 “Standard for Software Test Documentation” specifies the form of the documents for the various stages of software testing:
Test plan
Test design spec
Test case spec
Test procedure spec
Test item transmittal report
Test log
Test incident report
Test summary report
In this standard, the term feature is defined as “a distinguishing characteristic of a software item (e.g., performance, portability, or functionality)”. Oftentimes, features refer to the functional capabilities of a program, and are then also called functions. In the context of functional requirements, which describe the functions of a software program and its components, a function refers to inputs, behaviors, and outputs in a defined context. Functional requirements describe the functionality that a system is supposed to accomplish, for instance, “this program can do blah”. There are also non-functional requirements, which often specify overall characteristics and impose certain constraints on the system in terms of performance, portability, security, reliability, accessibility, etc.
In software testing, features and functional requirements are the basis of component-level testing. In the software development process, the requirements are gathered from users and stakeholders, the high-level architecture of the product is defined, the end-to-end scenarios are compiled, then features are proposed to support those scenarios, and the functional requirements to be implemented are derived to ensure those scenarios can be performed by the users.
Obviously a feature or a functional requirement can serve multiple scenarios. It is also important to always put them in the context of an end-to-end scenario, understand why each is needed, and track it throughout the development life cycle. This is necessary to properly evaluate the customer experience. In many product groups that I know of, engineers categorize scenarios by priority:
P0: cannot ship without them
P1: must have
P2: nice to have
Each scenario is further broken down into features/requirements with their own priorities. The idea is that, approaching the end of the development cycle, if the team lacks the resources to complete all features, low-priority features will be cut and low-priority scenarios may be dropped. Personally I have not seen a team that does not lack resources; as a consequence many if not all P2 features are cut, and P2 bugs discovered late get postponed to the next release. In other words, the product works but the bells and whistles are gone. There is also the practice of analyzing features/requirements and putting them into the system on an isolated basis, without the context of a meaningful customer scenario. Engineers are passionate about delivering a “feature-rich” product, while the customers feel confused and show little appreciation.
We must change this. A good tester needs to put things in context and represent customers' interests at all times. Ultimately, software testing is not just about exercising the product in different ways and finding as many bugs as possible. The real purpose is to evaluate the customer experience and make an appropriate tradeoff among quality, cost, and time to market. In the new era of software testing, the test team needs to push quality both upstream and downstream, in order to delight our customers and grow our business.