Testing Code Access Security

By Dan Appleman

One of the most serious flaws of COM and API-based software development is that once you allow a component to run on your system, it has unrestricted permission to do anything. That's why viruses are such a problem - once they are on your system, there is little to protect you from their actions.

With .NET, Microsoft has introduced something called code access security (CAS). Code access security allows fine-grained control over what code is allowed to do. For example: .NET software downloaded from the Internet can't make API calls, can't access your registry - can't even read and write files! Even software running on a network share can't make API calls by default.

In the long term, this is likely to bring about a return to thick-client applications: Why limit yourself to a Web page interface when you can, with one click, safely download and run a rich Windows Forms-based application? It's safe because the .NET runtime allows that program to perform only operations that you permit. In the short term, it means you should already be developing with code access security in mind, if only to make sure your program or component will run in an Intranet environment.

Developing with Security
Now turn this around and look at it from the developer's perspective.

When developing COM or API-based applications, you knew that once your software was installed on a system, it had access to the full resources of that system. Yes, you may have had to worry about security with regard to the account running the software, but only some system resources were protected in that manner.

But with .NET, this changes. It is very possible that the system on which your software is running will restrict what your program can do. When a program is restricted in this manner, it is said to be running in partial trust. Nowadays most .NET applications require full trust to run. This means that they either have to be installed on the local system, or the local system must set its policy to allow them to run. But this is a limited solution, as most clients won't want to change their policies (and may not know how to do so). And if they do, they will want to grant the minimum rights required for your program to run - not full trust (because not only does full trust grant your program the right to do anything on their system, it potentially allows other programs to use your program maliciously to invade their system if you have flaws in your security programming).

To address this, you can take two approaches (or a combination of both):
1.  You can determine what privileges your application actually needs to run, then request only those permissions - notifying the client of the minimum permission set needed and having them configure their system accordingly (a declarative permission request is sketched just after this list).
2.  You can detect every place in your program where a potential code access security failure can occur, and add appropriate error handling so your code can degrade gracefully rather than crashing with a security exception.
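For the first approach, the .NET Framework lets an assembly declare its permission requirements up front with assembly-level security attributes. The following is a minimal sketch of such a declaration; the specific permissions are illustrative, not a recommendation for any particular application:

Imports System.Security.Permissions

' Request only what the assembly genuinely needs; the runtime refuses to load
' the assembly at all if policy cannot grant the minimum set.
<Assembly: SecurityPermission(SecurityAction.RequestMinimum, Execution:=True)>
<Assembly: UIPermission(SecurityAction.RequestMinimum, Window:=UIPermissionWindow.SafeTopLevelWindows)>
' Permissions the assembly can degrade without go into RequestOptional;
' once RequestOptional is used, anything not requested is implicitly refused.
<Assembly: EnvironmentPermission(SecurityAction.RequestOptional, Read:="USERNAME")>

The second approach - catching security exceptions and degrading gracefully - is illustrated later in this article.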

If you are creating a component to be used by other applications, you can get even more sophisticated. You can create highly trusted or fully trusted components that can be called by lower trusted applications (such as those downloaded from the Internet) and provide those applications with capabilities that would normally be disallowed. To do so you also need to identify every place in your component where a security failure can occur.
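One detail worth knowing if you take this route: in the .NET Framework, a strong-named, fully trusted library cannot be called from partially trusted code at all until it explicitly opts in with the assembly-level attribute shown below. Opting in is precisely what makes the auditing described above so important:

Imports System.Security

' Opt this strong-named, fully trusted library in to being callable from
' partially trusted assemblies. Without this attribute, such calls fail with
' a SecurityException before your code even runs.
<Assembly: AllowPartiallyTrustedCallers()>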

Determining CAS Requirements
When we recently started work on a new line of software (www.5MinuteSoftware.com), we wanted to make sure that our components worked well with code access security. Some of the components are intended to run in full trust but can be called by downloaded applications. Others are intended to run in partial trust, and we wanted to know exactly which security permissions were required for them to run.

Unfortunately, figuring out which security permissions a .NET assembly needs turned out to be a great challenge. It isn't enough to just try a program in the Internet or an intranet zone, because the definition of those zones can vary from machine to machine and between versions of .NET. In addition, users can create their own zone and assign software to it at will.

To address this we created a tool that allowed us to test a .NET assembly under any number of configurations. Not only did we want to test policies such as Internet or intranet, we wanted to test individual permissions. For example, what happens when you deny permission to make API calls? What happens when you deny permission to write to the system directory? This tool evolved into CAS/Tester, an automatic code access security testing tool. CAS/Tester is a runtime tool - it literally hosts your .NET assembly or application and executes it multiple times in any way you define. Each test is done under a different security context, which you also can define.
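The internals of CAS/Tester aren't described here, but the underlying capability - executing an assembly under a security context other than the one it would normally receive - is available to any .NET 1.x host. A rough sketch, using a placeholder name for the assembly under test:

Imports System.Security
Imports System.Security.Policy

Module ZoneHostSketch
    Sub Main()
        ' Build evidence that makes the target assembly look as though it came
        ' from the Internet zone, then run it in a separate AppDomain so it
        ' receives only that zone's permissions from the local policy.
        Dim internetEvidence As New Evidence()
        internetEvidence.AddHost(New Zone(SecurityZone.Internet))

        Dim testDomain As AppDomain = AppDomain.CreateDomain("InternetTest")
        ' "TestTarget.exe" is a placeholder for the assembly under test.
        testDomain.ExecuteAssembly("TestTarget.exe", internetEvidence)
        AppDomain.Unload(testDomain)
    End Sub
End Module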

A CAS/Tester Case Study
Because CAS/Tester was originally created for our own use, it's no surprise that our first case study is our own. Our new Desaware Licensing System provides machine-based licensing for .NET programs. An important feature of the product is the ability to license partially trusted code. This means that people will be able to create applications for download and execution via the Internet or an intranet, and still be able to license those applications on the target system. To do this we provide a client license component that itself must be installed in full trust on the client system. Once installed, it can perform the necessary licensing tasks even if the licensed program is not fully trusted.

To test this component, we simply ran CAS/Tester and instructed it to call the functions that install and verify licenses. CAS/Tester performed the default test, which consists of restricting over 80 individual permissions one by one. During the first test we discovered the error shown in Figure 1.

[Figure 1: The CAS/Tester report entry for the failure found during the first test]

The CAS/Tester report told us that when our assembly was denied access to the SerializationFormatter permission, a failure occurred. Because we were testing a debug build of the assembly, it proceeded to provide a stack trace indicating exactly where the error occurred. We were then able to go in and perform the necessary Assert to notify the runtime that this assembly can safely perform serialization in that particular function (making sure, of course, that there is no way someone can call that function in a way that would potentially be dangerous to the client system). After modifying the code, we ran the test again with the results shown in Figure 2.

[Figure 2: The CAS/Tester report after the code was modified]
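A minimal sketch of what such an Assert generally looks like - with hypothetical class and method names, and the serialization work elided - follows:

Imports System.Security
Imports System.Security.Permissions

Public Class LicenseStore
    Public Sub SaveLicenseState(ByVal state As Object)
        ' This assembly is installed in full trust, so it may assert the
        ' permission on behalf of partially trusted callers - but only for
        ' the duration of this carefully reviewed operation.
        Dim perm As New SecurityPermission(SecurityPermissionFlag.SerializationFormatter)
        perm.Assert()
        Try
            ' ... perform the serialization work here ...
        Finally
            ' Remove the assert as soon as the protected work is done.
            CodeAccessPermission.RevertAssert()
        End Try
    End Sub
End Class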

A Simple Example
You can try out another scenario for yourself. Consider a simple Windows application that at some point retrieves some environment information about the current system, either to display to the user or for some other purpose such as licensing or diagnostics. You might use code like this (or its C# equivalent) in a form load event:

lblInfo.Text = "User: " & Environment.UserName & ControlChars.CrLf & _
               "Domain: " & Environment.UserDomainName & ControlChars.CrLf & _
               "Machine: " & Environment.MachineName

This is fairly innocuous code, and it's hard to imagine any scenario where it would not work. What could be more basic than reading environment variables?

When you run a CAS/Tester full report on this, you might be surprised by the results: 13 security errors occur. Some of them are obvious - the tests that deny UI permission all fail. But you'll also find that the SecurityPermission and EnvironmentPermission tests fail, as shown in the report (see Table 1).

[Table 1: CAS/Tester report entries for the failed SecurityPermission and EnvironmentPermission tests]

Worse, the default intranet zone test fails. In other words, a simple diagnostic feature that shows a few environment variables will cause your software to fail with an exception when run across a network share!

Now, you may not be concerned about every test or scenario, but it's not uncommon to run software across a network share, so you'll probably want to resolve this issue. A close look at the CAS/Tester report for the intranet zone test (see Table 2) shows the default permission set for that zone.

[Table 2: The CAS/Tester report for the default intranet zone test]
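If you want to confirm what your own machine's policy grants a particular zone, you can resolve it programmatically; this sketch is independent of CAS/Tester:

Imports System.Security
Imports System.Security.Policy

Module PolicySketch
    Sub Main()
        ' Ask the security system what permission set code coming from the
        ' local intranet zone would receive under the current policy.
        Dim intranetEvidence As New Evidence()
        intranetEvidence.AddHost(New Zone(SecurityZone.Intranet))
        Dim grant As PermissionSet = SecurityManager.ResolvePolicy(intranetEvidence)
        Console.WriteLine(grant.ToXml().ToString())
    End Sub
End Module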

In other words, in the intranet zone the default permission set lets you look at only the user name. The EnvironmentPermission test shown earlier told you what line of code the problem was on. This suggests how you might change the code (see Listing 1).
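Listing 1 isn't reproduced here, but a minimal sketch of the revised form load code - assuming the same lblInfo label as before, with a typical handler name - might look like this:

Private Sub Form1_Load(ByVal sender As Object, ByVal e As EventArgs) Handles MyBase.Load
    Try
        ' Full information - requires permission to read all three values.
        lblInfo.Text = "User: " & Environment.UserName & ControlChars.CrLf & _
                       "Domain: " & Environment.UserDomainName & ControlChars.CrLf & _
                       "Machine: " & Environment.MachineName
    Catch ex As System.Security.SecurityException
        Try
            ' The default intranet permission set allows reading only the user name.
            lblInfo.Text = "User: " & Environment.UserName
        Catch ex2 As System.Security.SecurityException
            ' No environment access at all - degrade gracefully instead of crashing.
            lblInfo.Text = "Insufficient permission to display environment information."
        End Try
    End Try
End Sub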

If the first attempt to load the label control fails, the program tries just the user name. If that fails, the program displays a notice that there is insufficient permission to display the requested information. You could just disable this feature entirely, but there are a number of other approaches you could take. You could determine the current settings and request only information that you know you are allowed to view. You could perform a security demand before attempting a secured operation. But this is a fairly straightforward solution that has a minimal performance impact when run in full trust.
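For example, the up-front demand mentioned above could be wrapped in a small helper like this one (the helper name and the environment variable it checks are illustrative):

' Returns True only if everyone on the current call stack is allowed to read
' the USERDOMAIN environment variable; the UI can then skip that field
' instead of failing with a SecurityException.
Private Function CanReadDomainName() As Boolean
    Try
        Dim perm As New System.Security.Permissions.EnvironmentPermission( _
            System.Security.Permissions.EnvironmentPermissionAccess.Read, "USERDOMAIN")
        perm.Demand()
        Return True
    Catch ex As System.Security.SecurityException
        Return False
    End Try
End Function

Either way, the goal is the same: fail politely rather than with an unhandled SecurityException.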

A second CAS/Tester test on this modified code shows that the application now runs in the intranet zone, and no longer raises SecurityPermission- or EnvironmentPermission-based errors.

This illustrates a common approach for using CAS/Tester during the development cycle. It's an iterative process where you run a complete test, then modify the code wherever an error is detected that concerns you. Some errors you won't care about - in this case if your program doesn't have permission to bring up a form, you want the security exception to occur. But used properly, CAS/Tester will not only help you build security handling into your code, it will give you confidence as to how your code will behave under virtually every client scenario.

Conclusion
CAS/Tester was created primarily as a development tool - to help developers find and address potential security errors in assemblies that either run in partial trust or are called by other partially trusted applications. However, we've found it also excels as a QA tool. The CAS/Tester output is actually an XML file - the report you see here is an XSL transform of the output (and yes, you can customize the XSL to develop your own reports). This makes it easy to integrate CAS/Tester into a QA program, to document tests, and to perform regression tests to make sure that later updates don't change the security requirements of a .NET assembly.
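For example, applying a customized XSL to the raw output can be scripted in a few lines; the file names below are placeholders:

Imports System.Xml.Xsl

Module ReportSketch
    Sub Main()
        ' Transform the raw CAS/Tester XML output into a custom report.
        ' Both file names are placeholders.
        Dim stylesheet As New XslTransform()
        stylesheet.Load("MyReport.xsl")
        stylesheet.Transform("CasTesterOutput.xml", "Report.html")
    End Sub
End Module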

More Stories By Dan Appleman

Daniel Appleman is president of Desaware, Inc., a developer of add-on products and components for Microsoft Visual Studio. He is a cofounder of Apress, and the author of numerous books, including Moving to VB.NET: Strategies, Concepts and Code; How Computer Programming Works; Dan Appleman's Visual Basic Programmer's Guide to the Win32 API; Always Use Protection: A Teen's Guide to Safe Computing; and a series of e-books on various technology and security topics. (A complete list can be found at www.desaware.com.)

Comments
Peter da Silva 10/28/03 12:47:44 PM EST

Remember that this was Microsoft's big argument about Java... running code in a sandbox was unnecessary because you could sign applets so you know where they come from, and you could avoid the overhead of the sandbox.

Now "And if they do, they will want to grant the minimum rights required for your program to run - not full trust (because not only does full trust grant your program the right to do anything on their system, it potentially allows other programs to use your program maliciously to invade their system if you have flaws in your security programming)."

Man, this is half the reason we banned IE and Outlook and related applications here back in the mid '90s. That, and the way Microsoft forces the HTML control to guess as to whether to trust a document based on where it thinks the application that presented it got it from. The latest variant of that just showed up on my desk... apparently window.open("file:javascript:eval('code')"); is allowed to write to %systemroot%. Wheee!

Well, they've got a sandbox now... albeit one that seems to take Sun's kinda scary code analysis approach just that little bit further. Now they just need to make it mandatory for anything that can't be reliably traced back to known safe content. One of these days maybe we can allow IE for non-intranet content...